Be accountable!
- Sexual content is often removed from platforms in ways that are arbitrary and inconsistent. Platforms should have an accessible appeals process for users whose content has been removed, and should be required to offer tailored and individualised reasons for each decision, as well as transparent and independent dispute resolution processes.
- Platforms ought to regularly record, explain and justify their content moderation processes and decisions, with clear and transparent policies for resolving conflict. They ought to make detailed, disaggregated data available and accessible to users, researchers and the public on a frequent basis. This should include data about flagging, complaints, decisions, recommender systems, suspensions and take-downs, as well as platform interpretations of what constitutes sensitive or borderline content (a sketch of what a per-decision transparency record might contain follows this list).
- Nuanced content moderation is a result of community buy-in (creators tagging and categorising their content without being penalised for it, and audiences knowing they can trust the tags). Platform moderators should have an in-depth understanding of the platform, be invested in the community, and be well trained, resourced, supported and compensated.
- Platforms should allow creators to categorise their own content and to challenge automated tags. They should afford users control over what they see and when, including opportunities to opt in to and out of specific content. Accurate tagging helps users manage their expectations when encountering, avoiding or searching for content. Disingenuous tagging for the purposes of advertising, attention hacking, trolling, etc., should be discouraged and moderated against (see the tagging and audience-controls sketch after this list).
- Platforms should make their algorithms transparent, explainable and available for scrutiny so that users can understand and evaluate how their sexual content is being ranked, labelled, organised and sorted. They should make their application programming interface (API) available to researchers and civil society for analysis (a sketch of what such a research endpoint might return follows this list).
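To make "detailed, disaggregated data" concrete, the sketch below shows one possible shape for a per-decision transparency record, plus a simple cross-tabulation by action type and detection route. Every name here (`ModerationRecord`, `detection`, `appeal_outcome`, the sample values) is an assumption for illustration, not any platform's actual reporting schema.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of a single moderation decision record; the fields are
# assumptions about what disaggregated transparency data could include.
@dataclass
class ModerationRecord:
    action_date: date           # when the action was taken
    action: str                 # e.g. "removal", "suspension", "age-gate", "downrank"
    detection: str              # e.g. "user_flag", "automated_classifier", "moderator"
    policy_cited: str           # the rule the platform says was breached
    appealed: bool              # whether the creator appealed
    appeal_outcome: str | None  # "upheld", "overturned", or None if no appeal

def disaggregate(records: list[ModerationRecord]) -> Counter:
    """Count actions broken down by action type and detection route --
    the kind of cross-tabulation a public transparency report could publish."""
    return Counter((r.action, r.detection) for r in records)

if __name__ == "__main__":
    sample = [  # illustrative sample records only
        ModerationRecord(date(2023, 5, 1), "removal", "automated_classifier",
                         "adult nudity", True, "overturned"),
        ModerationRecord(date(2023, 5, 2), "downrank", "user_flag",
                         "borderline / sensitive", False, None),
    ]
    for (action, detection), n in disaggregate(sample).items():
        print(f"{action} via {detection}: {n}")
```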
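The tagging and audience-controls recommendation can also be sketched minimally: creators or automated systems apply tags, creators can mark an automated tag as challenged, and viewers filter what they see through explicit opt-in and opt-out lists. The names (`Tag`, `ContentItem`, `Preferences`, `visible`) and the filtering rule are hypothetical, one possible design rather than an existing platform's model.

```python
from dataclasses import dataclass, field

@dataclass
class Tag:
    label: str                # e.g. "nudity", "kink", "sex education"
    applied_by: str           # "creator" or "automated"
    challenged: bool = False  # creator disputes an automated tag

@dataclass
class ContentItem:
    item_id: str
    tags: list[Tag] = field(default_factory=list)

@dataclass
class Preferences:
    opted_in: set[str] = field(default_factory=set)   # labels the viewer wants to see
    opted_out: set[str] = field(default_factory=set)  # labels the viewer never wants shown

def visible(item: ContentItem, prefs: Preferences) -> bool:
    """An opt-out always wins; tagged content is shown only if the viewer
    has opted in to every label on it; untagged content shows by default."""
    labels = {t.label for t in item.tags}
    if labels & prefs.opted_out:
        return False
    return labels <= prefs.opted_in

if __name__ == "__main__":
    item = ContentItem("clip-42", [Tag("nudity", "creator"),
                                   Tag("violence", "automated", challenged=True)])
    viewer = Preferences(opted_in={"nudity"})
    # False: the viewer never opted in to the disputed automated "violence" tag,
    # which is why creators need a route to challenge automated labels.
    print(visible(item, viewer))
```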
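Finally, a sketch of what a read-only research endpoint might return for a single ranked item, assuming the platform exposes the label applied, the item's rank and the signals that contributed to it. The endpoint path, field names, signal list and numbers are illustrative assumptions, not an existing API.

```python
import json

def explain_ranking(item_id: str) -> dict:
    """Return the signals a hypothetical recommender used to rank one item,
    so researchers can see how sexual content is being labelled and sorted."""
    return {
        "item_id": item_id,
        "label": "sensitive/borderline",   # how the classifier categorised it
        "rank_position": 148,
        "signals": {                       # per-feature contribution to the score
            "predicted_engagement": +0.42,
            "sensitive_content_penalty": -0.35,
            "account_strike_history": -0.05,
        },
        "policy_version": "2023-09",       # which ruleset produced the label
    }

if __name__ == "__main__":
    # Imagined as something like GET /research/v1/ranking/<item_id>,
    # served behind researcher credentials.
    print(json.dumps(explain_ranking("clip-42"), indent=2))
```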