Advancing Global Digital Content Safety

Advancing Global Digital Content Safety is a project focused on solutions to tackle the spread of harmful content online. Online content has the power to influence minds, incite action, and shape the fabric of society. The volume of material posted and shared on the internet has grown substantially, raising questions about how to reduce the spread of harmful content, particularly across social networks, search engines, streaming services, and other players within a layered internet ecosystem. This project aims to explore solutions that advance industry and regulatory progress on digital content safety. Building on the outcomes of previous and current initiatives on the topic, the analysis will be centered on three main workstreams:

Content Moderation: This workstream will look at the practices platforms currently use to define harmful content and act on it, highlighting best practices for categorizing, detecting, reporting on, and governing content.


  • What practices are currently in place to moderate content on major platforms? 

  • What balance between safety and free expression, whether implied or stated, do the content moderation decisions we have seen to date reflect?

  • What are the best practices for developing and executing the tools, processes, governance, and reporting necessary to moderate content effectively?

  • For harmful content with a clear definition, how can detection and removal be improved?

  • For content with a less clear definition of harm, how can decisions be made more transparently?

  • What independent auditing might be needed, and how would it function?

  • What metrics, if any, should be used to assess the performance of content moderation practices (see the illustrative sketch after this list)?

  • How can content moderation best practices be harmonized across the media ecosystem to enhance public accountability?
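
As one illustration of the metrics question above, the minimal sketch below computes three commonly discussed measures: prevalence (the share of content views that were of harmful content), the proactive detection rate, and the precision and recall of automated detection against human review labels. All figures and function names are hypothetical assumptions for illustration, not metrics endorsed by the project.

```python
# Illustrative sketch only: candidate metrics for assessing content
# moderation performance. All figures below are hypothetical.

def prevalence(harmful_views, total_views):
    """Share of all content views that were of harmful content."""
    return harmful_views / total_views

def proactive_rate(detected_by_systems, total_actioned):
    """Share of actioned content found before any user report."""
    return detected_by_systems / total_actioned

def precision_recall(true_pos, false_pos, false_neg):
    """Automated detection accuracy against human review labels."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

print(f"prevalence:     {prevalence(12_000, 10_000_000):.5f}")
print(f"proactive rate: {proactive_rate(9_400, 10_000):.1%}")
p, r = precision_recall(true_pos=880, false_pos=120, false_neg=200)
print(f"precision: {p:.1%}, recall: {r:.1%}")
```

A metric like prevalence captures what users actually see, while precision and recall capture the quality of automated detection; any harmonized reporting regime would likely need measures of both kinds.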

Regulation and Liability: This workstream will look at the current regulation of platforms globally, highlighting various approaches to assigning responsibility and liability for third-party content across social networks, search engines, and other internet companies.


  • Where does responsibility for addressing harmful content lie across the internet stack?

  • How do current liability laws (e.g., the EU e-Commerce Directive and Section 230 of the US Communications Decency Act) impact content on platforms?

  • How should social platforms be treated when it comes to content liability (on the spectrum from publisher to distributor)?

  • Is a two-tiered regulatory approach needed, and if so, how would it function effectively?

  • Should a fiduciary duty be imposed on platforms through regulation?

  • Should specific measures or targets (e.g., exposure levels) be enforced through regulation?

  • What are the most effective remedies to put in place if a company has violated regulations related to content on its platforms?

  • What should be self-governed vs regulated?

  • Given that regulations improving consumer safety may sometimes conflict with regulations improving privacy, how should regulation be coordinated to optimize for consumer well-being?

Business Model and Competition: This workstream will analyze the impact of engagement-driven business models, as well as the role of competition in addressing exposure to harmful content, while considering impacts on innovation and growth.


  • What is the role of increased competition in addressing exposure to harmful content?

  • Would increased competition be effective in reducing, at least in part, average exposure to harmful content, and how could this be modelled (see the toy model after this list)?

  • How do various consumer well-being goals (price, security, safety from harmful content, choice, privacy, etc.) need to be balanced here?

  • How do current business model practices, focused on maximizing user engagement to drive advertising revenue, impact the type of content that users see?

  • Are current business model practices incompatible with the long-term goals of gaining user trust and avoiding controversial content governance decisions? If so, what long-term strategic shifts could platforms make to maintain or grow profits whilst reducing dependence on advertising revenues (drawing on insights from the Value in Media project)?
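
As a starting point for the modelling question above, the sketch below is a deliberately minimal toy model, assuming average exposure can be treated as a market-share-weighted mean of per-platform harm rates. All shares, harm rates, and function names are hypothetical illustrations, not project data.

```python
# Toy model only: how increased competition might change average
# exposure to harmful content. All figures are hypothetical.

def average_exposure(market_shares, harm_rates):
    """Share-weighted mean of per-platform harmful-content exposure
    rates; shares must align with rates and sum to 1."""
    assert abs(sum(market_shares) - 1.0) < 1e-9
    return sum(s * h for s, h in zip(market_shares, harm_rates))

# Concentrated market: a dominant platform with a higher harm rate.
concentrated = average_exposure([0.85, 0.10, 0.05], [0.04, 0.01, 0.01])

# Fragmented market: the same platforms, but users have migrated
# toward the better-moderated ones.
fragmented = average_exposure([0.40, 0.35, 0.25], [0.04, 0.01, 0.01])

print(f"concentrated: {concentrated:.4f}")  # 0.0355
print(f"fragmented:   {fragmented:.4f}")    # 0.0220
```

Under these assumptions, competition lowers average exposure only insofar as users actually migrate to better-moderated platforms; if market share instead shifts toward platforms with weaker moderation, the same model yields higher exposure.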

The three workstreams will address harmful content within the following scope:

- Harms with a clear definition (e.g., child sexual exploitation)

- Harms with a less clear definition (e.g., disinformation)

- In light of COVID-19, a specific focus on health-related misinformation
