Product Team Guidelines

Responsible design starts with product teams and continues to be their ongoing responsibility throughout the product life cycle.

Why?

Product teams design, develop and deploy the AI technology that children and youth will use, and responsible design remains their responsibility throughout the product life cycle. These guidelines are designed to help you develop responsible AI products for children and youth.

Who?

The entire product team: developers, programme managers, technical writers, product owners, software architects, UX designers, marketing managers and anyone else with a hand in product development.

How?

Dive into the five categories of “Putting children and youth FIRST” – Fair, Inclusive, Responsible, Safe and Transparent. Each category is organized into three sections: goals, greatest potential for harm, and risk mitigation. Use these categories and resources as a starting point. Responsible AI is a journey, and you’ll want to form a diverse and dynamic team as you develop AI for children and youth.

  • Fair: Whenever data is collected, systems are engineered or products are sold, ethical obligations arise to be fair and honest, and to do good work and avoid harm. These obligations are all the more pressing when working with children and youth, who are among the most vulnerable members of society. Adults have a special responsibility to help them flourish and to shield them from harm. Technologies and systems powered by AI and ML could transform how people interact with each other, but they can also introduce bias, exclusion and unfairness for their users. With this potential for change and shift in power come requisite moral duties. As a result, designers, developers, maintainers, archivists and researchers of AI-driven tools for children and youth are urged to be mindful of the sensitivity and ethical ramifications of their work as they design fair AI systems.
  • Inclusive: Inclusion is an essential ingredient in people’s sense of emotional, psychological and physical safety. Humans are social animals who struggle to make progress on most developmental scales without a sense of community. All people crave a sense of belonging and are naturally attuned to feelings of inclusion and exclusion, and to the uncertainty that comes with them. Children and youth often lack the coping skills necessary to manage negative feelings of exclusion (real or perceived). By not focusing on building an inclusive experience, you may cause cognitive, emotional or social distress and harm. Feelings of exclusion can damage a child’s confidence, sense of self-worth and development.
    Technology teams may be inclined to equate inclusion with accessibility. The Smart Toy Awards, developed in collaboration with the World Economic Forum, define an accessible AI-powered toy as one that can be used by children with physical, mental and learning disabilities, including neurodiversity, as well as by children who speak languages other than English and come from other cultures. This type of inclusivity is as important as the emotional inclusivity already noted.
  • Responsible: The goal of this theme is to confirm that product teams have internalized their responsibilities towards the children and youth who use their products. This starts with product teams considering (and mitigating) the possibility that they may not have the skills or expertise to adequately evaluate the risks of introducing their AI-enabled product to children and youth. Next, product ideation, design, planning, development and testing should be grounded in age and development-stage appropriateness, with methods to test for appropriateness that reflect the latest learning science. Layered on this traditional product cycle should be considerations for the emotional, psychological and physical safety of the children and youth being targeted. Responsible AI design is an act of collaboration between the product team, its customers and their guardians, as well as experts in learning science, ethics, developmental psychology and other relevant research fields.
  • Safe: Psychosocial development is typically predicated on feelings of safety and security. Children and youth whose environments are chaotic, dangerous and unpredictable struggle to meet developmental milestones, including learning milestones, emotional regulation and bonding. This makes sense – the brain of a child at risk will allocate its precious resources to staying alive before acquiring the next developmental milestone. Whatever the severity of their experience, this is a steep price for children and youth to pay. Children can easily find themselves in harm’s way. Their underdeveloped prefrontal cortex means they are less able to predict consequences, are more impulsive, have less self-control and lack the experience to know when they are being manipulated. They will seek instant gratification and may have little interest in limiting screen time, purchases or interactions with online strangers. For these reasons, the product team must consider itself an ally of children’s and youth’s guardians, jointly taking responsibility for protecting all users of the technology from harm.
  • Transparent: Transparency in AI can take many forms. First, there are the clear disclaimers all products must deliver to customers based on local and state regulations. Product teams are encouraged to include the proposed AI labelling system, which is part of this toolkit, in each product – both on the physical packaging and accessible online through a QR code. Products with AI for children and youth should address the labelling system’s six categories, with explanations, to create transparency among the product, the buyer (parent/guardian or other adult) and the end user (child or youth); a sketch of how such a label might be represented follows this list.
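To make the labelling idea concrete, below is a minimal sketch of how a product team might model the label data behind the QR code. It is purely illustrative: the LabelCategory and AILabel types, the isLabelComplete helper and every field name are hypothetical, and the actual category names and required wording come from the toolkit’s AI labelling system.

    // Illustrative TypeScript sketch only: the real category names and wording
    // are defined by the toolkit's AI labelling system, not by this example.

    /** One of the six labelling categories, with a plain-language explanation. */
    interface LabelCategory {
      name: string;        // category name as defined by the labelling system
      explanation: string; // wording aimed at the buyer and the end user
    }

    /** Label manifest served at the URL encoded in the package's QR code. */
    interface AILabel {
      productName: string;
      labelUrl: string;            // online version of the label (QR target)
      categories: LabelCategory[]; // expected to hold exactly six entries
    }

    /** Hypothetical completeness check to run before a product ships. */
    function isLabelComplete(label: AILabel): boolean {
      return (
        label.categories.length === 6 &&
        label.categories.every(
          (c) => c.name.trim() !== "" && c.explanation.trim() !== ""
        )
      );
    }

Keeping the label as structured data lets the same content drive both the printed package label and the page the QR code points to.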
