Creation of an overarching ethics framework for the design, development and deployment of artificial intelligence in India, along with self-assessment guidelines for public- and private-sector organizations
As artificial intelligence adoption has increased, inherent risks of relying on AI have emerged across a number of areas. AI-powered solutions can be discriminatory, are often unable to explain the decisions made by their algorithms, and can pose a risk to individual privacy given their heavy reliance on data. Unresolved issues in AI development, such as "explainability", transparency and accountability, continue to raise questions about ethics, privacy and security. Using AI with malicious intent – for example, to create "deepfakes" or autonomous weapons – can have serious repercussions for society. Alongside this, countries that enjoy the advantage of being able to freely collect and distribute data are likely to consolidate their strong position in an increasingly automated and digitized world.
Globally, a number of principles for the creation of ethical AI solutions have emerged in both the public and private sectors. However, there is a lack of AI guidelines specific to India. India's National Strategy for AI emphasizes that AI-related risks must be mitigated through effective policy, along with the creation of standards and awareness among relevant stakeholders. There is therefore a need to develop effective guidelines for the public and private sectors to promote the creation of responsible AI for India.