Overview of the AI Governance Report

India’s diverse demographic and socio-economic landscape offers immense potential for AI-driven growth. To harness this potential responsibly, the IndiaAI Mission was approved in March 2024 with a budget of INR 10,371.92 crore. The mission focuses on democratizing computing access, enhancing data quality, fostering innovation, and developing ethical, socially impactful AI solutions. It operates through seven key pillars, including Safe & Trusted AI, which emphasizes ethical AI development and governance. A multistakeholder Advisory Group and Subcommittee were formed to create an India-specific AI regulatory framework, providing actionable recommendations to ensure trust, accountability, and inclusive AI-driven progress. 

Categorization of Principles

The Sub-Committee has summarized the core principles envisaged by different stakeholders and policymaking organizations, including the Organisation for Economic Co-operation and Development (OECD), NASSCOM, and NITI Aayog. It has categorized the principles as follows – Transparency; Accountability; Safety, reliability, & robustness; Privacy & security; Fairness & non-discrimination; Human-centred values; Inclusive & sustainable innovation; and Digital-by-design governance (leveraging digital technologies to enhance governance).

How to operationalise the principles?

The Sub-Committee has identified three key concepts that can help operationalise the principles.

  1. Examining AI systems using a lifecycle approach: The development, deployment, and use of an AI system can be divided into three stages:
    1. Development: Examining the design, training, and testing of a given system.
    2. Deployment: Examining how a given AI system is put into operation and use.
    3. Diffusion: Examining the long-term implications of multiple AI systems being widely deployed and used across multiple sectors and domains.
  2. Taking an ecosystem view of AI actors:
    Multiple sets of actors can be involved in the lifecycle of a model, forming an ecosystem that includes, among others, Data Principals, Data Providers, AI Developers (including Model Builders), AI Deployers (including App Builders and Distributors), and End-users (both businesses and citizens). Considering governance across this whole ecosystem yields better, more holistic outcomes.
  3. Leveraging technology for governance:
    The rapidly evolving AI ecosystem, driven by advances in models, applications, and outputs, requires a “techno-legal” governance strategy that integrates legal frameworks with technology tools, enabling scalable compliance, risk mitigation, and enhanced monitoring. This strategy leverages technologies such as unique identity artifacts and liability chains to ensure accountability and promote self-regulation among ecosystem players. It also supports government efforts to identify unlawful activities, while requiring periodic review of automated tools for fairness, accuracy, and rights protection.
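As a rough illustration of the first two concepts, the lifecycle stages and ecosystem actors described above could be modeled as simple types. This is a sketch only; the class and field names are hypothetical and do not come from the report.

```python
from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    DEVELOPMENT = "development"   # design, training, and testing
    DEPLOYMENT = "deployment"     # putting a system into operation and use
    DIFFUSION = "diffusion"       # long-term, cross-sector implications

class Actor(Enum):
    DATA_PRINCIPAL = "data principal"
    DATA_PROVIDER = "data provider"
    AI_DEVELOPER = "ai developer"    # including model builders
    AI_DEPLOYER = "ai deployer"      # including app builders and distributors
    END_USER = "end user"            # businesses and citizens

@dataclass
class GovernanceEvent:
    """A single observation tying an ecosystem actor to a lifecycle stage."""
    actor: Actor
    stage: LifecycleStage
    description: str

# Example: an app builder putting a model into operation sits at the
# intersection of the AI Deployer actor and the Deployment stage.
event = GovernanceEvent(Actor.AI_DEPLOYER, LifecycleStage.DEPLOYMENT,
                        "App builder ships a credit-scoring model")
print(event.stage.value)  # -> deployment
```

Tagging observations with both an actor and a stage is one way a regulator could aggregate risks per stage across the whole ecosystem rather than per company.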

Addressing the Necessary Gaps

The sub-committee emphasized addressing gaps by:

  1. Prioritizing compliance and enforcement of existing laws where AI exacerbates known harms.
  2. Ensuring regulators have adequate information about the AI ecosystem, including data, models, applications, and actors.
  3. Adopting a whole-of-government approach to manage emerging risks as AI evolves and increasingly impacts multiple domains.

Recommendations

  1. Establishment of an empowered mechanism to coordinate AI Governance: An Inter-Ministerial AI Coordination Committee or Governance Group is proposed as a permanent mechanism to coordinate AI governance across national authorities. This body will align key institutions to implement a whole-of-government approach, addressing AI’s complexity and scale. It will map the AI ecosystem, assess risks, and harmonize efforts across sectors like health, transportation, and consumer protection. The group, headed by the Principal Scientific Adviser, will include government officials, sectoral regulators, and external experts from industry and academia to ensure diverse perspectives and effective governance.
  2. Establish and administratively house a Technical Secretariat to serve as a technical advisory body and coordination focal point for the Committee/Group: The Ministry of Electronics and IT (MeitY) should establish a technical secretariat to support the Inter-Ministerial AI Coordination Committee. This secretariat, staffed by officials, experts, and lateral hires, will pool multidisciplinary expertise, map India’s AI ecosystem, conduct horizon-scanning, assess societal and consumer risks, and develop metrics and frameworks for responsible AI use. It will engage with industry to co-develop solutions and identify governance gaps requiring strengthened capacities. The secretariat will focus on technical advisory and risk assessment, complementing the Committee’s policymaking and regulatory coordination.
  3. Establish, house, and operate an AI incident database as a repository of problems experienced in the real world: The Technical Secretariat should establish an AI incident database to document AI-related risks in India. Initially, public sector organizations deploying AI and private entities should be encouraged to report incidents voluntarily, focusing on harm mitigation rather than fault finding. “AI incidents” encompass cyber and non-cyber issues, such as malfunctions, privacy violations, discriminatory outcomes, and system failures. The database will help identify patterns, inform governance, and improve AI safety. CERT-In may manage the database under the Technical Secretariat’s guidance to ensure its effective operation.
  4. Engagement of the industry to drive voluntary commitments on transparency across the overall AI ecosystem and on baseline commitments for high capability/widely deployed systems: Operationalizing AI governance principles requires collaboration between the government and industry. Key recommendations include encouraging industry self-regulation through transparency measures, such as voluntary disclosures (e.g., transparency reports, model cards) and processes for testing, monitoring, and validating AI systems.
  5. Examination of the suitability of technological measures to address AI related risks: A systems-level approach is essential to address AI-related risks effectively. Technological artifacts can model interactions between datasets, models, applications, and users across domains (e.g., healthcare, finance) and enable real-time tracking of negative outcomes.
  6. Evaluation of technological solutions for compliance and enforcement tools: A priority area is addressing malicious synthetic media such as “deepfakes”. The Technical Secretariat should research the viability of solutions like watermarking, platform labeling, and fact-checking tools; engage globally with industry and governments to develop standards and mechanisms for content provenance, ensuring modifications are traceable; and use these findings to support nationwide user-awareness programs, leveraging the Committee/Group’s communication channels.
  7. Formation of a sub-group to work with MeitY: The sub-group will suggest specific measures that may be considered under proposed legislation such as the Digital India Act (DIA) to strengthen and harmonise the legal framework, regulatory and technical capacity, and the adjudicatory set-up for digital industries, ensuring effective grievance redressal and ease of doing business.

Conclusion

The committee’s recommendations are a step in the right direction as we look towards a decade shaped by Artificial Intelligence and Machine Learning technologies. The report focuses on the real-world problems and impact of AI/ML and proposes actionable steps to regulate this disruptive technology going forward. It now remains to be seen how many of these recommendations the government brings into actual practice.

Akshay Garg

Akshay Garg is a third-year law student at Campus Law Centre, University of Delhi. He is keenly interested in becoming a corporate lawyer.