Unlocking India’s AI Governance: Key Insights and Diverse Approaches from the Latest Report

Recently, India’s Ministry of Electronics and Information Technology (MeitY) released the draft Digital Personal Data Protection Rules, 2025, and alongside it a subcommittee report on developing AI governance guidelines. The release comes as discussions about establishing an AI Safety Institute in India gain traction.

The subcommittee’s report offers several noteworthy recommendations, many of which would mark a shift away from the government’s earlier preference for a light-touch approach to regulating AI. At the same time, the report leans on several regulatory strategies that may not sit easily with one another, and its emphasis on ‘voluntary’ commitments risks diluting the intent behind its recommendations.

A key point in the report is its emphasis on a ‘whole-of-government’ strategy, under which different government departments would coordinate their AI governance efforts. Such coordination is crucial, since AI cuts across legal domains such as intellectual property, consumer protection, and data protection. A sensible first step would be a dedicated committee drawing members from relevant regulatory bodies, such as the Reserve Bank of India and the Telecom Regulatory Authority of India.

The proposal to create a ‘Technical Secretariat’ also stands out. This body would focus on understanding how AI systems work, identifying regulatory gaps, and developing protocols for accountability. It could also maintain an AI incident database to track harms as they occur, making the risks associated with AI technology visible to regulators and the public. Building this kind of regulatory capacity is essential, although past delays in implementing laws such as the Digital Personal Data Protection Act, 2023 raise concerns about how quickly these ideas can be put into practice.
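
The report does not specify what an incident database would look like. As a purely illustrative sketch in Python, with hypothetical fields and harm categories not drawn from the report, an incident record might capture who deployed the system, the sector involved, and the harm observed:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class HarmCategory(Enum):
    # Hypothetical categories; the report does not enumerate these.
    DISCRIMINATION = "discrimination"
    PRIVACY_VIOLATION = "privacy_violation"
    MISINFORMATION = "misinformation"
    PHYSICAL_SAFETY = "physical_safety"

@dataclass
class AIIncident:
    """One record in a hypothetical AI incident database."""
    reported_on: date
    deployer: str          # entity operating the AI system
    sector: str            # e.g. "finance", "telecom"
    harm: HarmCategory
    description: str
    regulator_notified: bool = False

# Example: logging an incident for later review by sectoral regulators.
incident = AIIncident(
    reported_on=date(2025, 1, 15),
    deployer="ExampleLender Ltd",  # hypothetical entity
    sector="finance",
    harm=HarmCategory.DISCRIMINATION,
    description="Credit-scoring model produced biased outcomes.",
)
```

A schema along these lines would let sectoral regulators query incidents in their own domain while the secretariat monitors patterns across sectors.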

The report insists on legal recognition of AI-related harms, calling for regulation that prioritizes ‘minimizing harm.’ This framing echoes the EU’s AI Act and contrasts with the more innovation-focused strategy seen in the UK. However, it remains unclear whether the subcommittee is advocating binding legislation along the lines of the EU model.

The report also cautions that a fixed definition of AI systems could hinder future-ready regulation, which indicates that India may be considering a technology-neutral approach. There is little clarity, however, on the best regulatory methods for particular domains such as intellectual property and cybersecurity. Notably, the report says almost nothing about the use of personal data in training AI models, a significant gap from a data protection standpoint.

As for regulatory strategies, the report proposes at least three. The first is a principles-based approach, informed by established guidelines from the OECD and Indian industry. It promotes a ‘lifecycle approach’ to assessing risk at each stage of an AI system’s development and use, considering all players in the AI ecosystem: developers, deployers, and end-users.
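
The report does not say how a lifecycle assessment would be operationalized. One minimal way to picture it, using entirely hypothetical stages and an assumed allocation of duties, is a mapping from each lifecycle stage to the actor expected to assess risk there:

```python
from enum import Enum

class Stage(Enum):
    # Hypothetical lifecycle stages; not taken from the report.
    DATA_COLLECTION = "data collection"
    TRAINING = "training"
    DEPLOYMENT = "deployment"
    USE = "use"

class Actor(Enum):
    DEVELOPER = "developer"
    DEPLOYER = "deployer"
    END_USER = "end user"

# One possible allocation of risk-assessment duties across the
# AI value chain, in the spirit of the report's lifecycle approach.
RISK_OWNER = {
    Stage.DATA_COLLECTION: Actor.DEVELOPER,
    Stage.TRAINING: Actor.DEVELOPER,
    Stage.DEPLOYMENT: Actor.DEPLOYER,
    Stage.USE: Actor.END_USER,
}

def risk_owner(stage: Stage) -> Actor:
    """Return the actor expected to assess risk at a given stage."""
    return RISK_OWNER[stage]

assert risk_owner(Stage.TRAINING) is Actor.DEVELOPER
```

The point of such a mapping is that no single actor carries the whole burden: obligations attach to whoever controls a given stage.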

The second is a techno-legal approach, which combines legal regulation with technical enforcement so that legal requirements are built into AI development processes themselves. This could strengthen accountability and transparency by assigning clear responsibilities across the AI value chain.
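
What embedding legal requirements into development would mean in practice is left open. As one hedged illustration, assuming obligations the report does not actually specify, a deployment pipeline might refuse to ship a model until machine-checkable compliance conditions are met:

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    """Metadata a developer attaches to a model before deployment."""
    name: str
    bias_audit_passed: bool
    training_data_documented: bool
    human_oversight_plan: bool

def compliance_gate(release: ModelRelease) -> None:
    """Hypothetical techno-legal check: block deployment unless the
    assumed obligations (not drawn from the report) are satisfied."""
    checks = [
        ("bias audit", release.bias_audit_passed),
        ("training data documentation", release.training_data_documented),
        ("human oversight plan", release.human_oversight_plan),
    ]
    failures = [name for name, ok in checks if not ok]
    if failures:
        raise RuntimeError(f"Deployment blocked; missing: {', '.join(failures)}")

# Usage: a fully documented release passes; one missing its bias
# audit would raise and stop the pipeline.
compliance_gate(ModelRelease("credit-scorer-v2", True, True, True))
```

Whether such gates would be mandated by law or merely offered as tooling is exactly the ambiguity discussed below.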

The third pairs entity-based regulation, under which the use of AI would require licenses or authorizations, with activity-based regulation focused on specific sectors of deployment. The subcommittee tentatively suggests that the activity-based approach may better serve the goal of minimizing harm.

While the exploration of different regulatory methods is valuable, merging these approaches could prove challenging. The principles-based method aligns with the flexibility and innovation-friendliness of the UK approach, but it has known drawbacks, especially in areas like data protection. The entity- or activity-based method, by contrast, more closely resembles the EU’s stricter regime, which imposes binding legal obligations on AI use.

A techno-legal approach could enrich either the principles-based or the entity/activity-based methods by building safety into design. However, it is uncertain whether ‘techno-legal’ here means using technology to enforce regulatory obligations or merely to improve regulatory efficiency. The distinction matters greatly, since it determines what AI developers would actually be obliged to do to mitigate the harms the report identifies.

Despite these promising recommendations, the report’s reliance on self-regulation and voluntary commitments is a concern. The inherent risks of AI demand enforceable accountability measures rather than reliance on the good intentions of technology companies. Without binding requirements, AI developers may do only the bare minimum and leave crucial information undisclosed.

AI harms are already a reality for individuals and society, and recognizing them in law is vital, especially given that earlier legislation has failed to define critical technology-related harms. The subcommittee’s report offers a hopeful direction for AI regulation in India, aiming to build regulatory capacity and harmonize frameworks across sectors. But the success of its recommendations will depend heavily on implementation, and it remains to be seen how well these diverse approaches can be integrated. If they rest solely on voluntary commitments, their impact could be significantly weakened.
