[Image: Girls from the Kutia ethnic group using virtual reality glasses for the first time.]

One Year on from Schufa and MEITY

Emerging Principles of AI Regulation

**Ted Perkin**

That Artificial Intelligence (AI) holds the potential to transform society is often repeated. While some business leaders herald AI as the next industrial revolution, even tech magnates like Elon Musk warn of its existential risks to humanity. For what it’s worth, Musk estimates the risk of human annihilation at 20%.

Capital investment has flooded into the industry, transforming the face of the global market. This economic dynamism has heaped pressure on politicians and policymakers to make it easier for AI companies to do business. On February 11th 2025, the UK and US declined to join France, China, and India in signing a statement pledging an ‘open’, ‘inclusive’ and ‘ethical’ approach to the technology. Soon after US Vice-President Vance’s criticism of European regulation, the EU Commission also withdrew its AI Liability Directive.

Meanwhile, India has yet to pursue a clear strategy for AI regulation. In 2023, the Ministry of Electronics and Information Technology (MEITY) published a blueprint for a new Digital India Act. No signs of intervention followed until MEITY’s advisory on AI in spring 2024, reflecting ongoing internal disagreement about the correct regulatory approach.

As legislators consider regulating AI, two key questions have emerged: to what extent should AI influence human decisions? And who is accountable for making AI systems safe, compliant, and unbiased?

The Schufa ruling in the European Union (EU) and India’s MEITY advisory represent early regulatory skirmishes in addressing these questions. One year on, this article assesses their shared concerns, legal solutions, and consequences. Three key themes emerge: protections for human thinking, blanket legal categorisation, and ambiguous liability. The article evaluates these against recent developments in AI regulation and highlights a few outstanding points of contention.

Schufa

In the Schufa case, concluded in December 2023, the Court of Justice of the European Union (CJEU) decided in favour of consumer protections in the context of automated processing. A woman took Schufa, Germany’s largest credit rating agency, to court after it refused her access to the AI algorithm behind a credit score that a bank had used to reject her application. The algorithm’s legal classification was unclear, since Schufa acts as a third party, providing the AI-generated score to lenders.

The court’s broader concern was with the effect of AI outputs on human decisions and autonomy. The CJEU determined that Schufa’s AI scoring system constituted ‘automated individual decision-making’ under Article 22(1) of the EU’s General Data Protection Regulation (GDPR). The three conditions that Schufa’s model met were: (1) a decision, (2) based solely on automated processing, and (3) producing legal or similarly significant effects on the natural person. This finding was supported by the extremely high correlation between a negative Schufa score and a bank’s rejection of a credit application.

Consequently, Schufa was instructed to give consumers the option to consent to AI decisions, to contest those decisions and express their views freely, and to obtain human intervention.

MEITY

The first MEITY advisory, published on 1st March 2024, instructed entities to seek government approval before testing their AI models on users in India. Industry figures criticised this measure, charging that it would massively slow the rollout of AI models, especially for start-ups. In response, the Minister, Ashwini Vaishnaw, posted on X that the advisory did not apply to start-ups, clarifying further that it was ‘simply that – advise’. The scope and legal authority of the advisory were also ambiguous, further complicating compliance.

The second advisory, issued on 15th March in supersession of the first, removed the requirement of government approval and instead stated that providers of under-tested models must notify users of the potential unreliability of their outputs. The new advisory also restricted AI-generated content: generative models must prevent users from producing unlawful content and must not ‘permit any bias or discrimination or threaten the integrity of the electoral process’. Further, synthetically created content, such as misinformation and ‘deepfakes’, must be permanently watermarked.

I. Asserting Protections for Human Thinking

MEITY and Schufa share a recent trend in AI regulation: asserting human autonomy against the impact of automated processing. This reflects the increasing power of AI outputs, which has forced a reconfiguration of existing legal protections.

In part, MEITY was responding to the influence of platform-embedded AI on voters during the 2024 General Election cycle. The advisory was sent to eight social media intermediaries, targeting organisations where AI-generated content could have a greater impact.

This political threat is far from unique to India: according to one recent study in the US, generative AI models produce accurate news information about elections only 50% of the time. However, a diverse landscape of ethnic, religious, and caste identities makes India especially vulnerable to algorithmic bias and misinformation. India’s constitutional rights of equality and freedom provide a further legal imperative for preventing discrimination in AI models.

The Schufa ruling was not a response to political conditions, but a broad application of Article 22 protections of the individual against ‘legally consequential’ AI outputs. As the Hamburg Data Protection Commissioner clarified on the same day as the court’s ruling, human review of automated decisions must constitute more than a rubber stamp: ‘the person making the final decision needs expertise and enough time to question the machine-made initial decision’.

In strengthening human review, the CJEU checks ‘the tendency to over-rely on automation’, known as automation bias. Human decision-makers may place too much trust in AI models, approving outputs that stem from algorithmic errors, bias, or unrepresentative training data. This is especially true of high-volume decisions like credit applications, where it may be less practical to investigate each individual machine-made decision.

Both cases, therefore, pitted AI models and the autonomy of the individual against each other. They asserted a legal duty on developers and intermediaries to restrict the influence of AI on human thinking, especially in economically, socially, or politically sensitive areas.

II. AI Categorisation: One-Size-Fits-All Regulation

MEITY and the CJEU both used blunt regulatory instruments, creating uniform legal categorisation across sectors.

For example, the Hamburg Data Protection Commissioner’s press release accompanying the ruling also discussed the use of AI in recruitment and other non-financial sectors. A further CJEU judgment, delivered on 27th February 2025, confirmed these broad consumer protections against autonomous products. The court required that a credit information agency provide the customer with ‘meaningful information about the logic involved’, extending the right of access to information to AI models. Although the EU AI Act, which entered into force in August 2024, adopts a differentiated risk model based on sector, the court has continued to apply GDPR protections uniformly to AI models.

The MEITY advisory chaos highlights the consequences of AI regulation pushed through without industry dialogue. MEITY’s specific concern seems to have been AI-generated content on social media, such as deepfakes, misinformation, and illegal content. In response, however, it introduced blanket regulation covering all AI models.

Uniform legal categorisations like this fail to distinguish between high- and low-risk sectors and applications. Rather than reactive blanket regulation, legislators might consider intervening only in high-risk industries or where existing legislation is insufficient.

III. Unclear Liability: Duplicating Compliance in Development

Both regulatory decisions applied liability across the AI development process, imposing unnecessary compliance burdens. 

Hitherto, the EU’s restrictions on automated processing were understood to apply to a business that both undertook the ‘solely’ automated process and made the ‘decision’ with legal consequences for the data subject. However, the CJEU sought to address a ‘lacuna in legal protection’, indicating that GDPR rules may apply even when the generator and the user of an AI output are distinct. In the absence of an EU liability directive, the CJEU lacks legislative guidance as to intent in this area. As a result, it has apportioned liability for automated processing across the supply chain.

The MEITY advisory also significantly extends liability. The advisory covers all ‘intermediaries and platforms’, the latter a term undefined in the Information Technology Act (2000) and the IT Rules (2021). Furthermore, intermediaries may lose their liability exemptions under Section 79 of the Information Technology Act if they fail to comply with the advisory. This carves out a separate and ambiguous legal category of liability for intermediary AI models, threatening significant legal repercussions for breaching the advisory.

Even so, the advisory leaves some key questions unaddressed. For example, intermediaries and platforms are required to prevent users from interacting with illegal or biased AI-generated content. Should this obligation also apply to developers, whose AI models are then used by or hosted on intermediaries?

General regulation with ambiguous liability combines the worst of both worlds: intervention in unnecessary areas, and a lack of clarity as to its purpose. Under both Schufa and MEITY, user and generator, or intermediary and platform, are subject to similar legal constraints. Across the supply chain, organisations will likely adopt compliance measures without certainty as to who is liable for the use, and misuse, of AI outputs. 

Balancing Risk and Innovation 

Threats to citizen autonomy are not lightly ignored. For the CJEU and MEITY, this warranted broad protections of human thinking against AI models. Though the two bodies have different legal roles, similar concerns led both to a somewhat uniform regulatory framework for AI. While the intention is laudable, the legal solutions prescribed are broad and onerous. 

MEITY and Schufa have imposed on AI models a uniform legal category with ambiguous liability. The resulting compliance environment is burdensome, deterring domestic entrants and dampening the vast potential of AI. 

This approach seems increasingly out of step with global trends, placing India and the EU at a disadvantage. In January, the Trump administration issued an Executive Order pressing for looser AI regulation. Japan, South Korea, and China have adopted text and data mining (TDM) copyright exceptions in a lighter-touch approach to data protection. While the EU also has TDM exceptions under the Digital Single Market Directive, they are more narrowly defined, risking models trained on older data or a reliance on imported models. On liability, too, more promising models exist: Singapore and, to a lesser degree, the EU AI Act have developed liability frameworks based on each actor’s stake in the AI development process.

AI does not necessarily require sweeping new legislation. As Singapore demonstrates, much of this regulation can be done through non-binding guidance or existing laws.

The path forward requires a balance: legislators must protect individual rights and foster a dynamic AI industry. Achieving both objectives requires legislators to acknowledge that not all AI models pose an equal threat, nor do they require equal regulation.

**Ted Perkin is a Researcher to Baroness Finlay of Llandaff in the House of Lords, with a focus on legislation affecting the end of life. Prior to his current role, Ted interned at Vidhi Centre for Legal Policy in 2024, working with the criminal and corporate law teams, and judicial reforms (JALDI). He was also a Briefing Author for the New Diplomacy Project think tank, analysing the significance of India’s 2024 election for UK foreign policy. Ted graduated from the University of Cambridge with a First Class Honours degree in History, specialising in Indian democracy and political thought. His research interests lie in the intersection of law and policy across health, trade, and technology.**

**Disclaimer: The views expressed in this blog do not necessarily align with the views of the Vidhi Centre for Legal Policy.**