


© Indic Pacific Legal Research LLP.

For articles published in VISUAL LEGAL ANALYTICA, you may refer to the editorial guidelines for more information.

Why AI Standardisation & Launching AIStandard.io & Re-introducing IndoPacific.App



Artificial Intelligence (AI) is widely recognized as a disruptive technology with the potential to transform various sectors globally. However, the economic value of AI technologies remains inadequately quantified. Despite numerous reports on AI ethics and governance, many of these efforts have been inconsistent and reactive, often failing to address the complexities of regulating AI effectively. Even India's MeitY AI Advisory, which faces constitutional challenges, was itself the product of a knee-jerk reaction.



Many companies are hastily deploying AI without a comprehensive understanding of its limitations, resulting in substandard or half-baked solutions that can cause more harm than good.




While AI solutions have demonstrated tangible benefits in B2B sectors such as agriculture, supply chain management, human resources, transportation, healthcare, and manufacturing, the impact on B2C segments like creative, content, education, and entertainment remains unclear. The long-term impact of RoughDraft AI or GenAI should be approached with caution, and governments worldwide should prioritize addressing the risks associated with the misuse of AI, which can affect the professional capabilities of key workers and employees involved with AI systems.


This article aims to explain why AI standardization is necessary and what can be achieved through it in and for India. With the wave of AI hype, legal-ethical risks surrounding substandard AI solutions, and a plethora of AI policy documents, it is crucial to understand the true nature of AI and its significance for the majority of the population.


By establishing comprehensive ethics principles for the design, development, and deployment of AI in India, drawing from global initiatives but grounded in the Indian legal and regulatory context, India can harness the potential of AI while mitigating the associated risks, ultimately leading to a more robust and ethical AI landscape.


The Hype and Reality of AI in India




The rapid advancement of Artificial Intelligence (AI) has generated significant excitement and hype in India. However, it is crucial to separate the hype from reality and address the challenges and ethical considerations that come with AI adoption.


The Snoozefest of AI Policy Jargon: Losing Sight of What Matters


In the midst of the AI hype train, we find ourselves drowning in a deluge of policy documents that claim to provide guidance and clarity, but instead leave us more confused than ever. These so-called "thought leaders" and "experts" seem to have mastered the art of saying a whole lot of nothing, using buzzwords and acronyms that would make even the most seasoned corporate drone's head spin.


Take, for example, the recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) on March 1, 2024. This masterpiece of bureaucratic jargon manages to use vague terms like "undertested" and "unreliable" AI without bothering to define them or provide any meaningful context. It's almost as if they hired a team of interns to play buzzword bingo and then published the results as official policy.


Just a few days later, on March 15, the government issued yet another advisory, this time stipulating that AI models should only be accessible to Indian users if they have clear labels indicating potential inaccuracies or unreliability in the output they generate. Because apparently, the solution to the complex challenges posed by AI is to slap a warning label on it and call it a day.

And let's not forget the endless stream of reports, standards, and frameworks that claim to provide guidance on AI ethics and governance. From the IEEE's Ethically Aligned Design initiative to the OECD AI Principles, these documents are filled with high-minded principles and vague platitudes that do little to address the real-world challenges of AI deployment.


Meanwhile, the actual stakeholders – the developers, researchers, and communities impacted by AI – are left to navigate this maze of jargon and bureaucracy on their own. Startups and SMEs struggle to keep up with the constantly shifting regulatory landscape, while marginalized communities bear the brunt of biased and discriminatory AI systems.


It's time to cut through the noise and focus on what really matters: developing AI systems that are transparent, accountable, and aligned with human values. We need policies that prioritize the needs of those most impacted by AI, not just the interests of big tech companies and investors. And we need to move beyond the snoozefest of corporate jargon and engage in meaningful, inclusive dialogue about the future we want to build with AI.


So let's put aside the TESCREAL frameworks and the buzzword-laden advisories, and start having real conversations about the challenges and opportunities of AI. Because at the end of the day, AI isn't about acronyms and abstractions – it's about people, and the kind of world we want to create together.


Overpromising and Underdelivering


Many companies in India are rushing to deploy AI solutions without fully understanding their capabilities and limitations. This has led to a proliferation of substandard or half-baked AI products that often overpromise and underdeliver, creating confusion and mistrust among consumers. The excessive focus on generative AI and large language models (LLMs) has also overshadowed other vital areas of AI research, potentially limiting innovation.


Ethical and Legal Considerations


The integration of AI in various sectors, including healthcare and the legal system, raises complex ethical and legal questions. Concerns about privacy, bias, accountability, and transparency need to be addressed to ensure the responsible development and deployment of AI. The lack of clear regulations and ethical guidelines around AI in India has created uncertainty and potential risks.


Policy and Regulatory Challenges


India's approach to AI regulation has been reactive rather than strategic, with ad hoc responses and unclear guidelines. The recent AI advisory issued by the Ministry of Electronics and Information Technology (MeitY) has faced criticism for its vague terms and lack of legal validity. There is a need for a comprehensive legal framework that addresses the unique aspects of AI while fostering innovation and protecting individual rights.


Balancing Innovation and Competition


AI has the potential to drive efficiency and innovation, but it also raises concerns about market concentration and anti-competitive behavior. The Competition Commission of India (CCI) has recognized the need to study the impact of AI on market dynamics and formulate policies that effectively address its implications on competition.


What's Really Happening in the "India" AI Landscape?


Lack of Settled Legal Understanding of AI


India currently lacks a clear legal framework that defines AI and its socio-economic and juridical implications. This absence of settled laws has led to confusion among the judiciary and executive branches regarding what can be achieved through consistent AI regulations and guidelines[1].


A recent advisory issued by the Ministry of Electronics and Information Technology (MeitY) in March 2024 aimed to provide guidelines for AI models under the Information Technology Act. However, the advisory faced criticism for its vague terms and lack of legal validity, highlighting the challenges posed by the current legal vacuum[2].

The ambiguity surrounding AI regulation is exemplified by the case of Ankit Sahni, who attempted to register an AI-generated artwork but was denied by the Indian Copyright Office. The decision underscored the inadequacy of existing intellectual property laws in addressing AI-generated content[3].


Limited Participation from Key Stakeholders


The AI discourse in India is largely driven by investors and marketing leaders, often resulting in half-baked narratives that fail to address holistic questions around AI policy, compute economics, patentability, and productization[1].


The science and research community, along with the startup and MSME sectors, have not actively participated in shaping realistic and effective AI policies. This lack of engagement from key stakeholders has hindered the development of a comprehensive AI ecosystem[4].

Successful multistakeholder collaborations, such as the IEEE's Ethically Aligned Design initiative, demonstrate the value of inclusive policymaking[5]. India must encourage greater participation from diverse groups to foster innovation and entrepreneurship in the AI sector.


Impact of AI on Employment


The impact of AI on employment in India is multifaceted, with varying effects across industries. While AI solutions have shown tangible benefits in B2B sectors like agriculture, supply chain management, and healthcare, the impact on B2C segments such as creative, content, and education remains unclear[1].


A study by NASSCOM estimates that around 9 million people are employed in low-skilled services and BPO roles in India's IT sector[6]. As AI adoption increases, there are concerns about potential job displacement in these segments.


However, AI also has the potential to enhance productivity and create new job opportunities. The World Economic Forum predicts that AI will generate specific job roles in the coming decades, such as AI and Machine Learning Specialists, Data Scientists, and IoT Specialists[7].


To harness the benefits of AI while mitigating job losses, India must invest in reskilling and upskilling initiatives. The government has launched programs like the National Educational Technology Forum (NETF) and the Atal Innovation Mission to promote digital literacy and innovation[8].


As India navigates the impact of AI on employment, it is crucial to approach the long-term implications of RoughDraft AI and GenAI with caution. Policymakers must prioritize addressing the risks associated with AI misuse and its potential impact on the professional capabilities of workers involved with AI systems[1].

Taken together, these challenges underscore the need for a more deliberate approach to AI policy in India. The next section turns to potential solutions and recommendations to address these issues.


A Proposal to "Regulate" AI in India: AIACT.IN


The Draft Artificial Intelligence (Development & Regulation) Act, 2023 (AIACT.IN) Version 2, released on March 14, 2024, is an important private regulatory proposal developed by yours truly. While not an official government statute, AIACT.IN v2 offers a comprehensive regulatory framework for responsible AI development and deployment in India. It introduces several key provisions that make it a significant contribution to the AI policy discourse in India:

  1. Risk-based approach: The bill adopts a risk-based stratification and technical classification of AI systems, tailoring regulatory requirements to the intensity and scope of risks posed by different AI applications. This approach aligns with global best practices, such as the EU AI Act. Beyond risk, the bill also provides three other methods of classifying AI systems.

  2. Promoting responsible innovation: AIACT.IN v2 includes measures to support innovation and SMEs, such as regulatory sandboxes and real-world testing. It also encourages the sharing of AI-related knowledge assets through open-source repositories, subject to IP rights.

  3. Addressing ethical and societal concerns: The bill tackles issues such as content provenance and watermarking of AI-generated content, intellectual property protections, and countering AI hype. These provisions aim to foster transparency, accountability, and public trust in AI systems.

  4. Harmonization with global standards: AIACT.IN v2 draws inspiration from international initiatives such as the UNESCO Recommendations on AI and the G7 Hiroshima Principles on AI. By aligning with global standards, the bill promotes interoperability and facilitates India's integration into the global AI ecosystem.
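To illustrate what a risk-based stratification can look like in practice, the sketch below classifies hypothetical AI use cases into tiers loosely modelled on the EU AI Act's categories (unacceptable, high, limited, minimal). To be clear, the tier names, domains, and rules here are illustrative assumptions for exposition only; they are not the actual classification scheme in AIACT.IN.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping from use-case descriptors to tiers,
# used purely to show how a tiered scheme operates.
PROHIBITED_PRACTICES = {"social-scoring", "subliminal-manipulation"}
HIGH_RISK_DOMAINS = {"credit-scoring", "medical-diagnosis", "recruitment"}

def classify(use_case: str) -> RiskTier:
    """Assign a risk tier to a use case, checking the most severe tiers first."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case == "chatbot":
        return RiskTier.LIMITED  # e.g. users must be told they are talking to an AI
    return RiskTier.MINIMAL

print(classify("recruitment").value)  # high
print(classify("spam-filter").value)  # minimal
```

The point of such a scheme is that regulatory obligations scale with the severity of the tier, rather than applying one uniform burden to every AI system.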


Despite its status as a private bill, AIACT.IN v2 has garnered significant attention and support from the AI community in India. The Indian Society of Artificial Intelligence and Law (ISAIL) has featured the bill on its website, recognizing its potential to shape the trajectory of AI regulation in the country.


To disclose: I proposed AIACT.IN in November 2023, and again in March 2024, to promote democratic discourse rather than the blind implementation of this bill in the form of a law. The response has been overwhelming so far, and a third version of the Draft Act is already in the works.


However, when I sought feedback from advocates, corporate lawyers, legal scholars, technology professionals, and even some investors and C-suite executives at tech companies, a common theme emerged: benchmarking AI is itself a hard task, and even the AIACT.IN proposal could prove difficult to implement given the lack of any general understanding of AI.


What to Standardise Then?


Before we standardise artificial intelligence in India, let us first understand what exactly can be standardised.


To be fair, the standardisation of AI in India is contingent on the nature of the industry itself. Despite all the hype, and the so-called discourse around "GenAI" training, the industry is still at a nascent stage: most AI and GenAI work in India, whether B2B, B2C, or D2C, remains at the R&D and scaling-up stages.


Second, let us ask: who should be subject to standardisation? In my view, AI standardisation must be neutral as to the net worth or economic status of any company in the market. This means that the principles of AI standardisation, whether sector-neutral or sector-specific, must apply to all market players alike, in a competitive sense. This is why the Indian Society of Artificial Intelligence and Law has introduced Certification Standards for Online Legal Education (edtech).


Nevertheless, AI standards must be developed in a way that remains mindful of the original and credible use cases that are emerging. The biggest risk of AI hype in this decade is that any company can claim a major AI use case, only for it to turn out that the underlying AI has not been tested or effectively built, even as a test case. This is why it becomes necessary to address AI use cases critically.

There are two key ways to standardise AI without regulating it: (1) the legal-ethical way; and (2) the technical way. Neither can be adopted to the exclusion of the other. In my view, both methods must be implemented, with caution and good sense. The reason is obvious: technical benchmarking enables us to track the evolution of any technology and its sister and daughter u