Visual Legal Analytica

Discover Legal & Policy Ideas, in the Language of Graphics

New Report: Deciphering Artificial Intelligence Hype and its Legal-Economic Risks, VLiGTA-TR-001


We are eager to release the Vidhitsa Law Institute's first technical report, on artificial intelligence hype and its legal-economic risks.

Bhavana J Sekhar, Principal Researcher, and Poulomi Chatterjee, Contributing Researcher, have co-authored this report with me. In this work, we address in detail the hype cycles caused by artificial intelligence technologies.


This report is an initial research contribution developed by the team of the Vidhitsa Law Institute of Global and Technology Affairs (VLiGTA) as part of the efforts of the Artificial Intelligence Resilience department. It continues the work we began at the Indian Society of Artificial Intelligence and Law (ISAIL) in 2021 on formalising ethics research on the trend of Artificial Intelligence hype. In my discussions and consultations with Dr Jeffrey Funk, former faculty at the National University of Singapore, Bogdan Grigorescu, a tech industry expert and ISAIL alumnus, and Dr Richard Self from the University of Derby, I realised that it is necessary to encapsulate the scope and extent of Artificial Intelligence hype beyond the competition policy and data privacy issues that many developed countries in the D9 group have already faced. Many technology companies inflate their valuations and use Artificial Intelligence to hype the value of their products and services. This can be done by influencing stocks, distorting perceptions, misdirecting demand, creating credibility concerns, and through other methods. The exploitative nature of AI hype, as we know it, rests on the interconnectedness of the information and digital economy, and on how even minuscule economic and ethical innovations in AI as a technology can be abused.


Bhavana’s market analysis is succinct and focuses on the points of convergence, and Poulomi’s evaluation of the ethics of Artificial Intelligence is much appreciated. I express my special regards to Sanad Arora from the Vidhitsa Law Institute and Ayush Kumar Rathore from Indic Pacific’s Technology Team for their moral support.


Some of the key aspects discussed in the report concern the perpetuation of hype cycles and their formalisation in a legal rubric for regulators. We have also taken a soft-law perspective to address larger economic and technical issues and to offer recommendations. Based on our research, we have formulated seven working conditions to determine artificial intelligence hype, organised around the following stages:


Stage 1: Influence or Generation Determination

  • An Artificial Intelligence hype cycle is perpetuated to influence or generate market perception in real time, such that a class of Artificial Intelligence technology, as a product / service, is used in a participatory or preparatory sense to influence or generate the hype cycle.

Stage 2: Influencing or Generating Market Perceptions & Conditions

  • The hype cycle may be continuous or erratic, but its real-time impact on market perceptions, which affects the market for the products / services involving Artificial Intelligence technologies, is estimated from a standardised / regulatory / judicial / statutory point of view.

  • The hype cycle may directly or indirectly perpetuate the course of specific anti-competitive practices.

  • Beyond the real-time impact on market perceptions, the consecutive effects of the real-time impact may distort a limited set of related markets, provided that the specific anti-competitive practices are furthered in a distinct pattern.

Stage 3: Uninformed or Disinformed Markets

  • The market is kept uninformed or disinformed about the features of the product / service subject to the hype cycle. Misinforming the market may be construed as merely keeping the market uninformed, except where the two are mutually exclusive.

Stage 4: Misdirected Perceptions in the Information & Digital Economy

  • The hype cycle may be used to distract the information economy by converting the state of being uninformed or disinformed into misdirected perception. This means that the hype around a product or service may not clarify certain specifics, and may cause the public or market players to shift their focus towards ancillary considerations, comfortably ignoring the fact that they have been uninformed or disinformed.

Stage 5: Estimation of the Hype Cycle through Risk Determination

  • In addition, even if preliminary clarifications or assessments are provided to the market, a lack of due diligence in determining the inexplicable features of the Artificial Intelligence technology, in any form, as part of the product or service warrants assessment of the hype cycle through a risk-centric approach.

Further interpretation and explanations have been provided in the report.

 

Recommendations in this Report

  1. Companies must make clear to regulatory bodies the investment in, and ethical design of, products and services which involve narrow AI and high-intensive AI technologies.

  2. Maintaining efficient knowledge management systems catering to IP issues is important. It is essential that the economic and ethical repercussions of the by-products of knowledge management are addressed carefully, because many Artificial Intelligence technologies will remain inexplicable for reasons including ethical ambiguity.

  3. If Artificial Intelligence technologies are included in any managerial-level groups, departments and divisions, including the board of directors, for consultative, reliance-based or any other tangible purpose, then, regardless of their attribution to the knowledge management systems maintained by the company itself (including intellectual property concerns), a risk-oriented practice of maintaining legitimate and viable transparency on data protection & privacy and on algorithmic activities & operations must be adopted. Regulators can opt for self-regulatory directives or solutions. Where regulatory sandboxes must be used, there should be separate guidelines for such technologies (since they are not products or services) by virtue of their use case in the realm of corporate governance.

  4. The transboundary flow of data, based on commonalities of ethical and quality assessment, can be agreed amongst various countries subject to their data localisation and quality policies. In the case of Artificial Intelligence technologies, to reduce or detect the impact and aftermath of Artificial Intelligence hype cycles, governments must negotiate towards an ethical free flow of data and map, on a case-by-case basis, the algorithmic activities & operations which affect public welfare.

  5. We propose that the Working Conditions to Determine Artificial Intelligence Hype can be regarded, in a consultative sense, as a framework for various stakeholders to intermix competition policy and technology governance concerns. We are open to consultation, feedback and alternate opinions.

  6. We also propose that the Model Algorithmic Ethics Standards (MAES) be put into use, so that some estimations can be made at a preliminary level as regulatory sandboxes are subject to procurement.


 

The Report is available at VLiGTA.com


Those interested in reading the report can find it at VLiGTA.com.

Price: 200 INR