


  • Generative AI and Law Workshop for upGrad

    Our Founder and Managing Partner, Abhivardhan, is glad to hold a two-hour virtual workshop with upGrad on Generative AI and Law. The workshop is free to attend virtually, and upGrad has stated that it will provide a certificate upon completion. Abhivardhan will discuss nuances of the use of generative artificial intelligence in the legal industry, especially document analysis and legal research. The workshop will also cover legal issues around generative AI tools, especially prompt engineering, cybersecurity and intellectual property.

    Register for the workshop for free at https://www.upgrad.com/generative-ai-law-workshop/

    About Abhivardhan, our Founder and Managing Partner

    Throughout his journey, he has gained valuable experience in international technology law, corporate innovation, global governance, and cultural intelligence. With deep respect for the field, Abhivardhan has been fortunate to contribute to esteemed law, technology, and policy magazines and blogs. His book, "AI Ethics and International Law: An Introduction" (2019), modestly represents his exploration of the important connection between artificial intelligence and ethical considerations. Emphasizing the significance of an Indic approach to AI Ethics, Abhivardhan aims to bring diverse perspectives to the table. Some of his notable works also include the 2020 and 2021 Handbooks on AI and International Law, and the technical reports on Generative AI, Explainable AI and Artificial Intelligence Hype.

  • New Report: Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]

    We are more than glad to release another technical report by the VLiGTA team. This report takes a business-oriented generalist approach to the ethics of AI explainability. We express our gratitude to Ankit Sahni for authoring a foreword to this technical report. This research is a part of the technical report series by the Vidhitsa Law Institute of Global and Technology Affairs, also known as VLiGTA® - the research & innovation division of Indic Pacific Legal Research.

    Responsible AI has been a part of the technology regulation discourse for the AI industry, policymakers and the legal industry alike. As ChatGPT and other generative AI tools have become mainstream, the call to implement responsible AI ethics measures and principles in some form has become a necessary one to consider. The problem lies with the limited and narrow approach of these responsible AI guidelines, shaped by fiduciary interests and the urge to react to every industry update. This is exactly where this report comes in. The problems with responsible AI principles and approaches can be encapsulated in these points:

    • AI technologies have use cases which are fungible
    • Different stakeholders exist for different AI-related disputes, and they are not taken into consideration
    • Various classes of mainstream AI technologies exist, and not every major country in Asia that develops and uses AI technologies deals with every class
    • The role of algorithms in shaping the economic and social value of digital public goods remains unclear and uneven within law

    This report is thus a generalist yet specificity-oriented work, which addresses and explores the necessity of internalising AI explainability measures. We are clear, with a sense of perspective, that AI explainability measures cannot be considered limited to the domains of machine learning and computer science. Barring some hype, there are indeed some transdisciplinary and legal AI explainability measures which could be implemented.

    I am glad my co-authors from the VLiGTA team did justice to this report. Sanad Arora, the first co-author, has contributed extensively on the limitations of responsible AI principles and approaches, and has offered insights on the convergence of legal and business concerns related to AI explainability. Bhavana J Sekhar, the second co-author, has offered her insights on developing AI explainability measures to practise conflict management across technical and commercial AI use cases, and has contributed extensively on the legal and business concerns pertaining to enabling AI explainability in Chapter 3. Finally, it has been my honour to contribute on the development of AI explainability measures to practise innovation management for both technical and commercial AI use cases, and to offer an extensive analysis of the socio-economic limits of the present responsible AI approaches.

    You can now access the complete report on the VLiGTA App: https://vligta.app/product/promoting-economy-of-innovation-through-explainable-ai-vligta-tr-003/

    Recommendations from VLiGTA-TR-003

    Converging Legal and Business Concerns

    Legal and business concerns can be jointly addressed by XAI, where data collected from XAI systems can be used to address regulatory challenges and help innovation while keeping accountability at the forefront. Additionally, information from XAI systems can assist in developing and improving tailor-made risk management strategies and ensure risk intervention at the earliest. Explainable AI tools can rely on prototype models with self-learning approaches; model-agnostic explanation is also highly flexible, since it only needs access to the model's output. Privacy-aware machine learning tools can also be incorporated into the development of explainable AI tools to avoid possible risks of data breaches and privacy violations. Compliances may be developed and used for development purposes, including the general mandates attributed to them.

    Conflict Management

    Compliance by design may become a significant aspect of encouraging the use of regulatory sandboxes and enabling innovation management in more productive ways. Where sandboxes are rendered ineffective, real-time awareness and consumer education must be undertaken, keeping technology products and services accessible and human-centric by design. Risk management strategies are advised to be incorporated at different stages of the AI life cycle, from the inception of data collection and data training. De-risking AI can involve model risk assessment by classifying an AI model based on its risk (high, medium, low) and its contextual usage, which will further assist developers and stakeholders in jointly developing risk mitigation principles according to the level of risk incurred by the AI. Deployment of AI explainability measures will require a level of decentralisation where transdisciplinary teams work closely to provide complete oversight. Risk monitoring should be carried out by data scientists, developers and KMPs to share overlapping information and periodically improve situational analysis of the AI system.

    Innovation Management

    The element of trust is necessary, and companies must make clear the workflow behind the purpose of data use. Even if the legal risks are not foreseeable, they can at least make decisions which de-risk the algorithmic exploitation of personal and non-personal data, metadata and other classes of data and information. These involve technical and economic choices first, which is why, unless regulators come up with straightforward regulatory solutions, companies must see how they can minimise the chances of exploitation, enhance the quality of their deliverables and keep their knowledge management practices safer.

  • Arbitrating GST Disputes Arising out of Contractual Arrangements in India

    DISCLAIMER: The contents of this blog article reflect the personal views of the authors alone and do not constitute the views of any of the authors' affiliated organizations. The contents of the blog article cannot be treated as legal advice under any circumstances. The main author of the article is a Senior Associate at Ratan Samal & Associates and an Arbitrator at the Asia Pacific Centre for Arbitration and Mediation & the Indian Institute of Arbitration and Mediation. This article is co-authored by Abhivardhan, Managing Partner at Indic Pacific Legal Research, Founder, VLiGTA and Chairperson & Managing Trustee at the Indian Society of Artificial Intelligence and Law.

    Introduction

    The unified indirect tax system of India, viz., the Goods and Services Tax (GST), has entered its sixth year. Despite its immense potential, disputes continue to rise. A major portion of these disputes are against adjudication by the GST Department, representing a dispute against the sovereign right, power and function of the Government to levy tax, to withhold refund or to grant or deny a tax incentive. An equally significant portion of GST disputes, however, pertain to contractual rights arising out of contracts entered into between parties, where the subject matter relates substantially to the shifting of the burden of GST, indemnification by the defaulting party to the aggrieved party for non-payment of GST to the Government, GST reimbursement arrangements, tax-sharing arrangements, deemed export disputes and the like. This article argues that even though a significant portion of disputes under the GST law are non-arbitrable, as they pertain to disputes with the Government representing the sovereign power to tax, contractual disputes between companies and other forms of entities and legal persons, where the subject matter of the dispute pertains to GST, are arbitrable.
    A clear demarcation and identification of the distinction between the two can significantly aid companies, entities and legal persons in correctly contesting their case before the appropriate forum.

    Identifying Non-Arbitrable Disputes under the Goods and Services Tax Law

    The GST law is a combination of multiple statutes, operating simultaneously on the respective subject matters assigned to each by its 'charging mechanism'. The substratum of the GST statutes is 'supply', wherein tax is levied on the supply of goods or services or both. Since the spirit of cooperative federalism is imbibed within the GST statutes, the Central Goods and Services Tax (CGST) Act, 2017 and the State Goods and Services Tax (SGST) Act, 2017 levy GST on all intra-State supplies of goods or services or both proportionately; where a transaction takes place within a Union Territory, the CGST Act, 2017 and the Union Territory Goods and Services Tax (UTGST) Act, 2017 apply proportionately. By proportionate application, it is meant that if the rate of GST for a particular supply is 18%, then 9% CGST and 9% SGST (or UTGST, as the case may be) will apply. As far as inter-State supplies, imports and exports (including refunds thereof, read with provisions of the CGST Act, 2017) are concerned, the Integrated Goods and Services Tax (IGST) Act, 2017 applies, and there is no proportionate levy of tax since only one statute applies to such supplies. Coming to the non-arbitrable aspects, the Supreme Court of India in Vidya Drolia & Ors. v. Durga Trading Corporation & Ors., (2021) 2 SCC 1 has held that taxation is a sovereign function of the State and is, therefore, non-arbitrable.
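The proportionate application described above is simple arithmetic, and can be sketched as follows. This is a purely illustrative sketch: the rate and the amount are hypothetical and do not reflect any actual supply or legal advice.

```python
# Sketch of the proportionate GST levy described above.
# Intra-State supply: the headline rate is split equally between
# CGST and SGST/UTGST. Inter-State supply: IGST applies at the
# full rate under a single statute. Figures are illustrative only.

def gst_components(taxable_value: float, rate: float, intra_state: bool) -> dict:
    """Return the GST components (in currency units) for a supply."""
    if intra_state:
        half = taxable_value * rate / 2
        return {"CGST": half, "SGST/UTGST": half}
    return {"IGST": taxable_value * rate}

# An intra-State supply of 1,00,000 at 18% -> 9% CGST + 9% SGST
print(gst_components(100_000, 0.18, intra_state=True))
# The same supply made inter-State -> 18% IGST
print(gst_components(100_000, 0.18, intra_state=False))
```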
    This means that disputes arising out of adjudication u/s 73 or 74, denial of refund u/s 54, denial of input tax credit u/s 16, cancellation of registration u/s 29, rejection of appeals, orders of anti-profiteering u/s 171 of the CGST Act, 2017 and like matters, where the dispute is against the Goods and Services Tax Department or the Central Government or the respective State Government, will be non-arbitrable. Hence, the appellate route before the quasi-judicial appellate authority, followed by the Goods and Services Tax Appellate Tribunal (not yet constituted), followed by the High Court and the Supreme Court, will have to be followed, unless there is a violation of fundamental rights or of the principles of natural justice, the order passed is wholly without jurisdiction, or the vires of particular provisions of the GST statutes or their respective delegated legislation (rules, notifications, circulars and the like) are challenged, in which event a Writ Petition can be filed before the High Court directly without undergoing the appellate route.

    Assessing the Arbitrability of Goods and Services Tax Disputes from Contractual Arrangements

    This part of the article delves into a few of the most common forms of such contractual arrangements, often reflected as clauses of the respective contract.

    Contractual Shifting of the Burden of GST

    The contractual shifting of the burden of GST is one of the most common forms of clauses seen in several contracts, especially in the construction sector and in contracts with the Government and with public sector undertakings, and was also extant under the erstwhile indirect tax laws.
    However, it is necessary to point out that the incidence of tax under the GST statutes will not change: the legal person liable to pay tax as per the charging mechanism will have to bear the tax, with a subsequent contractual right to recover the amount from the other party where that party has agreed under the contract to bear such tax. Under GST, it is the supplier of goods or services or both, as the case may be, who has to pay the tax under the forward charge mechanism. A few exceptions exist where the recipient of goods or services or both, as the case may be, has been made liable to pay tax under the reverse charge mechanism. For example, if in a transaction where the recipient was supposed to pay GST under the reverse charge mechanism, the recipient has entered into a contract with its supplier providing that the supplier will bear the GST, then while filing the monthly returns in FORM GSTR-3B, the recipient will still have to pay the tax amount under the reverse charge mechanism, and it will not be open to the recipient to insist on recovery from the supplier at that stage despite the contract. However, after such payment is made, the recipient would be entitled to recover the amount from the supplier in pursuance of the contractual arrangement which foists GST liability on the supplier. In the absence of such a contractual arrangement, the recipient would have paid the tax without any further right of recovery from the supplier; it is only due to the contractual arrangement for shifting the burden of tax that the recipient has the right to recover the GST amount from its supplier.
    In the presence of an arbitration clause in a contract where shifting of the burden of taxes has been agreed upon, a dispute between the recipient (who is foisted liability under the reverse charge mechanism by the respective GST statute) and the Government would be non-arbitrable, since it concerns a right in rem and represents the sovereign right of the Government to levy and collect tax as per the charging mechanism under GST. The subsequent dispute between the recipient and the supplier, wherein the supplier had agreed to the shifting of the burden of tax, would be an arbitrable dispute, being a right in personam arising out of a contract. Similarly, in a transaction where GST is to be paid by the supplier under the forward charge mechanism and a contractual arrangement exists that the supplier will bear the entire GST amount, the recipient can choose to deduct GST and disentitle the supplier from collecting the tax amount from the recipient, resulting in the supplier paying the taxes from its own pocket instead of collecting them from the recipient, as would have been the scenario under normal circumstances. As with the aforesaid, a dispute between the supplier and the recipient in respect of the deduction of the GST amount from the payment would concern a right in personam and be arbitrable as per the arbitration agreement envisaged in the contract. The Supreme Court of India in Rashtriya Ispat Nigam Limited v. M/s Dewan Chand Ram Saran, (2012) 5 SCC 306 set aside the judgment of the Bombay High Court, which had interfered with an Arbitral Award interpreting a clause of the contract pertaining to the shifting of the burden of Service Tax. In this case, the parties had entered into a contract wherein the contractor, who was the service provider, was to bear the entire Service Tax amount.
    In the absence of the contract, the service provider would have collected such tax from the service recipient and paid it to the Government Treasury. However, due to the contractual shifting of the burden, the service recipient in the instant case deducted the Service Tax component from the payment of consideration. This resulted in the service provider invoking arbitration against the service recipient, wherein the Arbitrator held that, as per the contractual terms between the parties, the service recipient was correct in deducting the payments of Service Tax, as the burden was on the service provider to bear it. Upon challenge before the Bombay High Court, the Arbitral Award was interfered with and set aside; upon further appeal, the Supreme Court held that the Arbitrator had interpreted the contract correctly and that the Bombay High Court's interference with the Arbitral Award was unjustified. The Delhi High Court in Spectrum Power Generation Limited v. Gail (India) Limited, (2022) SCC OnLine Del 4262 was faced with a dispute arising out of a Gas Sale Agreement, wherein the petitioner company had invoked arbitration after failed attempts at conciliation and had filed a petition u/s 11(6) of the Arbitration and Conciliation Act, 1996 before the Delhi High Court for the appointment of an Arbitrator. The respondent company's argument was that the dispute was non-arbitrable since it pertained to a contractual shift of the burden of GST and Value Added Tax (VAT) on gas. However, the Delhi High Court held that such disputes were arbitrable and allowed the petition, resulting in the appointment of an Arbitrator u/s 11(6) of the Arbitration and Conciliation Act, 1996. The Bombay High Court in Angerlehner Structural and Civil Engineering Company v. Municipal Corporation of Greater Bombay, (2022) 103 GSTR 336, in an Arbitration Execution Application, was also faced with the question whether there was a contractual shifting of the burden of taxes between the parties. The Court held that no such contractual arrangement existed and that the withholding of GST by the recipient was unjustified; accordingly, the recipient was directed to pay the GST amount, with interest, to the supplier, who in turn would deposit it in the Government Treasury. Therefore, the legal principle which emerges from the aforesaid judgments and discussions is that contractual arrangements for shifting the burden of tax are valid, and disputes in respect of the same are arbitrable as per the arbitration agreement in the contract.

    Reimbursement and Tax-Sharing Arrangements

    Parties may also enter into contractual arrangements pertaining to reimbursement of GST, or into GST-sharing arrangements. As with the contractual shifting of the burden of taxes, the legal person chargeable to tax as per the charging mechanism will have to pay GST, and in case of a dispute pertaining to contractual clauses on reimbursement of GST or tax-sharing arrangements, arbitration can be invoked, as those would be arbitrable disputes. The Delhi High Court in Indian Railway Catering & Tourism Corporation (IRCTC) Ltd. v. Deepak & Co., (2022) 104 GSTR 475, inter alia, upheld the Award passed by the Arbitrator which granted reimbursement of GST with interest. Although the reasoning of the Arbitrator was upheld on the basis of contractual interpretation, the judgment is also indicative of the fact that reimbursement of GST would be an arbitrable dispute as a contractual right in personam.
    Indemnification of the Recipient by the Supplier for Default in Payment of GST by the Supplier

    There is an upsurge in disputes pertaining to input tax credit under GST arising because the supplier has not paid tax to the Government Treasury. In the normal chain of transactions, the recipient of goods or services (the purchaser) pays the consideration amount as well as the amount of GST charged in the tax invoice raised by the supplier (the seller), and the supplier is liable to pay the GST so collected to the Government Treasury. However, in many cases the supplier, despite having collected tax from the recipient, does not deposit it in the Government Treasury, resulting in recovery action being taken against the supplier as well as the recipient. Even after having discharged its obligations, the recipient is faced with difficulties due to the inaction of the supplier, resulting in the ineligibility of input tax credit for the recipient in pursuance of Section 16(2)(c) of the CGST Act, 2017. This, of course, does not apply to instances where the supplier and the recipient act in collusion to defraud the Government, but to cases where the recipient was under the bona fide belief that its supplier was a genuine dealer and, despite the consideration and the GST amount having been paid in full and on time by the recipient, the supplier did not deposit GST in the Government Treasury. Parties may choose to incorporate clauses in their contracts providing that, contingent on the recipient purchaser facing any difficulties from the GST Department due to non-payment of GST in the Government Treasury by the supplier, the recipient will be entitled to be indemnified for the demand created against it by the GST Department.
    Under normal circumstances, even after paying the GST amount in full to the supplier for depositing in the Government Treasury, due to the inaction or non-compliance of the supplier, the recipient is saddled with having to reverse input tax credit along with interest at the rate of 24% u/s 50(3) of the CGST Act, 2017, and with penalty u/s 122 r.w.s. 73 or 74 of the CGST Act, 2017, as the case may be. This is why a contingent contractual clause for indemnity can aid the recipient in being indemnified for the input tax credit reversal, interest and penalty amounts suffered by it due to the inaction and non-compliance of the supplier in depositing tax to the Government Treasury. It is noteworthy that, since the dispute in this respect would be regarding indemnification arising out of a contingent contract between the parties, such a dispute would be arbitrable.

    Certain supplies under GST are treated as deemed exports. When a supplier makes a supply of goods to a recipient registered with an Export Promotion Council or a Commodity Board recognised by the Department of Commerce, including Export Oriented Units, such supplies are treated as deemed exports even though the goods do not leave the territory of India.
    Additionally, for a supply to be treated as a deemed export under GST, several conditions must be met:

    • the goods must be exported by the recipient (registered with an Export Promotion Council or a Commodity Board recognised by the Department of Commerce, including Export Oriented Units) to a place outside the territory of India within 90 days of issuance of the tax invoice by the supplier;
    • the tax invoice issued must contain the GSTIN of the supplier;
    • the shipping bill or the bill of export must contain the tax invoice number;
    • the recipient must transport the goods directly from the port, inland container depot, airport, land customs station or a registered warehouse from where the goods shall be directly exported; and
    • copies of the shipping bill or bill of export, export manifest, tax invoice and export report must be provided to the supplier as well as the jurisdictional officer.

    The benefit of a transaction being treated as a deemed export under GST is that the supplier pays tax at a concessional rate after collecting such concessional tax amount from the recipient. This concessional rate is provided to deemed export supplies since the transaction is made in the course and furtherance of export, which ultimately generates valuable foreign currency; therefore, no taxes should be exported in the entire chain of export. Coming to the arbitrability perspective, since there are onerous conditions to be fulfilled by the recipient, in case of non-compliance with any of these conditions it is the supplier that faces action from the GST Department, wherein tax at the full rate is demanded along with interest and penalty. This is capable of causing significant difficulties for suppliers in deemed export transactions.
    Where the recipient does not export the goods within 90 days of issuance of the tax invoice by the supplier, or fails to comply with the conditions regarding submission of export-related documents to its jurisdictional officer, then, in the presence of a contractual arrangement mandating the aforesaid requirements, the supplier would be entitled to invoke arbitration alleging breach of the contractual clauses. This would enable the supplier to recover the tax at the full rate, along with the interest and penalty paid by it against the demand created by the GST Department due to the non-fulfilment of the conditions of deemed export by the recipient.

    Conclusion

    There are manifold possibilities of disputes arising out of contractual arrangements pertaining to GST, and only the most common forms of such disputes have been discussed in the present article. It is evident from the aforesaid discussion that disputes arising out of contractual arrangements pertaining to GST are arbitrable, and that arbitrating such disputes would significantly assist parties in avoiding payment of tax, interest and penalty liabilities from their own pockets due to the default of the other party, as the aggrieved party will be able to invoke arbitration to recover the said amounts from the defaulting party.
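For illustration only, the recipient's exposure discussed in this article (reversal of input tax credit with interest at 24% per annum u/s 50(3) of the CGST Act, 2017, plus any penalty levied) reduces to simple arithmetic. The sketch below assumes simple (non-compounding) interest and uses invented figures; it is not legal or accounting advice.

```python
# Illustrative sketch of the recipient's exposure when a supplier
# defaults: reversed input tax credit + interest at 24% p.a.
# (simple interest assumed here) + any penalty levied.

def itc_reversal_exposure(itc_amount: float, days_elapsed: float,
                          penalty: float = 0.0,
                          annual_interest_rate: float = 0.24) -> float:
    """Total outflow: reversed ITC + simple interest + penalty."""
    interest = round(itc_amount * annual_interest_rate * days_elapsed / 365, 2)
    return itc_amount + interest + penalty

# e.g. 1,00,000 of ITC reversed after one year, with no penalty:
# principal 1,00,000 + interest 24,000
print(itc_reversal_exposure(100_000, 365))
```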

  • Social Media to Recommendation Media: AI & Law Design Perspectives

    In 2022, if we examine the interconnected role of the design behind our mainstream social media applications and the algorithms used to run them, a new trend has become quite real, one which raises intriguing legal questions in areas such as competition law, digital accessibility and technology policy. Social media influencers, content creators and even technology geeks have noticed that various social media applications, be it Instagram or Twitter or any other, now behave as recommendation media applications. The 10-second video trends promoted by TikTok, for example, promoted algorithmic tendencies of recommending content matching certain favourable parameters, thereby giving a hard time to YouTube and Instagram. Even Spotify has been affected by these trends, making recommendation media the latest version of social media. In this article, the legal and policy challenges around the transition from social media to recommendation media are discussed. The endeavour behind the article is to declutter the tendency of the algorithmic activities behind the rise of recommendation media, and to assess how legal dilemmas may arise.

    The Emergence of Recommendation Media

    Let us first understand social media in brief terms. It is a digital medium through which users of the platform "socialise" with each other. We may also say that social media has two important features which make it characteristically important: the human element of the engagement of users (who are data subjects), and the technology element of the platform itself - the UI/UX, the code, the algorithms and even the stakeholders involved in the life cycle and maintenance of the platform.
    The relationship between the technology involved and the human data subject defines the responsible and explainable features of the social media technology as a whole, while we also see the emergence of other forms of incidence which could relate to the platform and its social, political, economic and other relevant forms of use. Now, there are similarities among social media platforms in many ways: how they are useful, how they affect the civil liberties of their own users, how they put their algorithmic infrastructure to use to moderate user content, and many others. As we have seen over time, the use of algorithms on social mediums, especially mainstream ones such as Twitter, Instagram, LinkedIn, Facebook and others, did drive and create a sphere of both discourse and private censorship. However, the way algorithms function and shape social media discourse has surely changed. TikTok is quite an important driver of this trend as well, considering that the app, by introducing TikTok Music, would surely affect the way Spotify controls a significant place in the market. The rise of recommendation media, however, isn't just driven by the "TikTok Effect" as we know it. Due to developments in the United States as far as their domestic issues are concerned, private censorship and algorithm-driven discourse on social media platforms have quite affected international discourse and content creation. The self-regulation approaches of the big technology (FAAMG) companies affect the knowledge and information economies of the Global South, where governments in Asia and Africa are questioning the lack of transparency in such self-regulation policies, such as leadership hierarchies and community standards. This has led existing players to promote the concept of recommendation media, where parameters rule visibility. Alongside this development, the emergence of alternative means of digital media became possible, starting from the United States.
    Substack, Revue and even Clubhouse represented those "alternatives" as we know them. Michael Mignano explains how recommendation media actually works in an article entitled The End of Social Media:

    "In recommendation media, content is not distributed to networks of connected people as the primary means of distribution. Instead, the main mechanism for the distribution of content is through opaque, platform-defined algorithms that favor maximum attention and engagement from consumers. The exact type of attention these recommendations seek is always defined by the platform and often tailored specifically to the user who is consuming content. For example, if the platform determines that someone loves movies, that person will likely see a lot of movie related content because that's what captures that person's attention best. This means platforms can also decide what consumers won't see, such as problematic or polarizing content. It's ultimately up to the platform to decide what type of content gets recommended, not the social graph of the person producing the content. In contrast to social media, recommendation media is not a competition based on popularity; instead, it is a competition based on the absolute best content. Through this lens, it's no wonder why Kylie Jenner opposes this change; her more than 360 million followers are simply worth less in a version of media dominated by algorithms and not followers."

    Sam Lessin explains this phenomenon of recommendation as a cycle of content marketing, as per the description in his tweet. Content creators are now stuck in a different kind of loop, one which, according to Sam, might lead to what he calls Stage 5 of digital entertainment and content (which may or may not apply as much to knowledge economics).
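Mignano's contrast between the two distribution models can be caricatured in a few lines of code: a social feed ranks content by the poster's follower graph, while a recommendation feed ranks purely by predicted engagement, regardless of who posted. This is a toy sketch for illustration only; the field names and scores are invented, and no platform's actual ranking algorithm looks like this.

```python
# Toy contrast between the two ranking regimes described above.
# Social media: distribution follows the poster's follower graph.
# Recommendation media: distribution follows predicted engagement.
# All fields and numbers are invented for illustration.

posts = [
    {"id": "celebrity-post",       "author_followers": 360_000_000, "predicted_engagement": 0.02},
    {"id": "unknown-creator-clip", "author_followers": 120,         "predicted_engagement": 0.91},
]

# Rank by who you follow (social graph proxy) vs. by what the
# platform predicts will hold your attention.
social_feed = sorted(posts, key=lambda p: p["author_followers"], reverse=True)
recommendation_feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in social_feed])          # the celebrity surfaces first
print([p["id"] for p in recommendation_feed])  # the unknown creator surfaces first
```

The point of the sketch is Mignano's: under the second ordering, a large follower count contributes nothing to distribution.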
Now, algorithms take the larger helm to shape discourse and content-driven reach for users: algorithmically sourced content takes over human content, followed by personalised generated content competing with all facets of algorithmically sourced content. It is important to note that this cycle could remain a theoretical guess and might not happen soon. However, the cycle is worth understanding for the special repercussions recommendation media could have on how digital media transforms. Ethical and Economic Implications of Recommendation Mediums To understand the ethical repercussions behind the purpose and use of recommendation mediums, it is necessary to understand their economics, in some way. Instagram is a reasonable example. To compete with TikTok’s 10-minute videos, Instagram came up with Instagram Reels, which has created an interesting competitive streak against TikTok. As of now, Instagram has to make some choices in shaping its own platform, since Meta (or FB platforms in general) more or less has an interface problem, not an algorithm problem. Here is an excerpt from a screenshot of a tweet by Sam Lessin: I saw someone recently complaining that Facebook was recommending to them…a very crass but probably pretty hilarious video. Their indignant response [was that] “the ranking must be broken.” Here is the thing: the ranking probably isn’t broken. He probably would love that video, but the fact that in order to engage with it he would have to go proactively click makes him feel bad. He doesn’t want to see himself as the type of person that clicks on things like that, even if he would enjoy it. 
This is the brilliance of TikTok and Facebook/Instagram’s challenge: TikTok’s interface eliminates the key problem of what people want to view themselves as wanting to follow/see versus what they actually want to see…it isn’t really about some big algorithm upgrade, it is about releasing emotional inner tension for people who show up to be entertained. There are some ontological changes that recommendation mediums provide to content creators and users which cannot be ignored. Those important changes are described as follows: Recommendation media creates a vertical hierarchy of rankings for any digital post on the platform, while horizontal reach is completely up to the user. Since algorithms now drive content, vertical reach through endlessly scrolling content is the new normal. Even platforms like YouTube and Twitter are mainstreaming that in their own league, be it YouTube Shorts, Revue or Twitter Communities. It enables a user to mainstream their content by contributing to multiple flows of content escalation, through any parameter possible. TikTok shows, as an example, that it could be a 10-second soundtrack, which Instagram for sure resembles as well. However, there may be other subtle aspects as well, including the graphics involved, the caption styling, or anything else. While we know that social mediums promote a sense of monoculture in action, which has some economic imprints, recommendation mediums enforce monocultural trends using algorithms, which also raises several IP concerns driven by algorithmic choices and the adaptivity of expression which any digital content may have. In some aspects, it might simplify IP (mostly copyright) issues, but closures do not really seem to happen. Recommendation mediums, unlike social mediums, do not drive an organic flow of discourse, since they are algorithmically driven. 
It means that the flow of content expression is going to be driven by the recommendation algorithms, which validate or invalidate the content flow. We cannot deny that technology companies have internal policies and strategic approaches governing how their algorithms make these choices. However, at some point, even they simply cannot control the trends. This statement by Mark Zuckerberg about News Feed on Facebook explains the problem: We really messed this one up. When we launched News Feed and Mini-Feed we were trying to provide you with a stream of information about your social world. Instead, we did a bad job of explaining what the new features were and an even worse job of giving you control of them. I'd like to try to correct those errors now. When I made Facebook two years ago my goal was to help people understand what was going on in their world a little better. I wanted to create an environment where people could share whatever information they wanted, but also have control over whom they shared that information with. I think a lot of the success we've seen is because of these basic principles. We made the site so that all of our members are a part of smaller networks like schools, companies or regions, so you can only see the profiles of people who are in your networks and your friends. We did this to make sure you could share information with the people you care about. This is the same reason we have built extensive privacy settings — to give you even more control over who you share your information with. Somehow we missed this point with News Feed and Mini-Feed and we didn't build in the proper privacy controls right away. This was a big mistake on our part, and I'm sorry for it. But apologizing isn't enough. I wanted to make sure we did something about it, and quickly. 
Now, it is necessary to understand that the perspective of being on any social medium, unlike online editorial publications, was that most of these mainstream platforms were at least horizontal in vogue and reach. Users, be they individuals, businesses or even governments, could act in a horizontal fashion and fathom the organic reach of digital content. Interestingly, Instagram now has three choices to make as it shapes its own platform: Shift towards ever more immersive mediums (for example, text to video to 3D to VR) The increasing and penetrable use of artificial intelligence (from AI rankings and recommendations to mere generation) Change in interaction models from user-directed to computer-controlled (from clicks and scrolls to autoplays) This could be an inevitable choice for many digital content platforms, as well as for those social mediums which could emerge in the near future. So, yes - there could be ethical problems, which stem from the classical questions of transparency, explainability and responsibility of algorithms. Earlier, the black box problem and the lack of transparency largely drove how AI estimates data subjects and their choices online. Now, it is becoming clearer that the dynamics of expression and reach are going to change in fundamental ways. To Conclude, Some Legal Dilemmas To conclude, there could be some issues with the rise of recommendation mediums, many of which are obvious, with some being fresh problems: Hostage of Expression and Speech: When algorithms drive content, there could be allegations of curbing freedom of speech and expression, which may be countered by justifying the self-regulation policies and explaining some aspects of algorithm-driven decisions made to moderate and even recommend/invalidate any digital content. These issues have long existed in the Web2 sphere for FAAMG platforms, especially in the US. 
Solutions have been proposed: there should be proper oversight, audits and compliance must be strengthened, regulators must be sensible in shaping public interests and concerns, and means of ADR may be promoted to address subtle and edgy legal disputes. However, recommendation mediums do something quite explicit, which is placing algorithms at the heart of the content and expression flow on their platforms. Competition/Antitrust and Corporate Governance Issues: While algorithms driving content and physical realities could be an important dilemma, and the sectoral implications of algorithmic activities and operations within mainstream digital platforms have been recognised, a real lack of research linking digital realities with physical realities in legal terms has made it hard to resolve the issues which already exist in the Web2 sphere. Yes, recommendation mediums may affect markets and increase their fragility, leading to a pile of competition law issues. Has it become more certain to assess the problems and conclude what legal problems could come up in market economies? It has become easier to do so, because issues of corporate governance, their horizontal impact, and the economics of regulation could clearly come on the radar. However, nation-states must approach competition law differently, because they fall short in properly establishing the sectoral implications of algorithmic activities and operations. India is surely an example: to address Amazon India, it requires specific amendments to the Competition Act, 2002, to promote ex-ante regulation of digital markets. For now, even if we take the European Union’s AI Act as a pivotal reference, we can at least conclude that sectoral regulation, coupled with decentralised approaches to promote the ethics of responsible artificial intelligence, would surely help us in demystifying the challenges that recommendation mediums will bring up in future. 
Governments will surely attempt to ask for audits and ensure that recommendation mediums, as intermediaries, comply with the applicable legal schemes. However, that lack of clarity is compounded by a considerable lack of legal acumen in governing digital markets at all, be it in the US or an emerging economy like India.

  • The European Union Artificial Intelligence Act: At a Glance

    The 27-nation bloc has introduced the first AI regulations in the world, with a focus on limiting dangerous but narrowly targeted applications. Lately we have witnessed the increased role of AI in our day-to-day lives, and it becomes important to regulate AI models to ensure the integrity and security of nations. Chatbots and other general-purpose AI systems received very little attention before the arrival of ChatGPT, which reinforced the importance of regulating such models before they create turbulence in the world economy. The EU Commission published a proposal for an EU Artificial Intelligence Act back in April 2021, which provoked a heated debate in the EU Parliament amongst political parties, stakeholders, and EU Member States, leading to thousands of amendment proposals. The EU Parliament has approved the passage of the AI Act, which definitely evokes issues of implementation with respect to the AI legislation. In the European Parliament, the provisional AI Act would need to be approved by the joint committee, then debated and voted on by the full Parliament, after which the AI Act is adopted into law. The objectives of the European Union Artificial Intelligence Act are summarised as follows: address risks specifically created by AI applications; propose a list of high-risk applications; set clear requirements for AI systems for high-risk applications; define specific obligations for AI users and providers of high-risk applications; propose a conformity assessment before the AI system is put into service or placed on the market; propose enforcement after such an AI system is placed on the market; propose a governance structure at European and national level. Defining AI The definitions offered by the participating governments are summarised in the FCAI report. Despite the fact that there is "no single definition" of artificial intelligence, many efforts have been made in that direction. 
Many attempts have been made to define the term, as the definition will determine the scope of the legislation. It also has to strike a balance: too narrow a definition excludes certain types of AI that need regulation, while too broad a definition risks sweeping up common algorithmic systems that do not produce the relevant types of risk or harm. The definition in the AI Act is, however, the first definition of AI for regulatory purposes. Earlier definitions of AI appeared in frameworks, guidelines, or appropriations language. The definition that is finally established in the AI Act is likely to serve as a benchmark for AI policies in other nations, fostering worldwide consensus. According to Article 3(1) of the AI Act, an AI system is “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Risk-based approach to regulate AI A "proportionate" risk-based approach is promised by the AI Act, which imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. The AI Act divides risk into four categories: unacceptable risk, high risk, limited risk, and low risk. These categories are targeted at particular industries and applications. One important topic under discussion by the Parliament and the Council will be the regulation and classification of applications at the higher levels, specifically those deemed unacceptably risky, such as social scoring, or high-risk, such as AI interaction with children in the context of personal development or personalised education. The EU AI Act lays out general guidelines for the creation, commercialisation, and application of AI-driven systems, products, and services on EU soil. 
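The four-tier structure can be sketched as a simple lookup. The example use cases below are illustrative simplifications for demonstration, not a reproduction of the Act's annexes, which define the authoritative lists:

```python
# Illustrative mapping of the AI Act's four risk tiers to obligations.
TIER_OBLIGATIONS = {
    "unacceptable": "prohibited outright",
    "high":         "pre-market conformity assessment and ongoing oversight",
    "limited":      "transparency duties (e.g., disclose that a chatbot is AI)",
    "low":          "largely unregulated; voluntary codes of conduct",
}

# Hypothetical example classifications -- NOT the Act's actual annex lists.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities":  "unacceptable",
    "CV-screening software for recruitment": "high",
    "customer-service chatbot":              "limited",
    "spam filter":                           "low",
}

def obligations_for(use_case):
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        return "unclassified - a risk assessment would be needed"
    return f"{tier}: {TIER_OBLIGATIONS[tier]}"

print(obligations_for("social scoring by public authorities"))
```

The point of the pyramid is exactly this asymmetry: the classification, not the technology itself, determines the compliance burden.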
The proposed regulation outlines fundamental guidelines for artificial intelligence that are relevant to all fields. Through a required CE-marking process (CE marking indicates that a product has been assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements), it establishes requirements for the certification of high-risk AI systems. This pre-market compliance regime also applies to datasets used for machine learning training, testing, and validation in order to guarantee equitable results. The Act aims to formalise the high requirements of the EU's trustworthy AI paradigm, which mandates that AI must be robust in terms of law, ethics, and technology while upholding democratic principles, human rights, and the rule of law. If we talk about India, the Companies Act 2013 lays down the compliance requirements that a company must meet, but AI models and their providers do not currently come under its ambit. India is not planning to develop AI regulatory plans at this point of time, but taking a cue from the EU legislation, it can ensure strict compliance measures for upcoming players in the industry. This risk-based pyramid (Figure 1) is combined with a contemporary, layered enforcement mechanism in the draft Artificial Intelligence Act. This implies, among other things, that applications with a low risk are subject to a laxer legal framework, while those posing an unacceptable risk are prohibited. As risk rises between these two ends of the spectrum, rules become stricter, ranging from non-binding self-regulatory soft-law impact assessments coupled with codes of conduct, up to strict, externally assessed compliance requirements throughout the life cycle of the application. 
Ban on the use of facial biometrics in law enforcement Some Member States want to exclude from the AI Regulation any use of AI applications for national security purposes (the proposals exclude AI systems developed or used “exclusively” for military purposes). Germany has recently argued for ruling out remote real-time biometric identification in public spaces but allowing retrospective identification (e.g., during the evaluation of evidence), and asks for an explicit ban on the use of AI systems substituting human judges, for risk assessments by law enforcement authorities, and for systematic surveillance and monitoring of employee performance. AI-related revision of the EU Product Liability Directive (PLD) In the EU, manufacturers are subject to strict civil law liability under the PLD for damages resulting from defective products, regardless of negligence. To integrate new product categories arising from digital technologies, like AI, a modification was required. The PLD specifies the conditions under which a product will be presumed to be "defective" for the purposes of a claim for damages, including the presumption of a causal link if the product is proven to be defective and the damage is ordinarily consistent with that defect. With regard to AI systems, the revision of the PLD aims to clarify that: AI systems and AI-enabled goods are considered “products” and are thus covered by the PLD; and when AI systems are defective and cause damage to property, physical harm or data loss, the damaged party can seek no-fault compensation from the provider of the AI system or from a manufacturer integrating the system into another product. 
providers of software and digital services affecting the functionality of products can be held liable in the same way as hardware manufacturers; and manufacturers can be held liable for subsequent changes made to products already placed on the market, e.g., by software updates or machine learning. In the Indian context, the Consumer Protection Act provides for product liability, marking an end to the buyer-beware doctrine and introducing seller-beware as the new doctrine governing the Act. Section 84 of the Act enumerates the situations in which a product manufacturer shall be liable in a claim for compensation under a product liability action for harm caused by a defective product it manufactured. But this does not currently apply to AI models running in India, and keeping future needs in mind, we must ensure provisions for the protection of consumers on a priority basis. Impact on Businesses AI has enormous potential for progress in both technology and society. It is transforming how businesses produce value across a range of sectors, including healthcare, mining, and financial services. Companies must handle the risks associated with the technology if they want to use AI to innovate at the rate necessary to stay competitive and maximise the return on their AI investments. Businesses that are experiencing the greatest benefits from AI are much more likely to say that they actively manage risk than those whose outcomes are less promising. The Act provides for fines of up to €30 million or 6 percent of global revenue, making penalties even heftier than those under the GDPR. The use of prohibited systems and the violation of the data-governance provisions when using high-risk systems will incur the largest potential fines. 
All other violations are subject to a lower maximum of €20 million or 4 percent of global revenue, and providing incorrect or misleading information to authorities will carry a maximum penalty of €10 million or 2 percent of global revenue. Although enforcement rests with member states, as is the case for the GDPR, it is expected that the penalties will be phased in, with the initial enforcement efforts concentrating on those who are not attempting to comply with the regulation. The regulation would have extraterritorial reach, meaning that any AI system providing output within the European Union would be subject to it, regardless of where the provider or user is located. Individuals or companies located within the European Union, placing an AI system on the market in the European Union, or using an AI system within the European Union would also be subject to the regulation. Endnote The unique legal-ethical framework for AI expands the way of thinking about regulating the Fourth Industrial Revolution (4IR), which includes the arrival of cutting-edge technology in the form of artificial intelligence, and applying the proposed laws will be a completely new experience. From the first line of code, awareness is necessary for responsible, trustworthy AI. The future of our society is being shaped by the way we develop our technologies. Fundamental rights and democratic principles are central to this vision. AI impact and conformity evaluations, best practices, technological roadmaps, and codes of conduct are essential tools to help with this awareness process. These tools are used to monitor, validate, and benchmark AI systems by inclusive, multidisciplinary teams. Ex-ante and life-cycle audits will be everything. The new European rules will forever change the way AI is built. Not just the EU: in the coming days, other countries too will need to set up a regulatory framework on AI, and the EU AI Act would definitely guide them.
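The penalty tiers described in this article follow a common pattern: the cap is the higher of a fixed euro amount and a share of worldwide turnover. A quick sketch of that arithmetic (the turnover figures below are made-up examples, not real companies):

```python
# Penalty caps in the draft AI Act tiers: the HIGHER of a fixed euro amount
# and a percentage of worldwide annual turnover.
def max_fine(turnover_eur, fixed_cap_eur, pct):
    # pct is given in whole percent so the arithmetic stays in integers
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Hypothetical company with EUR 2bn turnover, top tier (EUR 30M or 6%):
print(max_fine(2_000_000_000, 30_000_000, 6))   # 6% dominates: 120000000

# Smaller firm with EUR 100M turnover: the EUR 30M fixed cap dominates.
print(max_fine(100_000_000, 30_000_000, 6))     # 30000000
```

The two-pronged cap is what makes the regime bite for large firms: for any company with turnover above €500 million, the percentage prong exceeds the €30 million floor at the top tier.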

  • AI Regulation and the Future of Work & Innovation

    Please note: this article is a long-read. The future of work and innovation are both among the most sensitive areas of concern at a time when the fourth industrial revolution is happening, despite a gruesome pandemic. With the future of work, societies and governments fear which jobs would exist and which would not. Artificial intelligence and Web3 technologies create a similar perception and, hence, fear over such dilemmas. Fairly, the focal point of this fear is not accurate or astute enough. The narrative across markets is about Generative AI and automation taking over human labour and information-related jobs. In fact, a hilarious update has come up that even Generative AI prompting is now considered "a kind of a profession". Now, when one looks at the future of innovation, of course, it is not necessary that every product, system or service resembles a use case that disrupts jobs and businesses in that way. In fact, marketing technology hype could be considered "business friendly". However, it does not work, and it fails to promote innovative practices in the technology industry, especially the AI industry. In this article, I have offered a regulation-centric perspective on certain trends related to the future of work and the future of innovation with the use of artificial intelligence technologies in the present times. The article also covers the possibility of Artificial General Intelligence affecting the Future of Work and Innovation. The Future of Work and the Future of Innovation The Future of Work and the Future of Innovation are two closely related concepts that are significantly impacted by the advancement of Artificial Intelligence (AI). The Future of Work refers to the changing nature of employment and work in the context of technological advancements, including AI. It encompasses the evolving skills required for jobs, the rise of automation, the growing prevalence of remote work, and the impact of AI on job displacement and creation. 
AI has already begun to disrupt certain industries, such as manufacturing and transportation, by automating routine and repetitive tasks. While this has the potential to increase efficiency and productivity, it also raises concerns about job displacement and the need for reskilling and upskilling for the workforce. In the future, AI is likely to have an increased if not significant impact on the job market. Certain professions, such as those that require analytical thinking, creativity, and emotional intelligence, are expected to be in high demand, while other jobs that are easily automated may be at risk of disappearing. It's important to note that the impact of AI on the job market is complex and will vary depending on factors such as industry, geographic location, and job type. It is more about how humans become more skilled and up-to-date. The Future of Innovation refers to the new opportunities and possibilities for creating and advancing technology with the help of AI. AI has the potential to revolutionize many fields, from healthcare to transportation, by enabling more efficient and effective decision-making and automation. AI can be used to analyze vast amounts of data, identify patterns and insights, and provide predictions and recommendations. This can be used to optimize business processes, enhance product development, and improve customer experiences. Additionally, AI can be used to solve complex problems and accelerate scientific research, leading to new discoveries and innovations. However, it's important to note that AI is not a silver bullet and has its limitations. AI algorithms are only as good as the data they are trained on, and biases and errors can be introduced into the system. Additionally, AI raises concerns about privacy, security, and ethical considerations that need to be carefully addressed. 
Estimating Possible "Disruptions" In Figure 2, a listing is provided which explains, from a regulatory standpoint, how artificial intelligence could really affect the future of work and innovation. This is not an exhaustive list, and some points may overlap between the future of work and the future of innovation. Let's discuss these important points and deconstruct the narrative and realities around them. These points are based on my insight into the AI industry and its academia in India, Western countries and even China. Job requirements will become complicated in some cases, simplified in other cases Any job requirement that is posted by an entity, a government or an individual is not reflected merely by the pay-grade or monetary compensation it offers. Money could be a factor in assessing how markets are reacting and how the employment market deserves better pay. Nevertheless, the specifics of work, and then the special requirements, explain how job requirements would change. For sure, after two industrial revolutions, the quality of life is set for a change everywhere, even as the Global South countries are trying to grow. For India and the Global South, adaptation may happen if a creative outlook towards skill education is used to focus on creating those jobs and skill sets which would stay and matter. Attrition in employment has been a problem, but it could be dealt with properly. To climb the food chain, enhancing both technical and soft skills is an undeniable must As job requirements gradually upscale their purpose, climbing the food chain is a must for people. One cannot stay limited to a 10-year-old approach of doing tasks under their work experience, since there is a chance of some real-life instances of disruption. Investing in up-skilling would be helpful. 
More technology will involve more human involvement in unimaginable areas of concern One may assume that using artificial intelligence or any disruptive tech product, system, component or service would lead to a severe decrease in human involvement. For example, let us assume that no-code tools like FlutterFlow and many more are developed. One may create a machine learning system which recommends what to code (already happening) to reduce the work of full-stack developers. However, people forget to realise that there would be additional jobs to analyse specifics and suggest relevant solutions. In any opportunity created by the use and inclusion of artificial intelligence, some after-effects won't last. However, some of them could grow and stay for some time. The fact that AI hype is promoted in a manner lacking ethical responsibility shows how poorly markets are understood. This is also why the US markets were subject to clear disruptions which could not last long, and India has been a victim of this as well, even if the proportions are not as large as those of the US. While climbing the food chain is inevitable, many at the top could go down - affecting the employment market Many (not most) of the top stakeholders in the food chain, in various aspects - jobs, businesses, freelancing, independent work, public sector involvement - would have to readjust their priorities, because this is an obvious trend to look out for. Some market changes could be quick, while some may not be that simple to ignore. Derivative products, jobs, systems, services and opportunities will come and go regularly As discussed in an article on ChatGPT and its Derivative Products, it is obvious that multiple kinds of derivative products, jobs, systems, services and opportunities will be created. They would come, rise, become hyped and may either stay or go. 
To be clear, the derivatives we are discussing here are strictly related to the use of artificial intelligence technologies that create jobs / opportunities / technological or non-technological systems of governance / products / services. Let us take Figure 3 in context. If we assume that Product A is an AI product, and based on some feedback related to Product A, three things happen: (1) a Derivative Product of Product A is created; (2) a Job or Opportunity called B is created; and (3) a Job or Opportunity called C is created - then the necessity of having such opportunities related to the production of "A" and its derivative involves the creation of two systems, E and F. Why are these systems created? Simple. They are used for handling operations related to the production, maintenance and other related tasks for Product A and its derivative. These systems could be based on AI or technology, or might not involve much technological prowess. Naturally, one of them (in this case System E), along with Job/Opportunity C, becomes a stable use case which makes sense. They are practical and encouraging. This could further inspire the creation of a Product D, if possible. The process and choice systems I have explained in the previous paragraph are, of course, a simplistic depiction of production and R&D issues. This whole process in real life could take 2-5 years or even 5-10 years, depending on how the process goes. Academic research in law and policy will remain downgraded until adaptive and sensible approaches are adopted with time Here is an excerpt from an article by Daniel Lattier for Intellectual Takeout, which explains the state of reading social science research papers in developed countries and overall: About 82 percent of articles published in the humanities are not even cited once for five years after they are published. Of those articles that are cited, only 20 percent have actually been read. 
Half of academic papers are never read by anyone other than their authors, peer reviewers, and journal editors. Another point which Daniel makes is this: Another reason is increased specialization in the modern era, which is in part due to the splitting up of universities into various disciplines and departments that each pursue their own logic. One unfortunate effect of this specialization is that the subject matter of most articles make them inaccessible to the public, and even to the overwhelming majority of professors. In fact, those who work in the law and policy professions could survive if they belong to the industry side of things. Academics across the world have, after COVID, lost the edge and appetite to write and contribute research in law, social sciences and public policy. Just because a few people are still able to do it does not justify any trend. Now, take these insights in line with the disruptions that AI may cause. If you take Generative AI, some universities across the world, including in India, have banned the use of ChatGPT and other GAN/LLM tools: According to a Hindustan Times report, the RV University ban also applies to other AI tools such as GitHub Co-Pilot and Black Box. Surprise checks will be conducted and students who are found abusing these engines will be made to redo their work on account of plagiarism. The reason this is done is not just plagiarism. The academic industry is lethargic and lacks social and intellectual mobility in law and policy - which is a global problem and not just an India problem. There might be exceptional institutions, but they are fewer than those which are not offering enough. Now, imagine that if people are not even skilled at a basic level in their areas of law and policy, then automating tasks or the algorithmic use of any work would easily make them vulnerable, and many professionals would have to upgrade their skills once they get the basics clear. 
In fact, it is governments and companies across the world who are trying hard to stay updated with the realities of the artificial intelligence market and produce stellar research, including the Government of India and some organisations in India. To counter this problem, certain things can certainly be done:

- Embrace individual mobility and brilliance by focusing on excellence-driven mobilisation.
- Keep pace and create skill-based learning; academia in India is incapable of creating skill opportunities in law and policy unless institutions like the Indian Arbitration & Mediation Council, CPC Analytics and others step up, which they fortunately do.
- Specialisation should not be used as an excuse to prevent people from learning; education could be delivered in a way that makes more people aware and skilled in a sensible and self-aware manner.
- Access to resources is a critical issue that needs to be addressed; it is absurd that AI systems have access to vast numbers of research books and works while human researchers in the Global South (and India) face discrimination and cannot access research works (and publication via Scopus-indexed journals and others has become prohibitively costly).
- Skill institutions must be created separately; they could be genuinely helpful in addressing the risks of disruptive technologies from a future-of-work perspective.
- R&D in technology and related sectors will have to be rigorous, risk-based and outcome-based.

The hype around Generative AI products, and the call to impose a six-month moratorium on AI research beyond GPT-4 or GPT-5, explains why big tech companies must not own the narrative and market of artificial intelligence research and commercialisation. Ron Miller, writing for TechCrunch, discusses the potential of the Generative AI industry to be an industry for small businesses: “Every company on the planet has a corpus of information related to their [organization]. 
Maybe it’s [customer] interactions, customer service; maybe it’s documents; maybe it’s the material that they published over the years. And ChatGPT does not have all of that and can’t do all of that.” [...] “To be clear, every company will have some sort of a custom dataset based on which they will do inference that actually gives them a unique edge that no one else can replicate. But that does not require every company to build a large language model. What it requires is [for companies to take advantage of] a language model that already exists,” he said. The statement quoted above emphasises the importance of rigorous and outcome-based Research and Development (R&D) in the technology sector. It highlights that every company possesses a unique corpus of information that can be leveraged to gain a competitive edge. This corpus of information may be customer interactions, documents, or any material published by the organization over the years. It is suggested that companies do not need to build their own large language model to leverage this corpus of information. Instead, they can take advantage of existing language models, such as ChatGPT, to gain insights and make informed decisions. The approach recommended is for companies to focus on using existing resources effectively, rather than reinventing the wheel. This can help companies save time and resources while still gaining valuable insights and improving their competitive position. However, to effectively leverage these resources, companies need to have rigorous R&D processes in place. This means focusing on outcomes and taking calculated risks to drive innovation and stay ahead of the competition. By doing so, companies can ensure that they are utilising their unique corpus of information to its fullest potential, and staying ahead in the ever-changing technology landscape. 
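The approach Miller describes - pointing an existing language model at a company's own corpus rather than training a new one - can be sketched as a simple retrieve-then-prompt pipeline. Everything below is a hypothetical illustration: the documents are invented, the keyword-overlap retrieval is a crude stand-in for embedding-based search, and no real model API is called; the assembled prompt would simply be handed to whichever hosted model a company already licenses.

```python
# Sketch of grounding an existing language model in a company's own
# corpus: pick the most relevant internal documents, then hand them to
# a hosted model as context. Retrieval here is naive word overlap.

def tokenize(text):
    return set(text.lower().split())

def retrieve(corpus, query, k=2):
    """Rank documents by word overlap with the query (a crude stand-in
    for embedding-based retrieval) and keep the top k."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(doc) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, context_docs):
    """Assemble the text that would be sent to an existing LLM."""
    context = "\n".join(context_docs)
    return f"Answer using only this company data:\n{context}\n\nQuestion: {query}"

# Invented stand-ins for a company's private corpus.
corpus = [
    "Customer service log: refund requests rose in March.",
    "Published whitepaper: our platform supports batch exports.",
    "Internal memo: the pricing model changes in Q3.",
]

prompt = build_prompt("Why did refund requests rise?",
                      retrieve(corpus, "refund requests"))
print(prompt.splitlines()[0])  # the instruction line of the assembled prompt
```

The point of the sketch matches the quote: the competitive edge lives in the corpus and the retrieval step, not in owning the model itself.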
Here is an intriguing tweet from Pranesh Prakash, a respected technology law expert and researcher, on the impact of AI on jobs in the South Asian region (the Indian subcontinent). I find the points he raises quite compelling when we take an emerging market like India (or Bangladesh) into perspective. Here is a summary of what he refers to:

- One cannot make an accurate prognostication about how generative AI would affect jobs. In certain cases, the knowledge sector in an outsourcing destination could grow, while in others it could shrink.
- Mentally-oriented jobs (knowledge-, arts- and music-sector jobs, etc.) will be affected first, not manual labour jobs (for obvious reasons).
- The presence of Generative AI and other forms of neural net-based models could be omnipresent, diffused and sometimes dissimulated - as he says, as if it is just invisible.

All four points in the original tweet are valid. Issues related to knowledge and its epistemological and ontological links could genuinely affect mentally-oriented jobs. In some cases AI could be a real disruptor where technological mobility is helpful, while in other cases it might not be useful and could instead be responsible for mere information overload, and even epistemic trespassing (read the paper by Nathan Ballantyne). On p. 373 of that paper, Nathan makes a valid point about how narrowly analysis-driven philosophy can work counterproductively: "[Q]uestions in philosophy may become hybridized when bodies of empirical fact, experimental evidence, and empirically-driven theories are recognized to be relevant to answering those questions. As a matter of fact, the era of narrowly analysis-driven philosophy represents an anomaly within the history of philosophy." Taking a cue from Nathan's paper and Pranesh's points, it is essential to point out how Generative AI, from an epistemic and ontological aspect, could be a fragile tool. 
The risk- and ethics-based value of any generated proprietary information, and of the algorithmic activities and operations of these tools, will be subject to scrutiny. So any improbable, manipulative or dissimulated epistemic feedback one takes from such tools in decision-making not only causes hype, which generates risk, but could also affect knowledge societies and economies. Of course, the human element of responsibility is undeniable. This is why having a simple, focused and clear standard operating procedure (SOP) for using such Generative AI tools could help in assessing what impact these tools really have.

Now that we have covered some genuine concerns about the effect of AI on the future of work and innovation, it is necessary to analyse the role of artificial general intelligence. The reason I have covered the role of AGI is this: human motivation is appropriately tracked via the methods of behavioural economics, and AGI and the ethics of artificial intelligence represent a narrative around human motivation and assumed machinic motivations. The problem, however, is that most narrow AI systems we know lack explainability (beyond the usual black box problem) because how they learn is not well understood. In fact, only recently have scientists figured out, to a limited degree, how Generative AI tools learn and understand - an understanding still far short of the human brain (and thus far from realising the potential of the "Theory of Mind" in AI Ethics). Hence, through this concluding section of the article, I have addressed a simple question: "Is AGI significant enough to be considered an imperative for regulation when it comes to the future of work and the future of innovation?" I hope this section is interesting to read.

Whether AGI would Disrupt the Future of Work & Innovation

In this concluding part, I discuss the potential role of artificial general intelligence (AGI) and whether it can really affect the future of work and innovation. 
For starters, in artificial intelligence ethics we assume that Artificial General Intelligence (AGI) refers to the hypothetical ability of an artificial intelligence system to perform any intellectual task that a human can do. As per AI ethicists, AGI would be capable of learning and adapting to any environment, just as humans do. It would have the ability to reason, solve problems, make decisions, and understand complex ideas, regardless of the context or circumstances. In addition, AGI (allegedly) would be able to perform these tasks without being explicitly programmed to do so. It would be able to learn from experience and apply that knowledge to new situations, just as humans do. This ability to learn and adapt would be a crucial characteristic of AGI, as it would enable it to perform a wide range of tasks in a variety of contexts. So, in simple terms, the narrative on AGI centres on safety and risk recognition. On this aspect, Jason Crawford, writing for The Roots of Progress, refers to the 1975 Asilomar Conference and explains how risk recognition and safety measures could be developed. The excerpt from the article is insightful for understanding how such an approach works:

A famous example of this is the 1975 Asilomar conference, where genetic engineering researchers worked out safety procedures for their experiments. While the conference was being organized, for a period of about eight months, researchers voluntarily paused certain types of experiments, so that the safety procedures could be established first. When the risk mitigation is not a procedure or protocol, but a new technology, this approach is called “differential technology development” (DTD). For instance, we could create safety against pandemics by having better rapid vaccine development platforms, or by having wastewater monitoring systems that would give us early warning against new outbreaks. 
The idea of DTD is to create and deploy these types of technologies before we create more powerful genetic engineering techniques or equipment that might increase the risk of pandemics.

The idea behind DTD, then, is to proactively address potential risks associated with new technologies by prioritising the development of safety measures and strategies, thereby reducing the likelihood of harm and promoting responsible innovation. Rohit Krishnan, in "Artificial General Intelligence and how (much) to worry about it" for Strange Loop Canon, offers a full-fledged chart explaining how AGI as a risk would play out. If one reads the article and looks through this well-curated mind map, it is obvious that the risk posed by any artificial general intelligence is not simple to estimate. The chart is self-explanatory, and I would urge readers to go through this brilliant work. I would like to highlight the core argument Krishnan puts forward, which lawyers and regulators must understand if they are anxious about the hype behind artificial general intelligence. This excerpt is a long read:

We need whatever system is developed to have its own goals and to act of its own accord. ChatGPT is great, but is entirely reactive. Rightfully so, because it doesn’t really have an inner “self” with its own motivations. Can I say it doesn’t have? Maybe not. Maybe the best way to say is that it doesn’t seem to show one. But our motivations came from hundreds of millions of years of evolution, each generation of which only came to propagate itself if it had a goal it optimised towards, which included at the very least survival, and more recently the ability to gather sufficient electronic goods. AI today has no such motivation. There’s an argument that motivation is internally generated based on whatever goal function you give it, subject to capability, but it’s kind of conjectural. 
We’ve seen snippets of where the AI does things we wouldn’t expect because its goal needed it to figure out things on its own. [...] A major lack that AI of today has is that it lives in some alternate Everettian multiversal plane instead of our world. The mistakes it makes are not wrong per se, as much as belonging to a parallel universe that differs from ours. And this is understandable. It learns everything about the world from what its given, which might be text or images or something else. But all of these are highly leaky, at least in terms of what they include within the. Which means that the algos don’t quite seem to understand the reality. It gets history wrong, it gets geography wrong, it gets physics wrong, and it gets causality wrong. The author argues that the lack of motivation in current AI systems is a significant limitation, as AI lacks the same goal optimisation mechanisms that have evolved over hundreds of millions of years in biological organisms. While there is an argument that motivation is internally generated based on the goal function provided to AI, it remains conjectural. Additionally, the author notes that AI makes mistakes due to its limited understanding of reality, which could have implications for the development and regulation of AI, particularly with the potential risks associated with the development of AGI. Therefore, the narrative of responsible AI emphasises the importance of considering ethical, societal, and safety implications of AI, including the development of AGI, to ensure that the future of work and innovation is beneficial for all. Now, from a regulatory standpoint, there is a growing concern that AI tools that are based on conjecture rather than a clear set of rules or principles may pose accountability challenges. The lack of clear motivation and inner self in AI makes it difficult for regulators to hold AI systems accountable for their actions. 
As the author suggests, AI today lacks the ability to have its own motivations and goals, which are essential for humans to propagate and survive. While AI algorithms may have goal functions, these are subject to capability and may not be reliable in all scenarios. Additionally, AI's mistakes often stem from its limited understanding of reality, resulting in errors in history, geography, physics, and causality. Regulators may struggle to understand the motivation aspect behind AI tools, as they are often based on complex algorithms that are difficult to decipher. This makes it challenging to establish culpability in cases where AI tools make mistakes or cause harm. In many cases, regulators may not even be aware of the limitations of AI tools and the potential risks those limitations pose.

To conclude, an interesting approach to address such concerns, and to further understand (perhaps even to arrive at self-regulatory methods, if not measures) the fear of artificial general intelligence, is to undertake epistemic and ontological analysis in legal thinking. In the book Law 3.0, Roger Brownsword (Professor at King's College London, whom I interviewed for AI Now by the Indian Society of Artificial Intelligence and Law) discusses the ontological dilemmas that arise if technology regulation becomes technocratic (in simple terms, too automated), which justifies the need to be good at epistemic and ontological analysis:

With rapid developments in AI, machine learning, and blockchain, a question that will become increasingly important is whether (and if so, the extent to which) a community sees itself as distinguished by its commitment to governance by rule rather than by technological management. 
In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because compliance that is guaranteed by technological means compromises the context for trust – this might be the position, for example, in some business communities (where self-enforcing transactional technologies are rejected). Or, again, a community might prefer to stick with regulation by rules because rules (unlike technological measures) allow for some interpretive flexibility, or because it values public participation in setting standards and is worried that this might be more difficult if the debate were to become technocratic. [...] Law 3.0, is more than a particular technocratic mode of reasoning, it is also a state of coexistent codes and conversations. [...] Law 3.0 conversation asks whether the legal rules are fit for purpose but it also reviews in a sustained way the non-rule technological options that might be available as a more effective means of serving regulatory purposes.

In future analyses for Visual Legal Analytica, or in a VLiGTA report, perhaps such questions on developing epistemic and ontological analyses could be approached. Nevertheless, on the future of work and innovation, it can safely be concluded that disruption is not the problem; failing to understand the frugality of disruption could be. This is where careful and articulate approaches are needed to analyse whether there are real disruptions in the employment market or not. Perhaps there are legitimate corporate governance and investment law issues which could be brought under regulatory oversight, apart from the limited concerns around the "black box problem", which itself remains obscure and ungovernable without epistemic and ontological precision on the impact of narrow AI technologies.

  • The Twitter-Microsoft Legal Dispute on API Rules

Please note: this is a Policy Brief by Anukriti Upadhyay, former Research Intern at the Indian Society of Artificial Intelligence and Law.

In a three-page letter to Satya Nadella, Twitter's parent company, X Corp., stated that Microsoft had violated an agreement over its data and had declined to pay for that usage. In some cases, Microsoft had used more Twitter data than it was supposed to; it had also shared Twitter data with government agencies without permission, the letter said. In short, Twitter is trying to charge Microsoft for data that has earned Microsoft a huge amount of profit. Mr. Musk, who bought Twitter last year for $44 billion, has said that it is urgent for the company to make money and that it is near bankruptcy. Twitter has since introduced new subscription products and made other moves to gain more revenue. In March, the company also said it would charge developers more for access to its stream of tweets.

Elon Musk and Microsoft have had a bumpy relationship recently. Among other things, Mr. Musk has concerns with Microsoft over OpenAI. Musk, who helped found OpenAI in 2015, has said Microsoft, which has invested $13 billion in OpenAI, controls the start-up’s business decisions. Microsoft, of course, has disputed that characterisation. Microsoft’s Bing chatbot and OpenAI’s ChatGPT are built from what are called large language models, or LLMs, which build their skills by analysing vast amounts of data culled from across the internet.

The letter to Satya Nadella does not specify whether Twitter will take legal action against Microsoft or ask for financial compensation. It demands that Microsoft abide by Twitter’s developer agreement and examine the data use of eight of its apps. Twitter has engaged lawyers who seek a report by June on how much Twitter data the company possesses, how that data was stored and used, and when government-related organizations gained access to that data. 
Twitter’s rules prohibit the use of its data by government agencies unless the company is informed about it first. The letter adds that Twitter’s data was used in Xbox, Microsoft’s gaming system; Bing, its search engine; and several other tools for advertising and cloud computing, and demands that “the tech giant should conduct an audit to assess its use of Twitter's content.” Twitter claimed that the contract between the two parties allowed only restricted access to Twitter data, but that Microsoft breached this condition and generated abnormal profits by using Twitter’s API. Currently, there are many tools available (from Microsoft, Google, etc.) to check the performance of AI systems, but there is no regulatory oversight. That is why experts believe that companies, new and old, need to put more thought into self-regulation. This dispute has highlighted the need to keep a check on how companies utilise data to develop their AI models, and to regulate them.

Data Law and Oversight Concerns

In this race among tech giants to win at AI development, the biggest impact always falls on society. Any new development is prone to attract illegal activities that can have a drastic effect on society. Even though the Personal Data Protection Bill is yet to become law, big tech firms like Google, Meta, Amazon and various e-commerce platforms are liable to be penalised for sharing users’ data with each other if consumers flag such instances. Currently in India, under the Consumer Protection Act, 2019, the department can take action and issue directions to such firms. Since the data belongs to the consumer, if consumers feel their data is being shared among firms without their express consent, they are free to approach the consumer protection authorities under the Act. If we look at the kind of data shared between firms, any search on Google by a person leads to the same feeds being shown on Facebook. 
This means that user data is being shared by big tech firms. Where data is shared without the express consent of the users concerned, they can approach the Consumer Protection Forums. The same logic applies to the Twitter-Microsoft dispute, wherein the data used by Microsoft was put up by Twitter users on their Twitter accounts and was being used without the users' consent. If we analyse WhatsApp's data-sharing policies, for example, Meta has stated that it can share business data with Facebook; the Competition Commission of India has objected to this as a monopolistic practice, and the matter is in court. Consumers have the right to seek redressal against unfair or restrictive trade practices or unscrupulous exploitation of consumers.

Protecting personal data should be an essential imperative of any democratic republic. Once the Bill becomes law, citizens can direct all digital platforms they deal with to delete their past data. The firms concerned will then need to collect data afresh from users and clearly spell out its purpose and usage. They will be booked for data breach if they depart from the purpose for which the data was collected. Data minimisation, purpose limitation and storage limitation are hallmarks that cannot be compromised. Data minimisation means firms can collect only the absolute minimum required data. Purpose limitation allows them to use data only for the purpose for which it was acquired. Under storage limitation, once the service is delivered, firms must delete the data. With the rapid development of AI, a number of ethical issues have cropped up. 
These include:

- the potential of automation technology to give rise to job losses;
- the need to redeploy or retrain employees to keep them in jobs;
- the effect of machine interaction on human behaviour and attention;
- the need to address algorithmic bias originating from human bias in the data; and
- the security of AI systems (e.g., autonomous weapons) that can potentially cause damage.

While one cannot ignore these risks, it is worth keeping in mind that advances in AI can - for the most part - create better business and better lives for everyone. If implemented responsibly, artificial intelligence has immense and beneficial potential.

Investment and Commercial Licensing

AI has been called the electricity of the 21st century. While the uses and benefits of AI are increasing exponentially, there are challenges for businesses looking to harness this new technological advancement. Chief among them are:

- the ethical use of AI;
- legal compliance regarding AI and the data that fuels AI; and
- protection of IP rights and the appropriate allocation of ownership and use rights in the components of AI.

Businesses also need to determine whether to build AI themselves or license it from others. Several unique issues impact AI license agreements. In particular, it is important to address the following key issues: IP ownership and use rights; IP infringement; warranties, specifically performance promises; and legal compliance.

Interestingly, IP treaties simply have not caught up to AI yet. While aspects of AI components may be protectable under patents, copyrights, and trade secrets, IP laws primarily protect human creativity. Because of this focus on human creation, issues may arise under IP laws if the AI output is created by the AI solution instead of a human creator. Since IP laws do not squarely cover AI, as between an AI provider and user, contractual terms are the best way to attempt to gain the benefits of IP protections in AI license agreements. 
How Does it Affect the Twitter-Microsoft Relationship?

Considering this issue, the parties could designate certain AI components as trade secrets, and protect AI components by:

- limiting use rights;
- designating AI components as confidential information in the terms and conditions; and
- restricting use of confidential information.

They should also include assignment rights in AI evolutions from one party or the other, determine the license and use rights the parties want to establish between the provider and the user for each AI component, and clearly articulate those rights in the terms and conditions. The data sharing agreement must cover which party will provide and own the training data, prepare and own the training instructions, conduct the training, revise the algorithms during the training process, and own the resulting AI evolutions.

As for data ownership, the parties should identify the source of the data and ensure that data use complies with applicable laws and any third-party data provider requirements. Ownership and use of production data for developing AI models must be set out in the terms and conditions: which party provides, and which party owns, the production data that will be used. If the AI solution is licensed to the user on-premises (the user runs the AI solution in its own systems and environment), it is likely that the user will supply and own the production data. However, if the AI solution is cloud-based, the production data may include the data of other users. In a cloud situation, the user should specify whether the provider may use the user’s data for the benefit of the entire AI user group or solely for the user’s particular purposes. It is important to note that limiting the use of production data to one user may have unintended results: in some AI applications, using a broader set of data from multiple users may increase the AI solution’s accuracy and proficiency. 
However, counsel must weigh the benefits of permitting a broader use of data against the legal, compliance, and business considerations a user may have for limiting use of its production data. When two or more parties each contribute to the AI evolutions, the license agreement should appoint a contractual owner. The parties must then determine who will own the AI evolutions, or whether they will be jointly owned, which presents additional practical challenges.

The use of AI presents ethical issues, and organizations must consider how they will use AI and define principles and implement policies regarding its ethical use. One part of the ethical-use consideration is legal compliance, which is more challenging for AI than for traditional software or technology licensing. AI-based decisions must satisfy the same laws and regulations that apply to human decisions. AI is different from many other technologies because it can produce legal harms against people, and some of that harm might not only violate ethical norms but may also be actionable under law. It is important to address legal compliance concerns with the provider before entering into an AI license agreement, to determine which party is responsible for compliance.

Some best practices that could be adopted are proposed as follows:

- Conduct diligence on data sharing to determine if there are any legal or regulatory risk areas that merit further inquiry.
- Develop policies around data sharing, and involve the various stakeholders in the policy-making process to ensure thoughtful consideration of when it is appropriate to use the data and in what contexts.
- Implement a risk management framework that includes a system of ongoing monitoring and controls around the use of AI. 
Companies should also consider which party should obtain third-party consents for data use, given potential privacy and data security issues.

AI is transforming our world rapidly and without much oversight. Developers are free to innovate, as well as to create tremendous risk. Very soon, leading nations will need to establish treaties and global standards around the use of AI, not unlike current discussions about climate change. Governments will need to both:

- establish laws and regulations that protect ethical and productive uses of AI; and
- prohibit unethical, immoral, harmful, and unacceptable uses.

These laws and regulations will need to address some of the IP ownership, use rights, and protection issues discussed in this article. However, these commercial considerations are secondary to the overarching issues concerning the ethical and moral use of AI. In line with the increased attention on corporate responsibility and issues like diversity, sustainability, and responsibility to more than just investors, businesses that develop and use AI will need policies and guidance against which the use of AI can be assessed. These policies and guidance are worthy of board-level attention. Technology lawyers who assist clients with AI issues in these early days must monitor developments in these areas and, wherever possible, act as facilitators and leaders of thoughtful discussions regarding AI. Adopting these precautionary measures will also save companies a lot of legal cost and will ensure that data is not misused or overused.

  • A Legal Prescription on Inductive Machines in AI

Artificial intelligence is booming across industry, but questions about regulation remain, since regulation is a precaution that can put constraints on innovation. For example, a government report in Singapore highlighted the risks posed by AI but concluded that ‘it is telling that no country has introduced specific rules on criminal liability for artificial intelligence systems. Being the global first-mover on such rules may impair Singapore’s ability to attract top industry players in the field of AI[1].’ These concerns are well-founded. As in other areas of research, overly restrictive laws can stifle innovation or drive it elsewhere. Yet the failure to develop appropriate legal tools risks allowing profit-motivated actors to shape large sections of the economy around their interests, to the point that regulators will struggle to catch up. This has been particularly true in the field of information technology. For example, social media giants like Facebook monetized users’ personal data while data protection laws were still in their infancy[2]. Similarly, Uber and other first-movers in what is now termed the sharing or ‘gig’ economy exploited platform technology before rules were in place to protect workers or maintain standards. As Pedro Domingos once observed, people worry that computers will get too smart and take over the world; the real problem is that computers are too stupid and have already taken over[3].

Much of the literature on AI and the law focuses on a horizon that is either so distant that it blurs the line with science fiction, or so near that it plays catch-up with the technologies of today. That tension between presentism and hyperbole is reflected in the history of AI itself, with the term ‘AI winter’[4] coined to describe the mismatch between the promise of AI and its reality. Indeed, it was evident back in 1956 at Dartmouth, when the discipline was born. 
To fund the workshop, John McCarthy and three colleagues wrote to the Rockefeller Foundation with the following modest proposal: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 … The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made … if a carefully selected group of scientists work on it together for a summer.” Innovation in the field of AI thus began a long time ago, but there were no precautions or regulations in place to keep the use of AI in check. Most would agree that AI can be more fearsome than one might think: witness the widely reported remark attributed to the humanoid robot Sophia about taking over human beings and their existence, or the AI-run website that depicted the ‘last picture of humans’ as deeply degraded beings. As Pablo Picasso[5] is reported to have said, the new mechanical brains are useless, for they can only give you the answers they were taught. As countries around the world struggle to capitalize on the economic potential of AI while minimizing avoidable harm, a paper like this cannot hope to be the last word on the topic of regulation. But by examining the nature of the challenges, the limitations of existing tools, and some possible solutions, it hopes to ensure that we are at least asking the right questions. As in nature and physics, a vacuum does not remain unfilled for long, and a regulatory vacuum is no different. Against this backdrop, the paper "Neurons Spike Back: A Generative Communication Channel for Backpropagation" presents a new approach to training artificial neural networks that is based on an alternative communication channel for backpropagation. 
Backpropagation is the most widely used method for training neural networks; it involves the use of gradients to adjust the weights of the network. The authors propose a novel approach that uses spikes as a communication channel to carry these gradients. The paper begins by introducing the concept of spiking neural networks (SNNs) and how they differ from traditional neural networks. SNNs are modelled after the way that biological neurons communicate with each other through spikes, or action potentials. The authors propose using this communication mechanism to transmit the gradients during backpropagation. Before going further, however, we need to understand what deep learning, neural networks and deep neural networks are.

Inductive & Deductive Machines in Neural Spiking

Inductive machines are also known as unsupervised learning machines. They are used to identify patterns in data without prior knowledge of the output, typically by using a clustering algorithm to group similar data together. An example of an inductive machine is the self-organizing map (SOM). SOMs are used to create a two-dimensional representation of high-dimensional data. For example, if you have a dataset consisting of several features such as age, gender, income, and occupation, an SOM can be used to create a map of this data where similar individuals are placed close together. Deductive machines, on the other hand, are also known as supervised learning machines. They learn from labeled data and can be used to make predictions on new data. An example of a deductive machine is the multi-layer perceptron (MLP). MLPs consist of multiple layers of interconnected nodes that are used to classify data. For example, if you have a dataset consisting of images of cats and dogs, an MLP can be trained on this data to classify new images as either a cat or a dog. Neural spiking is the process of representing information using patterns of electrical activity in the neurons of the brain. 
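To make the inductive/deductive contrast concrete, here is a minimal sketch in plain Python. It is an illustration only: the unsupervised routine is a bare-bones clustering stand-in for what a SOM does (grouping similar inputs), and the supervised routine is a single perceptron standing in for a full MLP; all data and names here are hypothetical.

```python
# Illustrative sketch: "inductive" (unsupervised) vs "deductive" (supervised)
# learning. The clustering below is a crude stand-in for a SOM's grouping of
# similar inputs; the perceptron is a single-layer stand-in for an MLP.

def cluster_two_groups(points, iters=10):
    """Group 1-D points into two clusters via iteratively refined centroids."""
    c0, c1 = min(points), max(points)              # crude initial centroids
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0 = sum(g0) / len(g0)                     # recompute centroids
        c1 = sum(g1) / len(g1)
    return sorted(g0), sorted(g1)

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labeled (x1, x2, label) samples; label is 0 or 1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x1, x2, label in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred                     # update only on mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Inductive: no labels are given, structure is discovered from the data alone.
low, high = cluster_two_groups([1.0, 1.2, 0.9, 8.0, 8.3, 7.9])

# Deductive: labels are given, and a decision rule is learned from them.
w, b = train_perceptron([(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)])
```

Run on this toy data, the first routine discovers the two groups without ever seeing a label, while the perceptron only learns its decision rule because labels are supplied, which is exactly the inductive/deductive distinction drawn above.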
Inductive and deductive machines can both be used to model neural spiking, but they differ in their approach. Inductive machines can be used to identify patterns in the spiking activity of neurons without prior knowledge of the output. Deductive machines, on the other hand, can be used to predict the spiking activity of neurons based on labeled data.

How Deep Learning + Neural Networks Work

Deep learning is a subset of machine learning that utilizes artificial neural networks to learn from large amounts of data. Neural networks, in turn, are models inspired by the structure and function of the human brain. They are capable of learning and recognizing patterns in data, and can be trained to perform a wide range of tasks, from image recognition to natural language processing. At the heart of a neural network are nodes, also known as neurons, which are connected by edges or links. Each node receives input from other nodes and computes a weighted sum of those inputs, which is then passed through an activation function to produce an output. The weights of the edges between nodes are adjusted during training to optimize the performance of the network.[6] In a deep neural network, there are typically many layers of nodes, allowing the network to learn increasingly complex representations of the data. This depth is what sets deep learning apart from traditional machine learning approaches, which typically rely on shallow networks with only one or two layers. Deep learning has been applied successfully to a wide range of tasks, including computer vision, natural language processing, and speech recognition. One of the most well-known applications of deep learning is image recognition, where deep neural networks have achieved state-of-the-art performance on benchmark datasets such as ImageNet. However, deep learning also has some limitations. One of the main challenges is the need for large amounts of labeled data to train the networks effectively. 
This can be a significant barrier in areas where data is scarce or difficult to label, such as medical imaging or scientific research. Another limitation of deep learning is its tendency to overfit the training data. This means that the network can become too specialized to the specific dataset it was trained on and may not generalize well to new data. To address this, techniques such as regularization and dropout have been developed to help prevent overfitting. Despite these limitations, deep learning has had a significant impact on many areas of research and industry. In addition to its successes in computer vision and natural language processing, deep learning has also been used to make advances in drug discovery, financial forecasting, and autonomous vehicles, to name a few examples. One of the reasons for the success of deep learning is the availability of powerful hardware, such as GPUs, that can accelerate the training of neural networks. This has allowed researchers and engineers to train larger and more complex networks than ever before, and to explore new applications of deep learning. Another important factor in the success of deep learning is the availability of open-source software frameworks such as TensorFlow and PyTorch. These frameworks provide a high-level interface for building and training neural networks and have made it much easier for researchers and engineers to experiment with deep learning.

Spiking Neural Networks

A spiking neural network (SNN) is a type of computer program that tries to work like the human brain. The human brain uses tiny electrical signals called "spikes" to send information between different parts of the brain. SNNs try to do the same thing by using these spikes to send information between different parts of the network. SNNs work by having lots of small "neurons" that are connected together. These neurons can receive input from other neurons, and they send out spikes when they receive enough input. 
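The behaviour just described, neurons that accumulate input and emit a spike once they have received enough of it, can be sketched as a toy leaky integrate-and-fire (LIF) neuron in plain Python. This is a deliberately simplified illustration, not a full SNN; the threshold and leak values are arbitrary assumptions.

```python
# Toy leaky integrate-and-fire (LIF) neuron: it accumulates input over time,
# "leaks" part of its potential each step, and emits a spike (1) whenever the
# potential crosses a threshold, after which the potential resets to zero.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the spike train (list of 0/1) produced by a stream of inputs."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # fire a spike...
            potential = 0.0                      # ...and reset
        else:
            spikes.append(0)
    return spikes

# A stronger constant input produces a higher firing rate over the same
# window, which is the intuition behind rate-based encoding of information.
weak = lif_neuron([0.3] * 20)
strong = lif_neuron([0.8] * 20)
assert sum(strong) > sum(weak)
```

Feeding the neuron a stronger input yields more spikes per unit time, which is how a rate-coded SNN can carry graded information through an all-or-nothing spiking mechanism.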
The spikes are then sent to other neurons, which can cause them to send out their own spikes. SNNs can be used to do things like recognize images, control robots, and even help people control computers with their thoughts. They can also be used to study how the brain works and to build computers that work more like the brain[7]. The basic structure of an SNN consists of a set of nodes, or neurons, that are interconnected by synapses. When a neuron receives input from other neurons, it integrates that input over time and produces a spike when its activation potential reaches a certain threshold. This spike is then transmitted to other neurons in the network via the synapses. There are several ways to implement SNNs in practice. One common approach is to use rate-based encoding, where information is represented by the firing rate of a neuron over a certain time period. In this approach, the input to the network is first converted into a series of spikes, which are then transmitted through the network and processed by the neurons.[8] One example of an application of SNNs is in image recognition. In a traditional neural network, an image is typically represented as a set of pixel values that are fed into the network as input. In an SNN, however, the image can be represented as a series of spikes that are transmitted through the network. This can make the network more efficient and reduce the amount of data that needs to be processed. Another example of an application of SNNs is in robotics. SNNs can be used to control the movement of robots, allowing them to navigate complex environments and perform tasks such as object recognition and manipulation. By using SNNs, robots can operate more efficiently and with greater accuracy than traditional control systems. SNNs are also being explored for their potential use in brain-computer interfaces (BCIs). 
BCIs allow individuals to control computers or other devices using their brain signals, and SNNs could help improve the accuracy and speed of these systems. One challenge in implementing SNNs is the need for specialized hardware that can efficiently process and transmit spikes. This has led to the development of neuromorphic hardware, which is designed to mimic the structure and function of the brain more closely than traditional digital computers. Despite these challenges, SNNs are a promising area of research that has the potential to improve the efficiency and accuracy of a wide range of applications, from image recognition to robotics to brain-computer interfaces. As researchers continue to explore the capabilities of SNNs, we can expect to see new and innovative applications of this technology emerge in the years to come. The authors then present the results of experiments that compare their approach to traditional backpropagation methods. They demonstrate that their method achieves comparable results in terms of accuracy but with significantly lower computational cost. They also show that their method is robust to noise and can work effectively with different types of neural networks. Overall, the paper presents a compelling argument for the use of spiking neural networks as a communication channel for backpropagation. The proposed method offers potential advantages in terms of computational efficiency and noise robustness. The experiments provide evidence that the approach can be successfully applied to a range of neural network architectures.

References
[1] Penal Code Review Committee (Ministry of Home Affairs and Ministry of Law, August 2018) 29. China, for its part, included in the State Council’s AI development
[2] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs 2019)
[3] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books 2015)
[4] ‘AI is whatever hasn’t been done yet.’ See Douglas R Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books 1979) 601.
[5] William Fifield, ‘Pablo Picasso: A Composite Interview’ (1964)
[6] NeuronsSpikeBack.pdf (mazieres.gitlab.io)
[7] https://analyticsindiamag.com/a-tutorial-on-spiking-neural-networks-for-beginners/
[8] https://cnvrg.io/spiking-neural-networks/

  • New Report: Deciphering Regulative Methods for Generative AI, VLiGTA-TR-002

Generative AI has become such a fashionable industry topic that we are perhaps witnessing a period of intellectual trespassing on this piece of technology. There is nothing surprising about it, since the technology itself has huge computational capabilities and represents a new form of technology portability in the age of artificial intelligence. While some investors and start-up entrepreneurs may claim that the rise of generative AI is a step towards early-stage AGI (artificial general intelligence), we believe the real picture is too complex to be side-lined or presumed. In this report, the authors have provided a wider picture of the generative AI landscape in the context of the global economy. We have adopted a classification-based approach, where we have sorted out some of the most mainstream use cases of generative AI tools and provided ontological categories for such applications. Yashudev Bansal, Research Intern, Indian Society of Artificial Intelligence and Law, has contributed to the overview of the generative AI landscape and offered valuable insights on the use of large language models for purposes such as legal management and drafting. Kapil Naresh, Founder, Juriaide, has offered profound insights and a proper dissection of various legal issues related to (1) Proprietary Issues related to Intellectual Property Law, (2) Data Quality, Privacy & Content Issues, (3) Pseudonymous Disruption and (4) Digital Public Infrastructure. It has been an honour to provide insights and an exploration of legal issues such as (1) Artificial Intelligence Hype, (2) Product-Service Classifications, (3) Unclear Derivatives & Derivatives of Derivatives, (4) Proprietary Issues related to Intellectual Property Law and (5) Digital Public Infrastructure. To conclude, it has been my pleasure to develop this report after weeks of deliberation among its authors and my VLiGTA Research Team. 
I express my special regards to Sanad Arora, Junior Research Associate at VLiGTA, and Vagish Yadav, Advocate, the High Court of Allahabad, for their moral support. You can read an overview of the report here. We are grateful to Rodney D Ryder, Founding Partner, Scriboard, for authoring a Foreword to this technical report. Anyone interested in discussing the nuances of the report can contact us at vligta@indicpacific.com. The report is now available on the VLiGTA.App: https://vligta.app/product/deciphering-regulative-methods-for-generative-ai-vligta-tr-002/

  • Lawyering in a Multi-polar World

Lawyering is not limited to litigation. Professional opportunities for legal professionals exist in diverse forms, and that diversity will only increase in the information age. In this article, let us understand the concept of a multi-polar world order in International Relations (IR), and the implications a multipolar world has for the legal profession as well as the field of law. We are already seeing some relevant trends in the modern world, which will be discussed in this article.

What is a Multi-polar World Order?

In international relations, scholars consider a multi-polar world to be the most raw and exhaustive form of realpolitik as we know it. In international politics, realpolitik refers to the political realities which shape existing domestic, intergovernmental and global institutions, be they state actors or non-state actors. Decades ago, we could not have anticipated that actors involved in money laundering would use cryptocurrencies to further their hawala transactions. Were it not for the COVID pandemic, the world might not have grasped the purpose of a digital cyberspace and the necessity of force majeure clauses in contracts. As the world is still healing from the after-effects of the pandemic, the trade impact of the US-China trade war of 2018, followed by the pandemic and the Ukraine-Russia conflict in Eastern Europe, has without doubt been scathing. Does this affect the way legal professionals work? Indeed it does. However, a deeper enquiry is needed into how the multi-polar world affects the legal field. In theories of power and influence, we often hear terms like a unipolar world, a bipolar world and so on. A multipolar world is particularly interesting, because many notions of international relations, especially those dominated by scholars of US foreign policy, have been challenged by the emergence of this concept. 
As per the diagram above, in the information age, the following trends clearly show the emergence of a multipolar world:

Legal systems across the world adapt to the disruption of different industry sectors and their sustained impact on human lives, at both domestic and transnational levels.

The concept of alliances and ententes does not materialise in a proper fashion, because the power dynamics of a multipolar world empower the smaller allies of a larger power to take distinctive (if not hostile) paths of economic, political and diplomatic engagement. This leads countries to form their own sovereign decisions and concerns, based on their own understanding and acceptance of legal systems and principles.

Many settled questions of law in various legal systems are subject to cleavage because of the case-by-case, industry-by-industry impact of digital technologies (especially Web2 technologies). In Global South countries, however, where many legal questions are not properly settled, newer questions of law emerge which further complicate or integrate the situation (depending on how regulators, judges and legislators address the problem).

Countries either align with some major powers (for example, the US or China), become non-aligned or become multi-aligned. Two key phenomena come into being: justifying strategic autonomy and interplaying strategic hedging. The categorisation is furthered as follows: if countries have been aligned and stay aligned, the legal systems already influenced by the major geopolitical players in that alliance remain in vogue with the major powers’ legal systems, norms and principles, and even in contentious areas such as commerce and trade some parity can be observed; if they have been aligned but take a distinctive approach, their legal systems start becoming compatible with those of the major powers. 
If they have been non-aligned, their legal systems stay largely neutral and unchanged by the emerging legal norms and methods across the globe (a rather unrealistic or unsustainable situation); if they have become multi-aligned, their legal systems optimise the risks attached to the sovereign decisions and approaches they agree with. In the language of international law, their state practices do not bear a hostile character, yet they hedge the risks associated with the distinctive legal and policy positions they strive to take. The question of hostility and abruptness in governments’ legal and public policy positions is hard to address, despite the fact that most such questions can be framed within the language of international law. Yet political power guides the realpolitik, especially how sovereignty is defined, which is why how conflicting legal systems resolve their differences could itself become a political question.

Multilateral institutions are subject to question and review, where the following trends are visible: they adapt with time and undergo institutional and policy transformation; or they do not adapt, rendering their components and subsidiaries toothless, without any potential or scope of action; or they are used by groups of countries to establish backdoor negotiations and make incremental use of the existing systems through small changes that (if not restore then at least) maintain the political purpose of the institution.

Since many (if not most) legal systems are subject to policy disruption, leading to a rise in grey areas of legal importance, hard laws and their notion in jurisprudence remain incomplete owing to their inflexibility. The role of soft law becomes important, as the legal and policy prescriptions it provides help out and encourage legal innovation. 
The next section discusses how lawyering and legal practice have shaped themselves in the wake of the multi-polar nature of the realpolitik, strictly within the legal profession.

Unsettled Notions, Emerging Lawyers

When pandemics or other natural disasters come, huge disruptions are to be expected. However, the change must stick with time and prove its own worth. In the legal profession, the multi-polar nature of the realpolitik had already become mainstream by 2008, yet significant changes in this realm take years to settle. One of the biggest achievements of the multi-polar world, which is still US-dominated, has been the emergence of new legal opportunities. The transformation of technology law as a field, for example, created a new set of requirements in the legal domain, especially in developed countries. In India, however, the change has not been rapid at a macro level, except perhaps in Tier-1 cities to a limited extent. Another trend which has become quite apparent is the role of the information economy. Owing to the disruptions brought by the pandemic, many law professionals who were accustomed to the “traditional” forms of work are now harnessing the potential of the digital world. Content creation may be the obvious example, but it does not end there: in-house legal work, knowledge management, product development (especially through agile management, for example as a scrum master) and other roles have opened up. Owing to the newfound importance of soft law, i.e., the self-regulatory initiatives that a company undertakes, the notion that law is top-down has in certain ways been affably challenged. The new set of legal opportunities has a consultative and exhorting character, which may dilute the centrality of courts and tribunals. That does not, however, mean that every form of alternative dispute resolution gains maximal importance. 
Every tool of ADR has its own sector-specific, stakeholder-specific importance which, if utilised well, would help governments and other stakeholders across the world bear the tremors that the multi-polar world order brings upon municipal legal systems. In short, there is no doubt that legal systems across the world are trying to adapt to the geopolitical conditions which shape the socio-economic, logistical and even administrative factors affecting polities globally. However, there is a limit to such adaptation; without understanding how much international relations has transformed as a concept, adaptation to the surroundings would leave systems and legal instruments weak and dysfunctional in many ways and cases. The next section discusses this problem.

The IR Perspective on Legal Systems: Limits of Adaptation

Legal systems represent the actuality of political thinking germinated into proper legal concepts and principles agreed upon in countries across the world, democratic or not. From a scholarly perspective, the multi-polar world order as it exists also represents the state of the “modern world” as we know it. This world order resembles a Pandora’s box: uncertainties rise with time. For example, a flawed assumption many people hold about the turbulent times in Europe, owing to the ongoing Ukraine-Russia conflict, is that the modern world, which created norms, institutions and principled regulation (at least on paper), has suffered some irreparable damage. They forget that Europe’s history repeats itself, in many ways, every century. The way the European Union (not continental Europe) has handled the situation in Eastern Europe is emblematic of how political leaders in Europe make decisions. For example, it could be a moral argument that a set of countries can simply be ejected from SWIFT to isolate them from the “global” financial system. 
However, countries in Asia, Latin America and Africa would not find this approach comfortable. Nevertheless, political visions generally have to be constructive and self-explanatory, and they shape legal decisions. Inconsistency in bearing consequences does, however, affect the constructed “rules-based” international order within the multi-polar world, which should prompt questions as to whether countries are interested in preserving the current world order at all. From a legal angle, a breakdown of trust and norms is inevitable in many domains. A simple example is the Paris Agreement of 2015. Among its parties, India has arguably been the one to abide by its GHG emission commitments so far. Yet the principle of common but differentiated responsibilities has been left astray in practice by countries in the Global North. The fragility of the rules-based international order, despite its first-principles approach to shaping international legal thinking, is shaped by asking whether consequences are foreseeable. Laws work when power is distributed and imbibed within them so that they become kinetic; geography, however, becomes the playing field, which is where jurisdictions come in. Perhaps legal systems across the world should become anti-fragile, solidifying their policy objects and becoming stable enough to govern. At a macro level, these subtleties do not generally seem visible, because any change driven by legal systems can either be incremental, owing to the exigencies, or too swift. The practical meaning of trust in a legal process, as well as in its principles, has also changed with the multipolar order. The stable way in which trust was easy to transact in legal systems has been affected, and it may never return to the old way it used to be. 
However, the importance of finding multiple legal pathways has certainly increased the chances of addressing peculiar and hard legal disputes (maybe not in every sector or domain of law, but at least in some crucial ones) in the most suitable way.

Mapping the Possibilities of Legal Innovation

Interestingly, there are immense possibilities for legal innovation arising from the nature of the multi-polar world order. As discussed in the first section of the article, countries adapt to systems in their own suitable ways. The diagram above explains some common and uncommon indicators that may show signs of legal innovation. For example, the conceptual understanding of a legal dispute does not necessarily change; yet, in the most unusual ways, legal disputes or problems sometimes transform the operability of legal tools and instruments. Another example could be where governments do not preclude stakeholders from becoming part of the process but facilitate only those stakeholders who have some purposive value. India is an impressive example, where SEBI has recently joined the account aggregator framework. To conclude, the multipolar world order of the 21st century, unlike at other times, is more evolved and, despite uncertainties, understandable. The world has matured, and interconnectedness has made engaging with the realpolitik more subtle, secure and sensible.

  • Why India Needs Mandatory Mediation

This article is co-authored by Tara Ollapally, CAMP Arbitration & Mediation Practice.

Introduction: Tend and Befriend Responses to Conflict

Conflicts are ubiquitous, unavoidable, and almost always uncomfortable. An inevitable consequence of human interaction, conflict, if managed well, can be a source of innovation, creativity, growth and meaningful relationships. Any conflict originates from differences: differences in ideas, values, or perceptions of facts. These differences, if not handled well, lead to disagreements; disagreements, if not respectfully managed, lead to disputes, which could eventually escalate into all-out conflict. If it is the escalation of differences that causes conflicts, it is inversely true that de-escalating the situation resolves conflicts more efficiently. The importance of handling a conflict at the earliest stage stems from the intrinsic link between cause and consequence.[1] The primary driver of conflict is the urge to protect something or someone deeply attached to the conflicting parties.[2] Christopher Moore offers a particularly accessible account of the different types of conflict, since resolving different types of conflict requires different approaches. He calls it the ‘Circle of Conflict’.[3] As per Moore, conflicts are divided into five types: Value Conflicts, Relationship Conflicts, Structural Conflicts, Interest Conflicts, and Data Conflicts. 
Responses to Conflict

Neurobiology research, first described by Walter Cannon in 1932, has understood the human stress response most commonly as fight, flight or freeze[4]: to get aggressive and fight, to run away from the conflict, or to freeze and take no action, hoping the situation will pass.[5] Recent research from UCLA has shed light on another common response to stress, tend and befriend[6]: building a connection between the conflicting parties, allowing for vulnerability and understanding.[7] This research shows that humans have used social relationships not only as a basic accommodation to the exigencies of life, but also as a primary resource for dealing with stressful circumstances.[8] In this article we argue that mediation, as a dispute resolution process, promotes the tend-and-befriend response. To holistically address disputes, systems must be designed to evoke this natural human response. A mature legal system that acknowledges building bridges and fostering relationships as a way our species responds to conflict will make mediation a recognised process in its dispute resolution system.

Mediation as a way to enhance the Tend and Befriend response

Formal legal systems are traditionally adversarial: conflicting parties are set up as adversaries and a determination of right and wrong is made by a neutral third person. This process triggers the fight response in conflict. Mediation, as a dispute resolution process, is designed around enhancing collaboration; it brings two conflicting parties together to understand, dialogue and reach an amicable solution. It creates a conducive environment in which the parties are able to form a connection and build on it, triggering the stress response to affiliate and connect.[9] The tend-and-befriend approach is built on this response, whereby human beings come together to protect themselves. 
Mediation provides instrumental social support that involves tangible assistance as part of a social network of mutual assistance and obligations.[10] Although collaborative processes were ingrained in our traditional social system, 300 years of the formal court system have greatly impacted the collaborative response to conflict. The wise elder in the village, to whom the community turned and who evoked the tend-and-befriend response, was replaced by the powerful village head who incited the response to fight. Formal systems built on the Anglo-Saxon model completely replaced traditional systems that promoted dialogue, preserved relationships and focussed on win-win outcomes. To nurture and reacquaint ourselves with the tend-and-befriend response, strong action is needed. The Mediation Bill, 2021, currently under consideration in Parliament, proposes mandatory mediation for civil and commercial disputes. We welcome this step and believe that, if well implemented, it could provide the impetus to develop a whole new way to resolve disputes, a way that is not only inherently natural but also badly needed in our country today.

Mandatory Mediation for India

“Constitutional morality is not a natural sentiment. It has to be cultivated." - B.R. Ambedkar, Annihilation of Caste

India has consistently used strong laws to drive social change, whether the Indian Slavery Act, 1843, which abolished slavery and helped change minds about this abominable practice; the Hindu Child Marriage Restraint Act, 1929, replaced by the Prohibition of Child Marriage Act, 2006, which prohibited child marriage and imposed sanctions for it; the Child Labour (Prohibition and Regulation) Act, 1986,[11] which prohibited the employment of children under 14 years; or the Protection of Women from Domestic Violence Act, 2005. India has successfully used strong laws to bring about social transformation. 
To encourage a collaborative mindset, mediation must be strongly encouraged through legislation. A culture of 'mediation first' can be effectively promoted through policy. Countries around the world have successfully experimented with mandatory mediation models, not only to reduce the burden on courts but also to encourage behavioural change in responding to conflict.

Mandatory Mediation Internationally

Italy is one of the leading examples of a successful mandatory mediation law and policy. Voluntary mediation was first introduced as an option for disputants in Italy in 2003 but was hardly used. In 2010, recognising a clear reluctance by parties to engage in mediation and needing to address heavily overburdened courts, Italian lawmakers introduced mandatory mediation legislation. Legislative Decree No. 28/2010 required mandatory mediation for certain kinds of disputes.[12] Before filing in court, parties and lawyers are required to engage in an initial mediation session, with the ability thereafter to easily opt out of mediation. Tax reliefs were offered to parties who engaged in the mediation process, quadrupled if an agreement was reached. This mandatory initial mediation session model has not only drastically increased the number of cases that attempted and settled in mediation but has also recorded a substantial decrease in the number of court filings.[13] In Singapore, mediation is divided into court-annexed and private mediation. In 2010, the State Courts increased the use of mediation in civil disputes by adopting the 'ADR Form at the Summons for Directions' stage. Both attorneys and clients are required to sign a document certifying that they have explored ADR possibilities and indicating their decision regarding the same. In 2012, a "presumption of ADR" was implemented, under which all civil cases are automatically directed to mediation or other types of ADR unless one or more parties opt out.
Refusing to employ ADR for reasons considered unacceptable by the registrar results in financial penalties under the Rules of Court.[14] Mediation in the European Union has also had more success when it involves elements of a mandatory nature.[15] Turkey introduced mandatory mediation for certain categories of disputes and has recorded a drop of up to 70% in court filings in those categories.[16] Greece and the UK are also using strong mandatory mediation policies to build a culture of collaboration and reduce pendency in courts. India is in desperate need of multiple solutions to address the crisis of 4.7 crore cases pending in our courts.[17] An efficient[18] process that promotes a culture of dialogue and respectful understanding must be a choice for every Indian. India attempted mandatory mediation by amending the Commercial Courts Act, 2015, which did not yield the desired results. Unfortunately, the amendment did not include a strong sanction for non-appearance and provided an exception for cases that needed interim relief. This became the Achilles' heel of the law and rendered it practically useless. The draft Mediation Bill, 2021, currently pending before the Indian Parliament, proposes mandatory mediation for all civil and commercial cases before the institution of a suit. If it is drafted in a manner that ensures a strong push towards mediation while allowing disputants to easily access the courts after a meaningful initial attempt, we will have created the possibility of a mediation-first culture that reduces court filings and promotes peace. Needless to say, strong professionals who understand the process and are skilled at facilitating dialogue and resolution are a non-negotiable element in making this policy a success.

Conclusion

As a human species, we now know that the human brain is capable of evoking a response of tend and befriend under stress.
This response stimulates the evolved neocortex of our brain, where rational decisions and creative problem solving are possible.[19] As a legal system, we are in desperate need of options and alternatives – our courts, the only option for dispute resolution in India, are facing an impossible caseload that is only increasing. As a society, our ability to dialogue, understand each other and collaborate is essential if we are to solve the most urgent problems on which our survival depends. Laws play a significant role in influencing behavioural change. A law that encourages dialogue and collaboration between disputants and promotes an efficient process that finds quick, sustainable resolution seems like a win/win option for India. We welcome India's move to introduce mandatory mediation. All eyes are now on a well-drafted law that will get disputants to the mediation table while preserving every Indian's fundamental right of access to justice.

References

[1] Sriram Panchu, Mediation: Practice and Law (The Path to Successful Dispute Resolution), 3rd edition.
[2] Beer and Packard, The Mediator's Handbook, 4th edition.
[3] Christopher Moore, The Mediation Process: Practical Strategies for Resolving Conflict, 3rd edition (San Francisco: Jossey-Bass Publishers, 2004).
[4] Cannon, 1932.
[5] Shelley E. Taylor, Laura Cousino Klein, Brian P. Lewis, Tara L. Gruenewald, Regan A. R. Gurung and John A. Updegraff, Biobehavioral Responses to Stress in Females: Tend-and-Befriend, Not Fight-or-Flight, Psychological Review 2000, Vol. 107, No. 3, 411-429 (https://scholar.harvard.edu/marianabockarova/files/tend-and-befriend.pdf)
[6] ibid.
[7] Debra Gerardi, Perspectives on Leadership, The American Journal of Nursing (September 2015), Vol. 115, No. 9, 61.
[8] Shelley E. Taylor, Tend and Befriend Theory, Handbook of Theories of Social Psychology, Sage Publications, 2011.
[9] Shelley E. Taylor, Tend and Befriend: Biobehavioural Bases of Affiliation under Stress, Current Directions in Psychological Science (December 2006), Vol. 15, No. 6, 273.
[10] Shelley E. Taylor, Tend and Befriend Theory, Handbook of Theories of Social Psychology, Sage Publications, 2011.
[12] Disputes related to condominiums, property, division of goods (or partition), family-business covenants and agreements, wills and inheritance, leases, loans, business rents, medical and paramedical malpractice, libel, insurance, and banking and financial contracts. Legislative Decree No. 28 of 4 March 2010, Italy.
[13] Leonardo D'Urso, Italy's 'Required Initial Mediation Session': Bridging the Gap between Mandatory and Voluntary Mediation, https://www.adrcenterfordevelopment.com/wp-content/uploads/2020/04/Italys-Required-Initial-Mediation-Session-by-Leonardo-DUrso-5.pdf
[14] Code of Ethics and Basic Principles of Court Mediation, available at http://www.subccourts.gov.sg, under "Civil Justice Division, Court Dispute Resolution/Mediation".
[15] Giuseppe De Palo and Romina Canessa, Sleeping - Comatose: Only Mandatory Consideration of Mediation Can Awake Sleeping Beauty in the European Union, 16 Cardozo J. Conflict Resol. 713 (2014).
[16] Tuba Bilecik, Turkish Mandatory Mediation Expands into Commercial Disputes, http://mediationblog.kluwerarbitration.com/2019/01/30/turkish-mandatory-mediation-expands-into-commercial-disputes/
[17] Over 4.70 crore cases pending in various courts: Govt, https://economictimes.indiatimes.com/news/india/over-4-70-crore-cases-pending-in-various-courts-govt/articleshow/90447554.cms?from=mdr
[18] In private mediation, nearly 70% of cases settle within 3 months. In court mediation programmes, specifically at the Bangalore Mediation Centre of the Karnataka High Court, the settlement rate is 66% in 90 days (Strengthening Mediation in India: A Report on Court-Connected Mediations, Vidhi Centre for Legal Policy, Table 8).
[19] Cloke, K., 2013. Bringing Oxytocin into the Room: Notes on the Neurophysiology of Conflict.

About the Authors

Mohit Mokal is a Senior Associate, Mediation at CAMP Arbitration & Mediation Practice and Tara Ollapally is the Co-Founder & Mediator at CAMP Arbitration & Mediation Practice. The opinions expressed in this article are those of the authors. They do not purport to reflect the opinions or views of Indic Pacific Legal Research LLP or its members.

  • Algorithmic Pricing & International Trade Law

Coca-Cola's utility increases on a hot day compared to other days; the company noticed this as a profit-maximisation opportunity and, in 1999, considered implementing temperature-sensitive vending machines that would alter the price of the product according to the intensity of the surrounding heat.[1] This sums up the idea of dynamic pricing, which various corporations have been employing while keeping the demand-supply equilibrium at the core. Algorithms, in turn, are one of the most effective means of harnessing dynamic pricing, which translates into algorithmic pricing: constantly altering the offer price depending upon consumers' personalised data sets, derived mostly from past activities and recent trends.[2] In the most general sense, algorithms can be traced back to a time before the invention of computers. Simply put, an algorithm denotes a set of rules that ought to be repeated, in a specified sequence, to accomplish a given task. With the overall technological revolution, algorithms have become more and more sophisticated through the application and evolution of Big Data, Machine Learning and Artificial Intelligence (AI), which have made them particularly efficient at creating useful data repositories, making predictions, taking decisions and achieving goals on time. E-commerce websites like Amazon and Flipkart, in particular, have been able to achieve exponential growth due to this inexpensive piece of technological innovation. Present-day algorithms are capable of performing complex tasks like predictive analysis, which helps a corporation appraise consumer behaviour, forecast risks, spot new competition and estimate demand, all at once and in sync, to capitalise on profits. Governments have also been using algorithms to gauge criminal behavioural patterns and the intensity and likely occurrence of crime, applying them to variables like time period and location.
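The temperature-sensitive vending machine described above can be sketched as a minimal dynamic-pricing rule. This is a hypothetical illustration only – the function name, the 25 °C reference point and the 2%-per-degree surcharge are invented for the example, not Coca-Cola's actual scheme:

```python
def dynamic_price(base_price: float, temperature_c: float,
                  floor: float = None, cap: float = None) -> float:
    """Illustrative dynamic-pricing rule: raise the offer price as the
    ambient temperature (a proxy for demand) rises above a reference."""
    # Hypothetical rule: +2% of base price per degree Celsius above 25 °C.
    surcharge = max(0.0, temperature_c - 25.0) * 0.02 * base_price
    price = base_price + surcharge
    if cap is not None:
        price = min(price, cap)    # a regulator-style ceiling on volatility
    if floor is not None:
        price = max(price, floor)
    return round(price, 2)

print(dynamic_price(40.0, 35.0))            # hot day: 48.0
print(dynamic_price(40.0, 20.0))            # mild day: 40.0
print(dynamic_price(40.0, 50.0, cap=50.0))  # heatwave, but capped: 50.0
```

The `cap` parameter anticipates one of the regulatory options discussed later in this piece: limiting the volatility of price fluctuations through cap limits.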
Due to its expansive nature, algorithmic pricing has come under scrutiny in recent years. Domestic regulators as well as international organisations have been researching the possible intrusions into privacy, business ethics and competition regulation caused by such algorithms. This analysis will critically examine the regulatory concerns associated with algorithmic pricing in isolation, while acknowledging that the growth of the e-commerce industry as a whole has indeed led to innovation, connectivity and overall progress.

It is a policy issue: International Trade Law

The body of international trade law consists of treaties, principles and customs which regulate transactions between two or more private sector parties belonging to different States. The World Trade Organization (WTO) and the Organisation for Economic Co-operation and Development (OECD) have been expressing concerns about violations of privacy rights and competition policies.[3] How the WTO and OECD give teeth to these concerns is yet to be seen, since at the international level and in private contracts these organisations can only press States to take action depending on their accessions. For instance, the prices of essential goods are regulated under the General Agreement on Tariffs and Trade (GATT), which is consensual.[4] The silver lining is that these organisations play a vital role in establishing standards for the conduct of business, which is dealt with in detail in the paragraphs that follow. International organisations and technological innovations have always gone hand in hand: for instance, the United Nations Commission on International Trade Law (UNCITRAL) played an important role during the internet regulation phase, when its efforts helped individual States recognise electronic contracts and records in their policies.
In 1998, the WTO, with the support of its member States, granted a moratorium on customs duties for products of electronic transmission, thereby facilitating easy trade in digital products.[5] The algorithmic pricing mechanisms adopted by e-commerce companies gather huge amounts of consumer and competitor data, which is then sorted to understand seasonal preferences and to flash appropriate offer prices. The international community is specifically concerned because a major chunk of the population is unaware that such processes are being employed to gather their data and use it contrary to their interests. The European Union (EU) courts' approach to technology gives the highest importance to interpreting provisions in a way that furthers the goal of consumer welfare: if such processes cannot be accommodated in a specialised law, they can be accommodated in general welfare provisions through the channel of interpretation. The functioning of algorithmic pricing goes against consumer welfare because the consumer has imperfect knowledge of the prices being charged. A strong welfare setting would require that consumers know not only what they are being charged but also what other consumers of identical or similar status are being charged. One might argue that similar friction prevails in an offline setting as well, but the underlying point here is that consumers lack preliminary knowledge of dynamic pricing on e-commerce platforms. Consumers must be expressly made aware of the variables which determine a personalised price for different individuals, or at least of the fact that dynamic pricing exists.[6] It is difficult to prove the illegality of the use of the algorithms themselves, or of the manner in which they are processed, but it is necessary to investigate the nature of actions like targeted advertisements, collection of consumer data and preferences, disclosures made, responsiveness to price offers, and past purchase trajectories.
Ultimately, it is for the respective States to decide, as a matter of policy, the extent to which these intrusions should be allowed. This becomes an increasingly important issue since foreign companies' data mining activities in the domestic sphere can lead to the remittance of vital information out of the country which, in exceptional circumstances, may raise strategic concerns for a State.

Competition Concerns

Selling the same products at different prices to different people is not, by itself, illegal. Issues arise only when two or more market players expressly agree to raise, reduce or stabilise their prices in order to defeat natural competition.[7] One might say that, in the absence of an express agreement, algorithmic pricing could at most lead to tacit collusion with a competitor to alter prices, which is outside the purview of competition law. Regulators, however, have raised the concern that the sheer vastness of the capabilities of algorithmic pricing can lead to an understanding whereby an algorithm teaches itself to manipulate prices anti-competitively. Hence, even without an agreement or human intervention, Big Data analytics might supplement reactive pricing in a way that eventually teaches itself to arrive at a cooperative equilibrium with competitors' algorithms.[8] In effect, this cooperative behaviour leads to coordination which at all times prevents unsatisfied demand and excess supply. Such processes then deprive the consumer of the lower price that would have resulted from competition, even though the e-commerce platform could afford a reduction in price below the threshold. The OECD has recognised these activities as potentially illegal and has asked that developments in AI be monitored to determine whether AI can take business decisions, which would finally bring algorithmic pricing under regulatory scrutiny.[9]
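The tacit-collusion concern above can be illustrated with a toy simulation. Everything here is hypothetical – the rule, the numbers and the "target" level are invented for the sketch – but it shows how two sellers each running a simple reactive rule (match any undercut immediately, otherwise creep upward) can settle at the same supra-competitive price without any agreement or communication:

```python
def reactive_price(own: float, rival: float,
                   target: float, step: float) -> float:
    """Toy reactive rule: immediately match an undercutting rival,
    otherwise nudge the price upward toward a 'target' level."""
    if rival < own:
        return rival                   # undercutting is punished at once
    return min(own + step, target)     # no undercut: creep up together

# Two sellers start at different prices and respond to each other in turn.
a, b = 8.0, 9.0
for _ in range(20):                    # repeated interaction, period by period
    a = reactive_price(a, b, target=12.0, step=0.5)
    b = reactive_price(b, a, target=12.0, step=0.5)

print(a, b)  # both settle at 12.0 – a cooperative level, with no agreement
```

Because every undercut is matched instantly, cutting prices gains neither seller anything, so both drift to the highest level either algorithm tolerates – the "cooperative equilibrium" that regulators worry real learning systems might discover on their own.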
Another concern is that, because these algorithms process data every second, they act as a watchdog that quickly identifies a new entrant or competitor in the market. That identification is then used to offer competitive prices which the entrant can never afford to match. There is no denying that consumers derive huge benefits from competitive pricing, but the big picture shows that those benefits are not derived equally by all consumers, which goes to the root of collective justice.[10]

Establishment of standards

International organisations are in a better position to establish standards to deal with the exploitative aspects of algorithmic pricing. The following reasons suggest why such algorithms are unfair – conduct that may not necessarily be illegal, but that is surely essential to address in standard-setting. The rationale for establishing standards lies in the fact that the algorithms being used for pricing are neither transparent nor ubiquitous, which makes them unfair and highly violative of the well-established social conventions one would follow in an offline retail setting. It is a matter of moral legitimacy that a transaction must involve not only consensus but also informed consent, which is absent in an online transaction where the consumer would not have dealt with the seller had they known that the e-commerce platform had waived its willingness to lower prices by colluding. Data-driven algorithms assess consumer information at a granular level, while the same information is not available to the consumer, leading to plausible exploitation. When one consumer is offered a price for a product, it should act as a threshold for other consumers to determine the value of the product; consumers could have arrived at another, lower valuation for the same product, which was denied to them without their knowledge. Given the opaqueness of algorithmic pricing strategies, behavioural economics suggests that consumers are being treated unfairly.
Privacy concerns related to these activities are huge, as everything from past behavioural activity to location tracking to capacity and willingness to pay is tapped, stored and utilised at the expense of the consumer, making algorithmic pricing a fit case for regulation by well-defined standards.[11]

Transparency

Transparency is a separate ground for regulating algorithmic pricing, as it is the aspect from which all other issues relating to algorithmic pricing arise. Generally, transparency for such a technological process can be achieved through adequate evaluation of the software system, monitoring of its activity and subjecting it to self-regulation as a reporting entity. The ground reality, however, is not so simple: were transparency to be imposed, even the submission of complex and lengthy algorithmic program code would be extremely challenging to decipher in order to unwrap any real substance of unfairness.[12] Proving illegality will be more difficult still, as reproducing the alleged price in a sandbox environment to establish guilt depends on all the requisite variables falling perfectly into place, at the right time, in a sequential and synchronised manner. There is little realisation of the fact that providing source code will not lead to transparency unless the rationale behind particular code is made known to the authorities – an almost impossible task, as companies have a strong right to protect their trade secrets.[13] Regulators also need to be mindful that any policy they introduce does not become unconducive to the existence of the industry itself.

Conclusions

Some of the possible regulations which could be adopted are restricting the volatility of price fluctuations by introducing cap limits, declaring the tapping of competitors' data illegitimate, and regulating algorithm designs from the inception.
While these restrictive policies can be adopted, regulators, domestic and international, will have to take a cautious approach to avoid excessive regulation and an undue supervisory burden on an industry which has generated great wealth for society. An appropriate regulatory structure should rest on definite laws, whether information technology law, competition law or intellectual property law. If several agencies need to be involved, a structure of coordination to avoid overlap needs to be in place. International trade law should fortify its substantive rules to cover situations involving foreign companies in other jurisdictions. Ultimately, regulators do not have concrete proof of privacy or competition law breaches, but they have serious concerns about the possibility of illegalities being committed through algorithmic pricing. Regulators stand ready with the cavalry to engage in meaningful regulation of algorithmic pricing; which innovation will be the last straw that breaks the camel's back remains to be seen.
References

[1] Seele P and Dierksmeier C, Mapping the Ethicality of Algorithmic Pricing: A Review of Dynamic and Personalized Pricing, J Bus Ethics (2019), https://doi.org/10.1007/s10551-019-04371-w
[2] Thomas Gehrig, Oz Shy and Rune Stenbacka, 'A Welfare Evaluation of History-Based Price Discrimination' (2012) 12 Journal of Industry, Competition & Trade
[3] CPI Talks, Interview with Antonio Gomes of the OECD, May 2017, https://www.competitionpolicyinternational.com/cpi-talks-interview-with-antonio-gomes-of-the-oecd/
[4] WTO, Substance of Accession Negotiations, Handbook on Accession to the WTO, https://www.wto.org/english/thewto_e/acc_e/cbt_course_e/c5s2p3_e.htm
[5] World Trade Organization, How Do We Prepare for Technology-Induced Reshaping of Trade, World Trade Report 2018, https://www.wto.org/english/res_e/publications_e/wtr18_4_e.pdf
[6] Christopher Townley, Eric Morrison and Karen Yeung, Big Data and Personalised Price Discrimination in EU Competition Law, King's College London, Legal Studies Research Paper Series, Paper No. 2017-38, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3048688
[7] Algorithm Pricing and its Effect on Competition Law, Society of International Trade and Competition Law, https://nujssitc.wordpress.com/2018/02/23/algorithm-pricing-and-its-effect-on-competition-law/
[8] See supra note 5
[9] OECD, Algorithms and Collusion – Background Note by the Secretariat, Directorate for Financial and Enterprise Affairs, DAF/COMP(2017)4, https://one.oecd.org/document/DAF/COMP(2017)4/en/pdf
[10] See supra note 6
[11] See supra note 5
[12] Deven R. Desai and Joshua A. Kroll, Trust but Verify: A Guide to Algorithms and the Law, Harvard Journal of Law and Technology, Vol. 31, https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech1.pdf
[13] See supra note 9
