- Navigating Generative AI Investments: Unleashing Potential and Tackling Challenges
The author is a Research Intern at the Indian Society of Artificial Intelligence and Law, as of September 2023.

Introduction

In recent years, the landscape of artificial intelligence (AI) has been significantly reshaped by the emergence of generative AI start-ups. These ventures, driven by innovative algorithms and cutting-edge technologies, have unlocked the potential for machines to autonomously produce content, thereby revolutionising various industries. However, the intersection of generative AI and investments raises multifaceted issues that warrant close examination.

Generative AI is a subset of artificial intelligence that enables machines to produce content resembling human creations, encompassing text, images, music, and more. Its applications span creative content generation, design enhancement, data synthesis, and problem-solving across various sectors. It significantly aids artists, writers, and designers in content creation while also driving breakthroughs in healthcare, scientific research, and data analysis that were previously deemed unattainable. Research by McKinsey & Company estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually and could substantially increase labour productivity across the economy.

This article examines the risks and rewards of investing in generative AI start-ups. It evaluates the feasibility of diverse applications and the factors shaping the contrast between free and premium usage models. By examining the underlying economics and the investment landscape, the article aims to provide a holistic view of the challenges and prospects raised by generative AI start-ups, and of their potential ramifications for the broader AI landscape. The sustainability of generative AI projects is examined, considering the shift from traditional funding models to innovative strategies that resonate with user preferences. The article concludes by underscoring the interplay between funding, innovation, and societal impact that shapes generative AI's trajectory in a dynamic landscape.

The Role of Consistent Funding in Developing Generative AI

Developing generative AI faces substantial challenges, primarily stemming from demanding computational needs and intricate data privacy concerns. The intensive computing requirements, driven by complex algorithms and neural networks, create cost barriers and accessibility issues for start-ups and researchers with limited resources. This hampers meaningful generative AI development, especially in regions lacking adequate infrastructure. Additionally, data privacy intricacies arise from the creation and manipulation of training data, presenting ethical dilemmas and regulatory compliance hurdles. The scarcity of suitable training datasets further compounds the challenge, limiting the effectiveness of generative AI models. Amidst these challenges, consistent funding emerges as a vital catalyst. Overcoming the computational intensity obstacle requires substantial financial investment in advanced hardware. Similarly, acquiring and managing diverse and relevant data comes with costs related to data compliance and privacy regulations.
Additionally, the scarcity of skilled professionals in this complex field necessitates competitive salaries to attract and retain talent. Steady funding not only addresses these challenges but also maintains a continuous trajectory of research and development, fostering the creation of effective and ethical generative AI systems.

Investment Challenges and Concerns

The rapid expansion of generative AI presents a complex landscape fraught with significant apprehensions. Amid the potential societal advantages, a surge of start-up failures looms, propelled by a proliferation of generic AI start-ups spurred by the interests of venture capitalists. A central concern is the conspicuous absence of distinctive product offerings, as a multitude of enterprises enter the field without groundbreaking solutions. This lack of differentiation and value proposition is especially pronounced in text generation, where renowned tools such as OpenAI's offering reign supreme. It poses a formidable challenge to start-ups operating in the Business-to-Consumer (B2C) domain, which grapple with weak customer-solution alignment, and it underscores the ascendancy of Business-to-Business (B2B) applications that are intricately interwoven with enterprise operations. The efficacy of generative AI in content creation is indisputable, yet it cedes ground to classification algorithms in pattern detection and anomaly identification, casting a shadow of mistrust in production environments. Prioritising start-ups that focus on addressing specific challenges rather than showcasing their technological prowess emerges as a clarion call for pragmatic value. Moreover, the establishment and vigilant stewardship of corporate protocols attain paramount significance, safeguarding data privacy and the sanctity of sensitive corporate information. This assumes heightened importance considering the inherent risk of inadvertently exposing intellectual property while training publicly accessible Large Language Models on proprietary data. Striking an optimal equilibrium between embracing technology's potential and discerning tangible returns warrants meticulous resource allocation, as undue hesitancy risks relegation to the fringes of competitiveness vis-à-vis counterparts leveraging the evolving capacities of AI to reshape industries.

The Freemium Model and Financial Viability

Amidst the surge of investments into generative AI start-ups, a critical concern materialises: the sustainable viability of projects within this burgeoning sphere. A subset of these ventures, though adorned with technological marvels, grapples with challenges of practical applicability and enduring value. This critical juncture necessitates astute project selection, ensuring simultaneous strides in technological advancement and quantifiable gains for investors and society at large. Moreover, the financial robustness of key stakeholders in the generative AI sector remains a cause for vigilance. Despite substantial infusions of capital, these start-ups remain susceptible to financial instability, possibly compromising the security of investors. This concern gains significance considering the broad accessibility of generative AI to the public, which results in widespread use of AI-generated content without the need for premium services.
As a result, questions arise regarding the long-term financial sustainability of start-ups operating within this domain. Moreover, the dynamic evolution of the AI landscape necessitates a deeper examination of the trajectory of AI-powered models such as ChatGPT, particularly from an economic standpoint. As an increasing number of users opt for free usage over premium subscriptions, the question of how to sustain funding streams for advancing AI technologies acquires renewed prominence. This duality accentuates the underlying challenges tied to investment dynamics, resonating not only with pioneering start-ups but across the broader AI ecosystem.

Future of AI-Powered Models and Funding

The future trajectory of AI-powered models is poised for a paradigm shift, presenting a landscape rich with possibilities and intricate challenges. The innovation encapsulated by AI-powered models, exemplified by the likes of ChatGPT, holds the potential to reshape industries and human interaction with technology on an unprecedented scale. However, this potential future is not devoid of uncertainties, particularly in the context of the funding models that sustain these groundbreaking advancements. One pivotal concern shaping the future of AI-powered models revolves around the sustenance of funding streams. As these models progress in sophistication and utility, the question of how to secure adequate funding becomes increasingly critical. The traditional funding model that relies on premium subscriptions or paid services encounters hurdles in a landscape where users prefer free access. The challenge is underscored by the delicate balance between democratizing access to AI-powered capabilities and ensuring the financial viability of the platforms offering them. In light of concerns about a lack of subscribers for premium models of generative AIs, the future lies in innovative monetization strategies that resonate with users while maintaining the financial health of the AI-powered model ecosystem. The evolving funding landscape for AI-powered models reflects the imperative to adapt to changing user preferences while driving innovation. Beyond traditional venture capital channels, novel funding avenues such as corporate partnerships, government grants, and community-driven initiatives are gaining prominence. These diversified funding mechanisms not only reflect the increasing recognition of AI's transformative potential but also signal a more democratized approach to funding, aiming to align the interests of developers, investors, and users. The future of AI-powered models and their funding hinges on the ability to navigate this intricate landscape, where sustainability and innovation are delicately balanced to foster progress in AI while addressing the challenges posed by shifting user dynamics.

Sustainable Funding Solutions for Generative AI

As the demand for generative AI solutions surges, the quest for sustainable funding avenues gains paramount importance. Exploring innovative strategies can not only mitigate financial challenges but also align with broader environmental and societal goals. Leveraging existing large generative models emerges as a pragmatic way to streamline resources. Rather than embarking on costly and time-consuming model creation from scratch, companies can capitalize on pre-existing models and fine-tune them to meet specific needs. This approach not only saves time but also taps into the high-quality outputs of models that have been trained on expansive datasets. By building upon foundations already established, start-ups can channel their resources more efficiently, enabling significant cost reductions while maintaining the quality and efficacy of their generative AI solutions.
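To make the fine-tuning route concrete, here is a minimal sketch assuming the open-source Hugging Face transformers and datasets libraries; the base model (distilgpt2) and the training file (domain_corpus.txt) are illustrative placeholders rather than anything referenced in this article.

```python
# Minimal sketch: adapting an existing pre-trained generative model instead of
# training one from scratch. Model name and data file are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "distilgpt2"                      # small pre-trained model reused as a foundation
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 style models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# A small domain-specific corpus is enough; no full training run is repeated.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()                                # updates only this copy of the weights
trainer.save_model("finetuned-model")
```

Because the pre-trained weights are reused, the compute bill scales with the size of the small domain corpus rather than with the original training run.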
Efficiency extends beyond output quality to energy conservation. The resource-intensive nature of generative AI can translate into substantial energy consumption, triggering both economic and environmental concerns. Employing energy-conserving computational methods is a pivotal solution in this realm. Techniques like pruning, quantization, distillation, and sparsification allow companies to optimise their models, reducing energy consumption and the associated carbon footprint, as sketched below. Such initiatives align with sustainable practices, minimising operational costs while positioning generative AI ventures as environmentally responsible actors. This dual benefit extends beyond immediate funding considerations, resonating with stakeholders who prioritise environmentally conscious practices and thereby potentially attracting more support and investment.
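As one illustration of those techniques, the sketch below applies magnitude pruning and post-training dynamic quantization with PyTorch; the toy model and the 30% pruning ratio are invented for illustration and are not drawn from any system discussed in this article.

```python
# Toy illustration of two of the techniques named above: magnitude pruning and
# post-training dynamic quantization. The model here is a placeholder, not a
# real generative AI system.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))

# Pruning: zero out the 30% smallest-magnitude weights in each Linear layer,
# producing a sparser model that is cheaper to store and, with sparse kernels, to run.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the pruning permanent

# Quantization: convert Linear weights from 32-bit floats to 8-bit integers at
# inference time, cutting memory use and energy per inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)    # the smaller model still produces the same-shaped output
```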
Additionally, the principle of resource optimisation extends to model and resource reuse. Generative AI inherently possesses the capacity to generate diverse and novel outputs using the same model, reducing the need for constant model creation or data acquisition. By repurposing existing models and resources for various applications, companies can drastically cut costs and expedite development timelines. This reuse-driven approach not only yields financial efficiencies but also supports sustainable development, as it minimises unnecessary duplication of effort and resources. Furthermore, aligning generative AI initiatives with Environmental, Social, and Governance (ESG) objectives can act as a strategic approach to sustainable funding. By demonstrating how generative AI contributes to addressing societal challenges, such as waste reduction, healthcare enhancement, or educational empowerment, companies can attract investors and customers who are increasingly attuned to ESG concerns. This alignment not only reflects a commitment to responsible innovation but also widens the pool of potential supporters, fostering financial sustainability that echoes societal impact.

Conclusion

In the realm of generative AI, the fusion of innovation and investment unfolds a landscape rich with potential and complexity. The surge of generative AI ventures underscores the significance of funding models that sustain their growth and development. The future of AI-powered models holds transformative promise, underscored by their ability to reshape industries and human interactions. However, the path to this future is marked by the need for adaptive funding mechanisms. The delicate equilibrium between democratising access to AI capabilities and ensuring financial sustainability necessitates innovative monetisation strategies. The shift towards diversified funding avenues, encompassing corporate partnerships, government grants, and community-driven initiatives, not only acknowledges the transformative potential of AI but also aligns with a more inclusive funding approach. As generative AI journeys towards sustainability and innovation, its challenges and solutions reflect the broader evolution of the AI landscape. By capitalising on existing models, optimising energy consumption, reusing resources, and aligning with ESG goals, sustainable funding avenues can be cultivated. These strategies not only address immediate financial considerations but also echo the commitment to responsible innovation. As generative AI ventures unfold, it is this delicate interplay between funding, innovation, and societal impact that will shape their trajectory, weaving a future where technology thrives in harmony with the needs of society. In the dynamic and ever-evolving landscape of generative AI investments, adept navigation of challenges and harnessing of opportunities hold the key to unleashing the full potential of this transformative technology.
- New Report: Reinventing & Regulating Policy Use Cases of Web3 for India, VLiGTA-TR-004
This is the first report on legal & policy aspects related to Web3 technologies developed by VLiGTA, the research & innovation division of Indic Pacific Legal Research. In this report, we have offered a comprehensive overview of the state of Web3 policy & governance outlooks in India. The report addresses the state of India's successful Digital Public Infrastructure and examines the state of technology governance as well. Further, the report focuses on several kinds of blockchain consensus algorithms, and the issues involved in transitioning from Web2 system infrastructure to Web3 system infrastructure. Sanad Arora's contributions in Chapter 2 are remarkable. Akash Manwani's contributions in Chapters 5 and 6 are unique, specific and relevant to the current discourse. It has been my honour to contribute to Chapters 3, 4 and 6, offering informed perspectives and analyses. We have offered thought models and suggestions in the form of use cases of Web3 in areas such as data portability, voting, supply management, decentralised exchanges and zero-knowledge taxes. With this general technical report, we hope to offer more contributions to India's Web3 policy space in future. You can find a glimpse of the report here. You can access the complete report on the VLiGTA App. Price: 400 INR

The conclusions and recommendations provided in the report are described here as well.

Conclusion

The choice between centralized and decentralized technology infrastructures should be made thoughtfully, considering the specific needs and objectives of each application. Decentralized approaches offer greater transparency and data integrity but may require careful scalability planning. On the other hand, centralized models can provide efficiency and centralized control but may face challenges related to transparency and accountability. We already see that, at Union and State levels, India is trying to develop and provide scalable and sustainable Web3 solutions. The CDAC proposals for a National Blockchain Service & a Unified Blockchain Network, termed Blockchain-as-a-Service (BaaS), are certainly ambitious and clear about their objectives. This report concludes with two kinds of recommendations – general and specific. We have offered tailor-made and practical recommendations, which may be workable and can be adapted.

Recommendations from VLiGTA-TR-004

Mitigating Structural Limitations
- Continuously assess the Digital Public Infrastructure (DPI) to identify and address structural limitations as Web3 technologies are integrated.
- Adopt a Web 2.5 approach that combines the strengths of both Web2 and Web3 ecosystems to mitigate potential limitations.
- Collaborate with stakeholders to develop strategies for a seamless transition to Web3 technologies while ensuring the DPI's robustness.
- Using certain blockchain consensus algorithms could help the Government invent taxonomies of governance, compliance and transparency when DPI components are built on chains.
- Utilizing open-source Web3 and Web2 technologies in conjunction would change the economics of infrastructure-related solutions offered by the Government of India under India Stack, and even the proposed National Blockchain Service & Unified Blockchain Network.
- The National Blockchain Service (NBS) and Unified Blockchain Network (UBN) proposals present significant opportunities for enhancing data management, tax filing, voting systems, and global supply chain tracking within India. Leveraging blockchain technology can contribute to increased transparency, security, and efficiency across these domains.
Governance Clarity and Risk Mitigation
- Establish clear policies and governance frameworks for the adoption of Web3 infrastructure, encompassing both political and technical aspects.
- Ensure that decision-making processes are agile and effective, even in the face of complex challenges (polycrisis).
- Safeguard against policy paralysis and potential disruptions to administrative and regulatory systems through proactive risk mitigation measures.

Decentralised Exchanges (DEXs) for Government-to-Government Transactions
- The prospective advantages arising from DEX implementation in the sphere of inter-ministerial and governmental financial operations are substantial.
- DEXs may be endowed with interoperability capabilities, enabling seamless fund transference amongst diverse governmental departments. This enhancement would serve to elevate the efficiency of financial transactions, concurrently mitigating the spectre of fraudulent activities.
- DEXs could be harnessed as a facilitative medium for the exchange of Indian Central Bank Digital Currencies (CBDCs) among various government entities. This envisioned application stands to foster the adoption of CBDCs and streamline governmental financial management.
- DEXs hold the potential to abate the inherent risks associated with bureaucratic participation in financial transactions by endowing them with a secure and transparent conduit for fund exchanges. Consequently, governmental personnel could redirect their focus towards policy formulation and execution.
- DEXs can assume either an open-ended or close-ended configuration, affording governmental authorities the prerogative to select the model most pertinent to their specific requisites. An open-ended DEX would grant unrestricted participation, while a close-ended variant would restrict access to authorized users.
- The degree of centralization, decentralization, and federalization of DEXs may fluctuate contingent upon the system's unique architectural design. Centralized DEXs would be subject to sole-entity control, whereas decentralized counterparts would operate within a network of nodes. Federalized DEXs would emerge as an amalgamation of these paradigms, featuring government-operated nodes and privately controlled nodes.
- DEXs are adaptable for deployment either in retail contexts, catering to individuals and commercial entities in fund exchange, or in government-to-government scenarios, serving as the conduit for intergovernmental fund transfers.
- However, judicious scrutiny and meticulous tailoring of the DEX system's specifications are imperative to ensure seamless alignment with the government's distinct exigencies. Consequently, the government should engage in a comprehensive feasibility assessment concerning the prospective integration of such a system in the immediate future.

Purposeful Choices for Web3 Adoption
- Embrace a technology-neutral approach to accommodate various Web3 use cases while aligning with India's policy vision.
- Focus on development-oriented strategies that leverage Web3 technologies to address societal and economic challenges.
- Encourage socio-technical mobility by fostering an environment where both public and private sectors can adapt and innovate with Web3 tools.
- Leverage leapfrogged access points to ensure that Web3 technologies are accessible and beneficial to a broad spectrum of the population.
National Blockchain Service (NBS) Implementation
- Implement the NBS infrastructure as per the proposed four-part regional approach to enhance accessibility and efficiency.
- Ensure seamless integration of the NBS with existing government systems like India Stack and Web2 DPI solutions for improved service delivery.

Data Portability
- Consider leveraging blockchain technology to enable data portability, following the principles of data fluidity.
- Explore the use of Decentralized Applications (DApps) or Decentralized Autonomous Organizations (DAOs) for data portability within a blockchain framework.
- Categorize data based on risk levels to determine the extent of portability.

Zero-Knowledge Taxes (ZKT)
- Develop a secure and tamper-proof blockchain network for ZKT implementation, ensuring data privacy.
- Consider adopting blockchain consensus algorithms for generating and verifying zero-knowledge proofs.
- Assess the cost and security factors in choosing between trusting third-party applications or a decentralized blockchain network for ZKT.

Decentralized Voting
- Evaluate the potential benefits of blockchain-based decentralized voting, including enhanced transparency and security.
- Consider the adoption of cryptographic credentials for voters to ensure anonymity and authentication.
- Weigh the trade-offs between centralized and decentralized voting systems, taking into account specific use cases and the level of trust in centralized entities.

Global Supply Chain Tracking
- Implement blockchain-powered supply chain tracking to improve transparency and traceability.
- Leverage blockchain to verify product authenticity and ethical certifications, providing consumers with trusted information.
- Carefully assess the choice between centralized and decentralized supply chain tracking based on the desired level of control and transparency.

Get access to the complete report at 400 INR: https://vligta.app/product/reinventing-regulating-policy-use-cases-of-web3-for-india-vligta-tr-004/
- Zero Knowledge Systems in Law & Policy
Despite the market volatility attributable to cryptocurrencies, the scope of Web3 technologies and their business models remains largely unexplored, especially in the Indian context. A few companies, such as Polygon, Coinbase India and Binance, are addressing that. In this article, the purpose of a Zero Knowledge System as a method to conduct cryptographic proofs is explored, and some policy questions on whether the ideas and assertions of ZKS can be integrated into the domains of law & policy are addressed, considering India's role as a leader of the Global South.

The Essence of Zero Knowledge in Web3

To begin in simple terms, a Zero Knowledge System is based on probabilistic models of proof verification, not deterministic models. It is one of the methods used in cryptography for entity authentication. Let us understand it with the help of a diagram. Imagine for a moment that you are required to prove something to somebody. Anyone would say, in obvious terms, that to prove anything, something has to be revealed. Suppose you have to prove to people that "I have something K in possession" without showing K. Taking this directly into the digital context, it means that you have to prove that you have K without showing K to the other person. In that case, you are a prover, and the person asking for the proof is a verifier. Such a system, through which you prove something without revealing the key information, is known as a Zero Knowledge System. Every Zero Knowledge System (ZKS) has three important features. First, the rules of use of the system must be adhered to, and the statement of proof must be true, so that the verifier does not require any third-party means to establish its validity. Second, the idea is not to achieve a 100% convincing and true statement but to convince the verifier that the statement is true with high probability; in many cases of ZKS, it may not be possible to prove a statement to be exactly true in real life. Third, the verifier does not learn the key information behind the proof statement made by the prover. The essence of having such a systemic effort is simple. When public and private blockchains under a distributed ledger system are used, cryptography may help in finding out the relevant details of the people involved in cryptocurrency transactions, in the case of a public blockchain. The effort of ZKS, however, is to remove identifiable information as the means of verification.
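To illustrate the prover–verifier idea in code, here is a toy sketch of Schnorr's identification protocol, a standard textbook zero-knowledge proof of knowledge of a discrete logarithm. It is offered purely as an illustration of the concept described above; the parameters are toy-sized, and the example is not drawn from Polygon's system or any other product mentioned in this article.

```python
import secrets

# Toy Schnorr identification protocol (zero-knowledge proof of knowledge of a
# discrete logarithm). Parameters are illustrative, NOT secure sizes.
p = 23          # prime modulus
q = 11          # prime order of the subgroup (p = 2q + 1)
g = 4           # generator of the order-q subgroup mod p

# Prover's secret x and public key y = g^x mod p.
x = 7                      # the "something K in possession" the prover never reveals
y = pow(g, x, p)

def prover_commit():
    """Prover picks a random nonce r and sends the commitment t = g^r mod p."""
    r = secrets.randbelow(q)
    return r, pow(g, r, p)

def verifier_challenge():
    """Verifier replies with a random challenge c."""
    return secrets.randbelow(q)

def prover_respond(r, c):
    """Prover answers with s = r + c*x mod q, which on its own leaks nothing about x."""
    return (r + c * x) % q

def verifier_check(t, c, s):
    """Verifier accepts if g^s == t * y^c mod p, without ever learning x."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

r, t = prover_commit()
c = verifier_challenge()
s = prover_respond(r, c)
print("proof accepted:", verifier_check(t, c, s))   # True for an honest prover
```

An honest prover always convinces the verifier, while a cheating prover who does not know x can only guess the challenge, so repeated rounds drive the probability of a false "accept" towards zero – exactly the probabilistic flavour of verification described above.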
In fact, in July 2022, Polygon, one of the most ambitious Web3 ecosystem companies, from Bengaluru (and Singapore), declared that it had developed a Zero-Knowledge Scaling Solution which is fully compatible with Ethereum. In this update, it is explained how the solution works: The ZK proof technology works by batching transactions into groups, which are then relayed to the Ethereum Network as a single, bulk transaction. The 'gas fee' for the single transaction is then split between all the participants involved, dramatically lowering fees. For developers of payment and DeFi applications, Polygon zkEVM's high security and censorship resistance makes it a more attractive option than other Layer 2 scaling solutions. Unlike Optimistic roll-ups where users have to wait for as long as seven days for deposits and withdrawals, zk-Rollups offer faster settlement and far better capital efficiency. [...] Polygon zkEVM is a Layer 2 scaling solution that enables developers to execute arbitrary transactions, including smart contracts, off-chain rapidly and inexpensively while keeping all proofs and data provenance on the secure Ethereum blockchain. In addition, Polygon had published a thesis on democratising ZKS. Recently, an infographic was published by Polygon about zkEVM. Now, in terms of probability theory, Zero Knowledge Proofs may be distinguished into three variants, even though their by-products of use can be multiple: Perfect Zero Knowledge (Pzk), Statistical Zero Knowledge (Szk) and Computational Zero Knowledge (Czk). Pzk applies when the proof of knowledge shows exactly the same probability distribution as a simulator does. Szk applies when the simulator's and the proof system's probability distributions are only statistically close, not identical. Czk applies when no efficient algorithm can distinguish between the proof system's and the simulator's distributions. This shows that when proof systems and simulators are tested, identifiable key information stays out of the picture while verification remains possible.
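The three variants can be stated a little more precisely. The formulation below is the standard textbook one, phrased in terms of the verifier's view of the interaction and a simulator S; the notation is supplied here for clarity and is not taken from the article itself.

```latex
% Standard formulation of the three zero-knowledge flavours, for a statement x,
% prover P, verifier V and simulator S.
\begin{itemize}
  \item \textbf{Perfect ZK (Pzk):} \(\mathrm{View}_V\big(P(x),V(x)\big) \equiv S(x)\)
        --- the two probability distributions are identical.
  \item \textbf{Statistical ZK (Szk):}
        \(\Delta\big(\mathrm{View}_V(P(x),V(x)),\, S(x)\big) \le \varepsilon(|x|)\)
        for a negligible function \(\varepsilon\), where \(\Delta\) denotes
        statistical (total variation) distance.
  \item \textbf{Computational ZK (Czk):} no probabilistic polynomial-time
        distinguisher can tell \(\mathrm{View}_V(P(x),V(x))\) and \(S(x)\)
        apart with non-negligible advantage.
\end{itemize}
```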
Recently, Cloudflare has also developed Zero Knowledge Proofs for Private Web Attestation with Cross/Multi-vendor Hardware. The diagram above explains how Cloudflare's WebAuthn feature works within the frame of Zero Knowledge Systems. This is not a public-level use case, because such functionality is best suited to close-ended institutions where trust is high, such as financial institutions; the servers and certificate chains, along with the hardware security key, are close-ended. This at least illustrates another possible use of ZKS. Now, the purpose of this article is to propose and examine whether the mathematical conception of ZKS could be applicable in law and public policy. A design thinking approach has been applied to address this. The next sections address the possibilities of integrating ZKS into the law and public policy domains.

Legal Systems with Zero Knowledge

In law, the basis for integrating ZKS can be divided into two forms – hard law and soft law. Let us address hard law first. Hard law systems are defined by the model of positive law, top-down governance and a regulatory landscape which reflects public interests.

Zero Knowledge and Hard Law

The transformation of modern legal instruments shows that top-down governance, justified by addressing rule of law concerns, matters. It may be assumed, in an ordinary way, that Zero Knowledge Systems are suitable to soft law governance and regulating propositions and may not fit into the realm of hard law. However, we have to remember that the same was said about Web2 technologies. Interestingly, in the technology and IP law domains, that integration began with legal reforms in the realm of telecom law. Accepting a definition and some basic understanding of information and communication technologies (ICT) was important, since that created space for new legal understandings. The concept of cyberspace, an integral aspect of Web1 and Web2, is understood through multiple kinds of legal fiction, which may even be compared with how international space law evolved. In addition, owing to certain unsustainable Web3 business activities (FTX, for example), there are certain ideas in the realm of Web3 which must be harmonised for good. This is why we have to revisit two things before moving on to soft law: (1) making Web3 habituated to hard law instruments and systems (which is Law 2.0); and (2) making an enriched and mature pathway to formalise the transition from Web2 to Web3, as an infrastructure as well as a social ecosystem. Now, we could have opted for analysing the integration of distributed ledgers (or blockchains) into the law and public policy domains. However, limiting Law and Web3 research perspectives to crypto is unnecessary and unjust, since the domain of Web3 offers innumerable possibilities. There are also multiple emerging methods of cryptographic consensus, from Proof-of-Work to Proof-of-Acceptance. Choosing Zero Knowledge Protocols / Proofs / Systems is a deliberate choice, owing to their special features and the logical uniqueness of the concept itself.

Making Web3 Habitable to Law 2.0

A Zero Knowledge Proof signifies that nothing which is identifiable and subject to validation is disclosed to the verifier. In a hard law system, this may appear contrarian to regulatory and judicial bodies which consider that proofs must be backed by things that are tangibly disclosed. Of course, if Zero Knowledge Systems by design are imposed bluntly on Law 2.0 in this way, they will not work, because ZKS and Law 2.0 as it exists are not built for interoperability and compatibility. Now, there is an interesting example from Firozabad, Uttar Pradesh, where the Uttar Pradesh Police has implemented a Public Grievance Management System for the city of Firozabad. Here is a Twitter thread by Sandeep Nailwal, the co-founder of Polygon. The basic premise behind a blockchain-enabled grievance registration system is that FIRs are registered online and police authorities cannot deny that the grievances were registered. No lower-level officer can claim that nothing was registered, and this could be regarded as a reformist move. Let us also understand how the Public Grievances Redressal System works, as described by the Firozabad Police. The diagram clearly explains how the FPGMS implemented by the Firozabad Police works. Such innovations are, no wonder, appreciated. However, these solutions are too generic and address only some basic issues related to our systems. Although some district or state authorities may prefer such solutions, from a policy perspective they are largely symbolic. Yet, if such frugal innovations are preferred, that is appropriate. It should also not be assumed that using blockchains for these solutions is a direct method of resolving many things; it is not. Solutions like the one discussed above may also be implemented through Zero Knowledge Systems. Certain public-to-government systems of engagement could, for instance, be designed so that an individual (not only in grievance redressal but in various other cases) may choose one among a set of defined Zero Knowledge Protocols (ZKP) to engage with the government, while authentication (or entity verification) is done through ZKP: where the government is the verifier, it obtains probabilistic results to check whether the proofs hold. However, such governance solutions may be unnecessary in applications where technical expertise to access data and metadata for evidentiary and internal evaluation becomes necessary.
Moreover, you would also require algorithmic solutions to make this happen, which, thanks to the black box problem, could itself become problematic. This means that Zero Knowledge Protocols cannot be used as outliers in that fashion. When it comes to government identification documents, such as Aadhaar, PAN and others, then at some critical level of urgency or due diligence, Zero Knowledge Systems can be enforced to ensure parity and the privacy of individuals. However, there is another aspect of Zero Knowledge Systems which may be integrated into the legal domain. Let us say that a regulator has to designate levels of engagement with stakeholders, parties to a regulatory dispute or their counterparts, and they wish to develop certain Zero Knowledge Proofs where verification is essential to the level of engagement. In that case, it could be made possible. Let us break this proposition into three forms: (1) stakeholders; (2) parties to a regulatory dispute; and (3) their counterparts.

In case (1), let us say a competition regulator has to designate the level of engagement of the stakeholders. The rationale is clear: they would like to optimise engagement levels to designate necessities and priorities (not to block the stakeholders from engaging at all). If engagement is limited to analysis of comments and suggestions, then ZKP is not required. However, if the engagement is multi-sectoral, where stakeholders are the same or different, or their focus areas converge in ways that complicate matters, then ZKP can be applied to designate certain level-playing criteria for the stakeholders, such that multiple horizontal-level ZKPs (whose purposes of use intersect considerably) can be created. It is often stated that public-level stakeholders, such as members of the media or civil society, deliberately leak critical information about negotiations or consultations. Although some level of transparency is good (and even there ZKPs may be designated), multiple horizontal-level ZKPs can be used to keep the stakeholders informed that their proofs are under consideration and that probabilistic grounds may be internalised accordingly. This might work best for internal and closed engagement; it remains a proposition open to further thinking.

In case (2), it would not be appropriate to use ZKP to hide evidence or necessary information that is subject to disclosure. To make things interesting, we can apply one aspect of ZKP here. Let us say there is critical information which cannot be disclosed by a party. The regulator can then estimate the information which the party concerned has refused to disclose. In such a case, ZKPs may be used to gather certain probabilistic insights indirectly from the party concerned. This may not be useful until the key aspect of validation is clearer, but it could work if thought out well.

In case (3), regulators and their counterparts in other countries may, owing to sovereign interests, national security or secrecy concerns, suffer from a deadlock in engaging and sharing relevant knowledge. In that case, cooperative Zero Knowledge Protocols may be created to generate trust-based engagement and break the deadlock. Here, probability may help regulators make decisions and calibrate their own approaches on whether to take things forward or hold still. Another aspect of using ZKP could be to encapsulate trust as a "channel" of engagement on certain critical issues, such as nuclear deterrence.
It is proposed that in a multi-polar world, where trust, metadata, knowledge and information can easily be weaponised, governments, instead of being utterly protectionist or hawkish, may develop a "language" of zero knowledge-based engagement in certain affairs. ZKP could also be workable in the case of "AI Diplomacy". Interestingly, Corneliu Bjola has written on Diplomacy in the Age of Artificial Intelligence for the UAE's Emirates Diplomatic Academy. The diagram above from Bjola's paper on AI and Diplomacy explains how structured and unstructured decisions may be logically dealt with. Here, ZKP can help a great deal in designating which Zero Knowledge Proofs are designed, and how they are established in a vertical or oblique hierarchy within a government functionary. This can also be understood from Figure 8, referring to the Social Informatics of Knowledge Embodiment. The hierarchy decided to designate an AI Robotic System is interesting: it starts from being a cooperator, until coopetition becomes a reality. Since the knowledge required at multiple levels differs with purpose, and human cognition is extended even at the top level, ZKP may be useful to create indispensable connectivity between kinds of knowledge, their sharing and their evaluation-related viabilities. There is also an interesting diagram from Zoravar Daulet Singh's Power and Diplomacy: India's Foreign Policies during the Cold War (2018), which can be taken as a reference to see where ZKP could be pushed through. Comparing it with the diagrams in Bjola's paper, ZKP could be applied to mitigate the lack of coherence among behaviour patterns – congruent, consistent yet unlikely, and incongruent – that affect decision-making. Validation matters, so building alternative correlations among the kinds of behaviours could be possible.

The Web2-to-Web3 (2to3) Transition through Law 2.0

Achieving and contributing to the Web2-to-Web3 transition in systems and ethics, within the framework of Law 2.0, could be an interesting and pertinent proposition if Zero Knowledge Systems are used for the same purpose. For closed systems, as explained in the Cloudflare case, verification can be considered where the key information required is embedded in closed systems and institutions. For open systems, for example a digital public square, convergences can be achieved by technological hedging. Legal systems have to recognise the ontological and practical purpose of these multiple horizontal-level efforts and recognise their value. Now, Law 2.0 implies a harmonious and naturalised integration of technologies into legal fiction. The impact of such integration could be positive, as governance priorities may take shape quite suitably. Balaji Srinivasan's The Network State (2022) discusses the concept of the Network State in that respect quite clearly.

Zero Knowledge and Soft Law

When it comes to soft law, Zero Knowledge Systems can be integrated more easily, owing to the nature of Law 3.0 as a proposed field. Validation can be achieved among self-regulating companies, which can then be addressed by the government at a centralised level. From a theoretical point of view, ZKP may not be needed to achieve complete decentralisation, since centralisation is a part of governance considerations. Now, let us estimate where such validation-requiring Zero Knowledge Proofs could be used. For starters, ZKP can readily be used to build peer-to-peer self-regulating standards.
Taking a cue from the Law 2.0 discussion of a regulator's levels of engagement, while certain critical information is not visible or disclosed, Protocols can be established to analyse the horizontal-level impact of the self-regulating standards already proposed by the government. Since the legal interpretation exists, ZKP makes it possible to provide peer-to-peer, company-related insights through probability. Obviously, not all standards can be enforced directly, and the risk of complicating interpretation through an uninformed or unoptimised legislative intent can be mitigated from a procedural aspect. Another use of ZKP could be in the fintech industry, to prevent predatory retail and credit loan offers from being recommended, which depends on the central bank (in India's case, the RBI).

Unbundling Policy Dynamics with Zero Knowledge

Compared to law, policy dynamics are amorphous in nature. In addition, while policy dynamics intersect with multiple domains, related or unrelated, political consensus and motivation shape political trust. Zero Knowledge Systems can be used to generate policy innovations beyond governance mechanisms and digital public infrastructure. Let us then address political trust quickly. In politics, people-to-government engagement is a generic aspect of building trust. Political trust can also be built by endorsing public-private partnerships and cooperative societies, since the commercial focus may crystallise the avenues of political consensus. Substantive propositions and solutions can also germinate trust. Since interconnectedness is a tangible element of Web2 technologies and their necessities, ZKP can be used to protect that interconnectedness or interoperability, whichever the objective deems fit. The reason is that interoperability may not imply absolute cohesion of data and information, while interconnectedness implies that a mesh of counter-dependencies or co-dependency exists. We can see this aspect behind the proposition of zero-knowledge taxes made by Matthew Niemerg for Yahoo Finance: "Zero-knowledge taxes" describes a situation in which taxes can be filed and verified with zero-knowledge proofs. This could operate through a trusted, third party application that analyzes a user's wallets and calculates taxable events, resulting in a net summary of the individual's taxes for the year. That summary tax payment, along with the proof itself, is submitted to the regulating entity, which can verify through the proof that the tax summary is accurate without needing to see every transaction leading up to the summary. There are multiple issues and risks with this model; while privacy matters, you can read Sanad Arora's article on Central Bank Digital Currencies to understand where the privacy concerns lie and how they can be managed. Another example of applying Zero Knowledge Proofs in policy could be protecting information to promote nuclear disarmament talks. To avoid revealing information about the composition and configuration of the cubes, bubbles created in this manner were added to those already preloaded into the detectors. The preload was designed so that, if a valid object were presented, the sum of the preload and the signal detected with the object present would equal the count produced by firing neutrons directly into the detectors – with no object in front of them. The experiment found that, for the "true" pattern, the sum of the preload and the signal recorded with the object present equalled the count produced when neutrons were beamed with nothing in front of the detectors, while the significantly different "false" arrangements clearly did not match.
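The arithmetic behind that preload trick can be re-created numerically. The sketch below is a toy simulation of the sum-check logic only – detector counts, absorption values and the number of detectors are all invented for illustration, and no attempt is made to model the actual physics of the experiment.

```python
# Toy re-creation of the "preload" sum check described above. All numbers are
# illustrative; this does not model the real neutron physics.
import random

DETECTORS = 8
OPEN_BEAM = 100   # count per detector with nothing in front of it (publicly known)

def signal(design):
    """Counts recorded with an object present; depends on its secret absorption profile."""
    return [OPEN_BEAM - absorption for absorption in design]

# The host preloads each detector with the complement of the genuine design,
# so the preload alone reveals nothing useful to the inspector.
true_design = [random.randint(10, 40) for _ in range(DETECTORS)]
preload = [OPEN_BEAM - s for s in signal(true_design)]

def inspect(presented_design):
    """Inspector only checks whether preload + measured signal equals the open-beam count."""
    totals = [p + s for p, s in zip(preload, signal(presented_design))]
    return all(total == OPEN_BEAM for total in totals)

fake_design = [random.randint(10, 40) for _ in range(DETECTORS)]
print("genuine item accepted:", inspect(true_design))   # True
print("fake item accepted:   ", inspect(fake_design))   # almost certainly False
```

The inspector learns only whether the sums match the open-beam count, not the secret absorption pattern itself, which is what makes the verification "zero knowledge" in spirit.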
Conclusion

This article consists of propositions which are theoretical, because Zero Knowledge Systems require more real-life testing on a case-by-case basis. At the very least, through the efforts of this article, we can rethink and revisit our approaches to decentralising and centralising digital governance. Comments and counterpoints will be appreciated.
- The Digital Personal Data Protection Act & Shaping AI Regulation in India
On August 11, 2023, the President of India gave assent to the Digital Personal Data Protection Act (DPDPA), and upon its notification in the Official Gazette the instrument takes effect as law. There have been multiple briefs, insights and infographics reproduced and published by several law firms across India. This article therefore focuses on the key provisions of the Act and explores how it would shape the trajectory of AI regulation in India, especially considering the recent amendments to the Competition Act, 2002 and the trajectory of the upcoming Digital India Act, which is still in process. You can read the analysis of the Digital India Act as proposed in March 2023 here. You can also find a complete primer on the important provisions of the Digital Personal Data Protection Act here, which are discussed in this article. We urge you to download the file, as we have discussed provisions which are described in that document.

General Review of the Key Provisions of the DPDPA

Let's begin with the stakeholders under this Act. The Digital Personal Data Protection Act, 2023 (DPDP Act) defines the following stakeholders and their relationships:
- Data Principal: the individual to whom the personal data relates.
- Consent Manager: a person or entity appointed by a Data Fiduciary to manage consents for processing personal data.
- Data Protection Board (DPB): a statutory body established under the DPDP Act to regulate the processing of personal data in India.
- Data Processor: a person or entity who processes personal data on behalf of a Data Fiduciary.
- Data Fiduciary: a person or entity who, alone or in conjunction with other persons, determines the purpose and means of processing of personal data.
- Significant Data Fiduciary: a Data Fiduciary that meets certain thresholds, such as, for example, having a turnover of more than INR 100 crores or processing the personal data of more than 50 million data principals. It is to be noted, however, that no specific threshold has been defined in the Act as of now.

The relationships among these stakeholders are as follows:
- The Data Principal is the owner of their personal data and has the right to control how their data is processed.
- The Consent Manager is responsible for managing consents for processing personal data on behalf of the Data Fiduciary.
- The DPB is responsible for regulating the processing of personal data in India. It has the power to investigate complaints, issue directions, and impose penalties.
- The Data Processor is responsible for processing personal data on behalf of the Data Fiduciary, in accordance with the Data Fiduciary's instructions.
- The Data Fiduciary is responsible for determining the purpose and means of processing personal data. They must comply with the DPDP Act and the directions of the DPB.
- A Significant Data Fiduciary has additional obligations under the DPDP Act, such as appointing a Data Protection Officer and conducting data protection impact assessments.

Data Protection Rights

The Act clearly sets out rights for Data Principals and obligations attached to Data Fiduciaries, which are discussed further below. However, a lot of the provisions in the Act contain the clause "as may be prescribed".
This means that a lot of the provisions will remain subject to delegated legislation, which makes sense: the Government could not integrate every aspect of data regulation and protection into the Act and could only lay down specific and basic provisions, which is reasonable from a multi-stakeholder and citizen perspective. Like the General Data Protection Regulation in the European Union, the rights of a Data Principal are clearly defined in Sections 11-14 of the Act, as follows:
- Right to access information about personal data, which includes: a summary of the personal data; the identities of the Data Fiduciaries and Data Processors with whom the personal data has been shared; and any other related information about the Data Principal and the processing itself.
- Right to correction, completion and updating of personal data, and to erasure (deletion) of personal data.
- Right to grievance redressal, which has to be readily available.
- Right to nominate someone else to exercise the Data Principal's data protection rights under this Act.

There are no specific parameters or factors defined when it comes to the Right to be Forgotten (erasure of personal data). Hence, we can expect specific guidelines and circulars to address this issue, along with industry-specific interventions, for example by the RBI in the fintech industry. The provisions containing a list of duties of a Data Principal are referred to here for a reflective purpose: to estimate the policy and ethical perspectives underlying the Data Protection Board's internal expectations. Like the Fundamental Duties, these duties do not have any binding value, nor do they affect data-related jurisprudence in India, especially on matters related to this Act. However, those duties could be invoked by any party to a data protection-related civil dispute for the purposes of interpretation and to elaborate on the purpose of the Act. Nevertheless, invoking the duties of Data Principals has a limited impact.

Legitimate Use of Personal Data

The following are considered "legitimate uses" of personal data by a Data Fiduciary:
- Processing personal data for the Government with respect to any subsidy, benefit, service, certificate, licence or permit prescribed by the Government. For example, to let people avail themselves of the benefits of a government scheme or programme through an app, personal data would have to be processed.
- Processing personal data to fulfil any obligation under any law in force, or to disclose any information to the State or any of its instrumentalities. This is subject to the obligation that the processing of personal data is done in accordance with the provisions regarding disclosure of such information in any other law.
- Processing personal data in compliance with any judgment, decree or order issued in India, or any judgment or order relating to claims of a contractual or civil nature under a law in force outside India.
- When a Data Principal voluntarily provides personal data to the Data Fiduciary (a company, for example). This applies where the Data Principal has not indicated that the Data Fiduciary does not have consent to process the data. It is therefore a negative obligation on the Data Fiduciary: if consent is withheld by indication, the data cannot be processed.
- There are other broad grounds as well, such as national security, the sovereignty of India, disaster management measures, medical services and others.
Major Policy Dilemmas & Challenges with the DPDPA

There are certain aspects of the data protection rights in this Act which must be understood. Publicly available data, as stated in Section 3 of the Act, will not be covered by its provisions. This means that if you post something on social media (for example), or give prompts to generative AI tools, those are not covered under the provisions of this Act in India, unlike the position in many Western countries and even China. Since different provisions give the Data Protection Board the powers of a civil court on specific matters under the Civil Procedure Code of 1908, and since the orders of the Appellate Tribunal under this Act are executable as civil decrees, it clearly signifies that most data protection issues will be commercial and civil law issues. In other countries, the element of public duty (emanating from public law) comes in. This also shows clearly that, in the context of public law, India is not yet opening its approach to regulating the use of artificial intelligence technologies at macro and micro scales. I am certain this will be addressed in the context of high-risk and low-risk AI systems in the Digital India Act. On the transnational flow of data and the issue of building bridges and digital connectivity between India and other countries, the Act gives unilateral powers to the Government to restrict the flow of data whenever it finds a ground to do so. This is why nothing specific as to the measures has been described by the Government yet: the trade negotiations on the information economy between India and stakeholders such as the UK, the European Union and others often get stuck. In fact, this is a general problem across the board for companies and governments around the world, for two simple reasons: (1) the trans-border flow of data is a trade law issue, requiring countries to undertake diplomatic negotiations that often fail to reach consensus, owing to its transactional nature; and (2) data protection law, a subset of technology law, has historical roots in telecommunications law, which is why the contractual and commercial questions around trans-border data flows often do not arrive at firm conclusions. This is related to the poignant issue of moratoriums on digital goods and services under WTO law, which is subject to discussion in future WTO Ministerial Conferences. Here is an excerpt from India and South Africa's joint submissions on 'E-commerce Moratoriums': What about the positive impacts of the digital economy for developing countries? Should these not also be taken into account in the discussion on losses and the impact of the moratorium? After all, it is often said that new digital technologies can provide developing countries with new income generation opportunities, including for their Micro and Small and Medium Sized Enterprises (MSMEs). [...] Further, ownership of platforms is the new critical factor measuring success in the digital economy. The platform has emerged as the new business model, capable of extracting and controlling immense amounts of data. However, with 'platformisation', we have seen the rise of large monopolistic firms. UNCTAD's Digital Economy Report (2019) highlights that the US and East Asia accounts for 90 percent of the market capitalization value of the world's 70 largest digital platforms. Africa and Latin America's share together is only 1 percent.
Seven 'super platforms' – Microsoft, Apple, Amazon, Google, Facebook, Tencent and Alibaba – account for two-thirds of total market value. In particular, Africa and Latin America are trailing far behind. Also, startups have been given exemptions from certain crucial compliances under this Act. This may be justified as a move to promote the Digital India and startup ecosystem, and some may argue that it works against creating a privacy-compliant startup ecosystem. However, an aspect ignored by most critics of this Act (formerly a Bill) is the sluggishness and hawkishness of the bureaucratic mindset behind ensuring compliances; perhaps this gives some room for a flexible compliance environment, if the provisions are used reasonably. How this would affect fintech companies when it comes to data collection-related compliances remains to be seen, although it is clear that the data protection law, within its own limits, will not supersede fintech regulations and other public and private law systems. This means that the fintech regulations on data collection and restrictions on its use will prevail over this data protection law. Data Fiduciaries will have to give notice every time they request consent from a Data Principal in order to collect data. It is rightly argued that merely having a privacy policy would not suffice, since data is collected at multiple instances and in multiple locations of an app or website interface. Here is an illustration from the Act which explains the same: X, an individual, opens a bank account using the mobile app or website of Y, a bank. To complete the Know-Your-Customer requirements under law for opening of bank account, X opts for processing of her personal data by Y in a live, video-based customer identification process. Y shall accompany or precede the request for the personal data with notice to X, describing the personal data and the purpose of its processing. Interestingly, the Act defines obligations for Data Fiduciaries but not for Data Processors, which seems strange. Or, it could be argued that the Government would like to keep the legal issues between a Data Fiduciary and its assigned Data Processors subject to contractual terms. We must remember that, for example, under Section 8(1) of the Act, Data Fiduciaries are required to comply with the provisions of the Act "irrespective of any agreement to the contrary or failure of a Data Principal to carry out the duties provided under this Act", in respect of any processing undertaken by the Data Processor. The issue that may arise is: what happens if the Data Processor makes a serious mistake? What if a data breach is caused by the actions of the Data Processor despite due diligence by the Data Fiduciary? This makes the role of Data Processors more of a commercial law issue, or a dilemma to be settled when contracts are agreed upon, instead of a civil or public law issue in the context of the Act. Finally, the Act introduces a new concept known as the "Consent Manager". As argued by Sriya Sridhar, such a conceptual stakeholder can be related to one of the most successful stakeholder systems created in the RBI's fintech regulation framework, that is, Account Aggregators (AAs).
Since the DPDPA would not have precedence over fintech regulations of the Reserve Bank of India, for example, and since the role of data protection itself could be generalised and tailor-made subject to the best industry-centric regulatory practices, Consent Managers not being Data Fiduciaries would be helpful for AAs as well. Aspects related to the inclusion of artificial intelligence technology in the context of Consent Managers are discussed in the next section of this article, which covers the use of artificial intelligence in the Digital Personal Data Protection Act, 2023.

Key Definitions & Provisions in the DPDPA on Artificial Intelligence

Here are some definitions in Section 2 of the Act, which must be read and understood, to begin with:

(b) “automated” means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data;
(f) “child” means an individual who has not completed the age of eighteen years;
(g) “Consent Manager” means a person registered with the Board, who acts as a single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform;
(h) “data” means a representation of information, facts, concepts, opinions or instructions in a manner suitable for communication, interpretation or processing by human beings or by automated means;
(i) “Data Fiduciary” means any person who alone or in conjunction with other persons determines the purpose and means of processing of personal data;
(j) “Data Principal” means the individual to whom the personal data relates and where such individual is — (i) a child, includes the parents or lawful guardian of such a child; (ii) a person with disability, includes her lawful guardian, acting on her behalf;
(k) “Data Processor” means any person who processes personal data on behalf of a Data Fiduciary;
(n) “digital personal data” means personal data in digital form;
(s)(vii) every artificial juristic person, not falling within any of the preceding sub-clauses;
(t) “personal data” means any data about an individual who is identifiable by or in relation to such data;
(x) “processing” in relation to personal data, means a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as collection, recording, organisation, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction;

The most important definitions with respect to artificial intelligence are in Section 2, especially sub-sections (b), (s)(vii) and (x). The definition of the term "automated" clearly states that "automated" means any digital process capable of operating automatically in response to instructions given or otherwise for the purpose of processing data. This means that AI systems capable of making decisions without human intervention are considered to be "automated" for the purposes of the Act. Of course, this recognition was already implicit, as the integration of AI systems in data processing is a long-known reality. However, the wording makes it explicitly clear.
This definition is broad enough to encompass a wide range of AI systems, including:

Machine learning systems: These systems are trained on large amounts of data to learn how to make predictions or decisions. Once they are trained, they can make these decisions without human intervention.
Natural language processing systems: These systems can understand and process human language. They can be used to generate text, translate languages, and answer questions.
Computer vision systems: These systems can identify and track objects in images and videos. They can be used for tasks such as facial recognition and object detection.

It would be intriguing to observe how this plays out when the Digital India Act is released, since that legislation is proposed to cover high-risk, medium-risk and low-risk AI systems.

Artificial Juristic Person

Furthermore, the definition of "every artificial juristic person" in sub-section 2(s)(vii) of the Act is interesting, considering that the Act uses the word "person" more than thirty times. The definition is important because it helps to clarify what types of AI systems could be considered "legal persons" for the purposes of the law. It states that "artificial juristic person" means every artificial juristic person not falling within any of the preceding sub-clauses. This means that AI systems not explicitly covered by the preceding sub-clauses, such as companies, firms, and associations of persons, may still be considered "artificial juristic persons" if they have the capacity to acquire rights and incur liabilities. The wording is important to notice simply because it allows the Act to apply to AI systems that are not traditionally considered "legal persons". This matters because AI systems are becoming increasingly sophisticated and are capable of making decisions that have a significant impact on people's lives. By classifying AI systems as "legal persons", the Act helps to ensure that these systems are held accountable for their actions and that they are subject to the same legal protections as humans. It could be argued that the definition of "artificial juristic person" in the DPDPA would evolve as AI technology continues to develop and as AI-related personhood issues come up for law and policy stakeholders to address.

To add, the definition of an artificial juristic person clearly points towards an ad hoc (or specific) understanding of legal recognition or legal affirmation, which could be granted to AI systems. This is in line with the ISAIL Classifications on Artificial Intelligence, especially the CEI Classification Method. As per the classifications defined in the 2020 Handbook on AI and International Law, the CEI Method could classify AI as a Concept, an Entity or an Industry. The reference to "artificial juristic persons" can be directly alluded to the classification of an AI system as a Juristic Entity. Here is an excerpt from the 2020 Handbook (pages 45 and 47), explaining the concept of a Juristic Entity in the case of artificial intelligence: On the question of the entitative status of AI, under jurisprudence, there can be 2 distinctions on a prima facie basis: (1) the legal status; and (2) the juristic status. […] In both the cases, it is suitable to establish the substantive attributes of AI both as legal and juristic entities.
There can be disagreements on the procedural attributes here, for the simple reason that, at procedural levels, it is not practically possible to have similar legal standards for different kinds of products and services which involve AI directly or indirectly.

Here are some examples of AI systems that could be considered "artificial juristic persons" under the Act:

Self-driving cars: These cars are capable of making decisions about how to navigate roads and avoid obstacles without human intervention.
Virtual assistants: These assistants can understand and respond to human language, and they can be used to perform a variety of tasks, such as booking appointments, making travel arrangements, and playing music.
Chatbots: These bots can engage in conversations with humans, and they can be used to provide customer service, answer questions, and even write creative content.

Also read: The Legal "Status" of AI: How, why and where

AI as a Consent Manager?

Nevertheless, let's examine where the use of the term "artificial juristic persons" gets intriguing, beginning with the concept of a "Consent Manager". Section 2(g) states that a Consent Manager is a person registered with the Data Protection Board of India acting as the single point of contact to enable a Data Principal to give, manage, review and withdraw her consent through an accessible, transparent and interoperable platform. This means that a Consent Manager can be any person, including an individual, a company, or an AI system. However, in order to be registered with the Board, a Consent Manager must meet certain requirements. In the context of AI, the definition of "Consent Manager" could be interpreted to mean that an AI system could be registered as a Consent Manager. However, it is important to note that the AI system must meet the same requirements as any other Consent Manager, such as having the necessary technical expertise and experience to manage consent effectively. The function of Consent Managers could also be explained in the context of the following sub-sections of Section 5 of the Act:

(7) The Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager.
(8) The Consent Manager shall be accountable to the Data Principal and shall act on her behalf in such manner and subject to such obligations as may be prescribed.
(9) Every Consent Manager shall be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed.

Now, Section 5(7) states that the Data Principal may give, manage, review or withdraw her consent to the Data Fiduciary through a Consent Manager. This means that a Data Principal can use an AI Consent Manager to manage their consent to the processing of their personal data by a Data Fiduciary. Meanwhile, it is interesting to note that Section 5(8) states that the Consent Manager shall be accountable to the Data Principal and shall act on her behalf in such manner and subject to such obligations as may be prescribed. This means that the AI Consent Manager must be designed and used in a way that ensures it is acting in the best interests of the Data Principal. This includes being transparent about how it is using personal data and being able to explain its decisions to the Data Principal.
Finally, Section 5(9) states that every Consent Manager shall be registered with the Board in such manner and subject to such technical, operational, financial and other conditions as may be prescribed. This means that any AI Consent Manager that wants to operate in India must be registered with the Data Protection Board (DPB). The DPB will set out the technical, operational, financial and other conditions that AI Consent Managers must meet in order to be registered. Here are some specific ways that AI could be used to support the functions of Consent Managers:

Automating consent management: AI could be used to automate the process of giving, managing, reviewing and withdrawing consent. This would make it easier for Data Principals to control their personal data and would also reduce the risk of human error.
Providing personalised consent experiences: AI could be used to personalise the consent experience for each Data Principal. This would involve understanding the Data Principal's individual needs and preferences and tailoring the consent process accordingly.
Ensuring transparency and accountability: AI could be used to ensure that consent is transparent and accountable. This would involve tracking how consent is given, managed, reviewed and withdrawn, and providing Data Principals with clear and concise information about how their personal data is being used.

Additionally, the AI system must be designed in a way that ensures it is acting in the best interests of Data Principals. This means that the AI system must be transparent about how it is using personal data and must be able to explain its decisions to Data Principals.

Now, on the rights of Data Principals (data subjects) to grievance redressal, the role of AI as a Consent Manager could become interesting. Section 13(1) of the Act states that a Data Principal shall have the right to have readily available means of grievance redressal provided by a Data Fiduciary or Consent Manager in respect of any act or omission of such Data Fiduciary or Consent Manager regarding the performance of its obligations in relation to the personal data of such Data Principal or the exercise of her rights under the provisions of this Act and the rules made thereunder. This means that a Data Principal can use an AI Consent Manager to file a grievance if they are unhappy with the way their personal data is being handled by a Data Fiduciary. Meanwhile, Section 13(2) states that the Data Fiduciary or Consent Manager shall respond to any grievances referred to in sub-section (1) within such period as may be prescribed from the date of its receipt for all or any class of Data Fiduciaries. This means that the AI Consent Manager must be designed and used in a way that ensures it can respond to grievances in a timely and effective manner.

Here are some use cases of AI in consent management which could be considered (a minimal sketch of such a consent record appears after the discussion of processing below):

Personalised consent experiences: AI can be used to personalise the consent experience for each individual user. This can be done by understanding the user's individual needs and preferences, and tailoring the consent process accordingly. For example, AI could be used to suggest relevant consent options to users, or to provide users with more detailed information about how their data will be used.
Automated consent management: AI can be used to automate the process of giving, managing, reviewing and withdrawing consent.
This can make it easier for users to control their data, and it can also reduce the risk of human error. For example, AI could be used to send automatic reminders to users about their consent preferences, or to automatically revoke consent when a user no longer uses a particular service.
Ensuring transparency and accountability: AI can be used to ensure that consent is transparent and accountable. This can be done by tracking how consent is given, managed, reviewed and withdrawn, and by providing users with clear and concise information about how their data is being used. For example, AI could be used to create an audit trail of consent activity, or to generate reports that show how users' data is being used.
Grievance redressal: AI can be used to support the grievance redressal process. This can be done by automating the process of filing and tracking grievances, and by providing users with clear and concise information about the status of their grievance. For example, AI could be used to create a chatbot that allows users to file grievances without having to speak to a human representative, or to generate reports that show how grievances are being resolved.
Compliance with regulations: AI can be used to help organisations comply with regulations related to consent management. This can be done by tracking consent activity, generating reports, and providing users with clear and concise information about how their data is being used. For example, AI could be used to create a dashboard that shows how an organisation is complying with the General Data Protection Regulation (GDPR), or to generate reports that show how users' data is being used in accordance with the California Consumer Privacy Act (CCPA).

Processing as Defined in the Act

The term “processing” in relation to personal data means a wholly or partly automated operation or set of operations performed on digital personal data, and includes operations such as collection, recording, organisation, structuring, storage, adaptation, retrieval, use, alignment or combination, indexing, sharing, disclosure by transmission, dissemination or otherwise making available, restriction, erasure or destruction.

Now, in the context of digital rights management (DRM), the use of artificial intelligence technology through Data Processors in CMS-based platforms would have to be observed carefully. There are certain activities which could easily be covered by automated intelligence systems, or by AI systems with narrow use cases, such as collection, recording, storage, organisation and others. Since nothing is clearly stated on the role of Data Processors, and the burden is on the Data Fiduciary to ensure compliance with the Act, it would now be a matter of contract how companies structure their agreements with Data Processors to ensure compliance and the redressal of matters among themselves, especially on the limited and specific use of AI. Nevertheless, given the processing capabilities of any AI system, which are preceded by their computational capabilities (for example, Generative AI systems), it would be necessary to see how the prescribed regulations, bye-laws, circulars and industry-based self-regulatory and regulatory measures would work. Data Processors who use artificial intelligence systems would have to clarify the use of AI in their contracts, to keep liability and accountability issues explained, through the contractual arrangement they would naturally have with Data Fiduciaries.
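Since much of the above describes a consent workflow in the abstract, a concrete shape can help. Below is a minimal, hypothetical sketch in Python of what an automated consent record with an audit trail could look like on a Consent Manager platform; the class names, fields and methods are illustrative assumptions and are not prescribed by the DPDPA, the Data Protection Board or any existing Account Aggregator framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch: names and fields are illustrative, not prescribed by the DPDPA.

@dataclass
class ConsentEvent:
    action: str      # "given", "reviewed" or "withdrawn"
    purpose: str     # purpose of processing described in the notice
    timestamp: str   # when the event occurred (UTC, ISO 8601)

@dataclass
class ConsentRecord:
    data_principal_id: str
    data_fiduciary_id: str
    events: List[ConsentEvent] = field(default_factory=list)

    def _log(self, action: str, purpose: str) -> None:
        # Every change is appended, never overwritten, to preserve an audit trail.
        self.events.append(
            ConsentEvent(action, purpose, datetime.now(timezone.utc).isoformat())
        )

    def give(self, purpose: str) -> None:
        self._log("given", purpose)

    def withdraw(self, purpose: str) -> None:
        self._log("withdrawn", purpose)

    def is_active(self, purpose: str) -> bool:
        # Consent stands only if the latest event for this purpose is "given".
        for event in reversed(self.events):
            if event.purpose == purpose:
                return event.action == "given"
        return False

# Usage example with illustrative identifiers
record = ConsentRecord("dp-001", "bank-y")
record.give("KYC video identification")
record.withdraw("KYC video identification")
print(record.is_active("KYC video identification"))  # False
print(len(record.events))                            # 2 entries in the audit trail
```

The design choice worth noting is that consent events are only ever appended, never edited, so the record itself doubles as the audit trail contemplated in the transparency and accountability use case above.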
Role of AI Usage in Shaping Rights of Data Principals

The rights of Data Principals under Sections 11 to 14 of the DPDP Act are important in the context of the commercial and technical use cases of AI applications, especially generative AI applications. Let's decipher that.

Section 11: The right to obtain information about personal data that is being processed by a Data Fiduciary is essential for Data Principals to understand how their data is being used by AI applications. This information can help Data Principals make informed decisions about whether or not to use an AI application, and it can also help them identify and address any potential privacy concerns. However, to complement the instance discussed of AI as a Consent Manager, or the involvement of AI in consent management, the technology-enabled and human-monitored elements of the processing of personal data would have to be explained.

Section 12: The right to correct, complete, update, or erase personal data is also important in the context of AI applications. This is because AI applications can often make mistakes when processing personal data, and these mistakes can have a significant impact on Data Principals. For example, an AI application used to make lending decisions could make a mistake and deny a loan to a Data Principal who is actually eligible for it. The Data Principal's right to correct the mistake is essential to ensuring that they are not unfairly discriminated against.

Section 13: The right to have readily available means of grievance redressal is also important in the context of AI applications. This is because AI applications can be complex, and it can be difficult for Data Principals to understand how their data is being used. If Data Principals believe that their rights under the DPDP Act have been violated, they should be able to easily file a complaint with the Data Fiduciary or Consent Manager.

Section 14: The right to nominate another individual to exercise one's rights under the DPDP Act is also important in the context of AI applications. This is because AI applications can be used to collect and process personal data about individuals who are not able to exercise their own rights, such as children or persons with disabilities. The right to nominate another individual to exercise one's rights ensures that these individuals' rights are still protected.

In addition to the rights listed above, Data Fiduciaries that use generative AI applications must also take steps to safeguard the privacy of Data Principals. This includes using appropriate security measures to protect personal data, and ensuring that generative AI applications are not used to create content that is harmful or discriminatory. Here are some specific safeguards that Data Fiduciaries can implement to protect the privacy of Data Principals when using generative AI applications:

Implement access controls to restrict who can access personal data.
Use anonymisation techniques to remove personally identifiable information from personal data.
Monitor generative AI applications for bias and discrimination.
Educate Data Principals about their privacy rights.

Conclusion & Emerging Policy Dilemmas

Overall, this Act is not a disappointing piece of legislation from the Union Government. However, it is not groundbreaking legislation either, as India's political and technological viewpoints on data and AI regulation are still emerging.
This legislation is clearly emblematic of the fact that merely having data protection laws does not ensure the regulatory malleability and proficiency needed to tackle data-related issues in commercial, technology and public law. Regulatory subterfuge in matters of data law could easily happen when laws are not specific and not rooted enough to be mechanical. The DPDPA is mechanically suitable as a law, and considering India's digital economy-related trade negotiations at the WTO and beyond, the law will serve its intended and general purpose. Of course, the law will be challenged in the Supreme Court, and upcoming bye-laws, regulations and circulars under this Act's provisions will be subject to transformation. However, the competition law and trade law-centric approach towards data regulation and digital connectivity is not shifting to purely civil law and public law issues anytime soon. In the case of artificial intelligence and law, there are certain legal and policy dilemmas that are surely going to emerge.

The Rise of International Algorithmic Law is inevitable

No matter what is argued about the perspective that over-focus on data-related trade issues leads to deviation from larger data law issues, the proper way to resolve and quantify problem-solving legal and policy prescriptions in the case of data law could come from developing a soft law approach to a newer field of global governance and international law, i.e., International Algorithmic Law. Here is an excerpt on the definition of International Algorithmic Law from my paper on the same:

The field of International Law, which focuses on diplomatic, individual and economic transactions based on legal affairs and issues related to the procurement, infrastructure and development of algorithms amidst the assumption that data-centric cyber/digital sovereignty is central to the transactions and the norm-based legitimacy of the transactions, is International Algorithmic Law.

It could easily be argued that data law issues must be addressed by default, and there is no doubt that they should be. However, data protection laws lack the shared legal understanding that could tackle and address the economics behind data colonialism and exploitation. Domestic regulators would also have to develop economic law tools which are principled and rules-based, because regulation by enforcement and endless reliance on trade negotiations alone would never help if a privacy-centric digital economy has to be achieved at domestic levels. Hence, beyond certain general compliances and issues where data protection laws across the world can have impact, developing regulatory tendencies around the anthropomorphic use of artificial intelligence could surely be the best way forward. I would even argue that having AI-related 'treaties' could be possible as well. However, those 'treaties' would not be about some comic book utopia, or a sci-fi movie utopia of political control. They could be about basic ethics issues, data processing issues, or issues related to the optimal explainability of algorithms, their neural networks, and models. Such an instrument could work like a treaty owing to its purpose-based legal workflows and use cases.
Blending Legal Prescriptions of Data Jurisprudence & Intellectual Property Law

Now, this is a controversial suggestion, but in VLiGTA-TR-002, our report on Generative AI applications, I had proposed that for certain intellectual property issues, the protections offered to proprietary information produced by Generative AI applications could be used by companies to justify manufacturing the consent of Data Principals at every stage of prompting, by virtue of the technology's design. In such a case, I had proposed that in the context of copyright law, invoking data protection law could be helpful. Now, considering market trends, I would state that data protection law could also be invoked in the case of trade secrets. Here is an excerpt from the report (page 127):

Regulators would need to develop a better legal recognition regime, where based on the nature of use cases, copyright-related concerns could be addressed or averted. In this case, we have to consider the role of data protection and privacy laws, when it comes to the data subject. However, the legal position to invoke data protection rights of the Data Principals, to address the justification of invoking IP rights of proprietary information by technology companies has to be done to achieve specific remedies.

AI developers and data scientists would also have to address the issue of the bias-variance tradeoff when it comes to their AI models, especially large language models (a toy illustration follows at the end of this article). Here is an excerpt from an article from Analytics India Magazine:

Bias and variance are inversely connected and it is practically impossible to have an ML model with a low bias and a low variance. When we modify the ML algorithm to better fit a given data set, it will in turn lead to low bias but will increase the variance. This way, the model will fit with the data set while increasing the chances of inaccurate predictions. The same applies while creating a low variance model with a higher bias. [...] Models like GPT have billions of parameters, enabling them to process vast amounts of data and learn intricate patterns in language. However, these models are not immune to the bias-variance tradeoff. Moreover, it is possible that the larger the model, the chances of showing bias and variance is higher. [...] To tackle underfitting, especially when the training data contains biases or inaccuracies it is important to include as many examples as possible. [...] On the other hand, over-explanation to models to perfectly align with human values can lead to an overfit model that shows mundane and results that represent only one point of view. This often happens because of RLHF, the key ingredient for LLMs like OpenAI’s GPT, which has often been criticised to be too politically correct when it shouldn’t be. To mitigate overfitting, various techniques are employed, such as regularisation, early stopping, and data augmentation. LLMs with high bias may struggle to comprehend the complexities and subtleties of human language. They may produce generic and contextually incorrect responses that do not align with human expectations.

To conclude, the economics of AI explainability, in the case of India's Digital Personal Data Protection Act, can be developed by the market in India. If we achieve the economics that makes AI explainable, accountable and responsible enough to enable sustainable business models, then a lot could be achieved on the front of data protection ethics and standards to enable a privacy-compliant ecosystem.
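As a closing aside on the bias-variance excerpt quoted above, the tradeoff is easier to see in a toy experiment than in a large language model. The sketch below is a hypothetical illustration in Python, using only NumPy and synthetic data; it fits polynomials of increasing degree to the same noisy sample, where the low-degree fit underfits (high bias) and the high-degree fit overfits (high variance) — the same tension the excerpt describes for LLMs and that techniques like regularisation or early stopping are meant to manage.

```python
import numpy as np

# Toy, synthetic illustration of the bias-variance tradeoff; not related to any real model.
rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(2 * np.pi * x)

# Small noisy training set and a separate validation set drawn from the same function.
x_train = rng.uniform(0, 1, 15)
y_train = true_fn(x_train) + rng.normal(0, 0.2, 15)
x_val = rng.uniform(0, 1, 200)
y_val = true_fn(x_val) + rng.normal(0, 0.2, 200)

for degree in (1, 4, 10):
    # polyfit returns least-squares polynomial coefficients of the given degree.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  validation MSE={val_err:.3f}")

# Typically: degree 1 shows high error on both sets (high bias / underfitting),
# degree 10 shows near-zero training error but a larger validation error
# (high variance / overfitting), while an intermediate degree balances the two.
```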
- Generative AI and Law Workshop for upGrad
Our Founder and Managing Partner, Abhivardhan, is glad to hold a 2-hour virtual workshop with upGrad on Generative AI and Law. The workshop is free to attend virtually, and upGrad has stated that they will provide a certificate upon completion. Abhivardhan will discuss nuances related to the use of Generative Artificial Intelligence in the legal industry, especially when it comes to document analysis and legal research. The workshop will also cover some nuances related to the legal issues around Generative AI tools, especially prompt engineering, cybersecurity and intellectual property-related issues. Register for the workshop for free at https://www.upgrad.com/generative-ai-law-workshop/

About Abhivardhan, our Founder and Managing Partner

Throughout his journey, he has gained valuable experience in international technology law, corporate innovation, global governance, and cultural intelligence. With deep respect for the field, Abhivardhan has been fortunate to contribute to esteemed law, technology, and policy magazines and blogs. His book, "AI Ethics and International Law: An Introduction" (2019), modestly represents his exploration of the important connection between artificial intelligence and ethical considerations. Emphasising the significance of an Indic approach to AI Ethics, Abhivardhan aims to bring diverse perspectives to the table. Some of his notable works also include the 2020 Handbook on AI and International Law, the 2021 Handbook on AI and International Law, and the technical reports on Generative AI, Explainable AI and Artificial Intelligence Hype.
- New Report: Promoting Economy of Innovation through Explainable AI [VLiGTA-TR-003]
We are more than glad to release another technical report by the VLiGTA team. This report takes a business-oriented, generalist approach to AI explainability ethics. We express our gratitude to Ankit Sahni for authoring a foreword to this technical report. This research is a part of the technical report series by the Vidhitsa Law Institute of Global and Technology Affairs, also known as VLiGTA® - the research & innovation division of Indic Pacific Legal Research.

Responsible AI has been a part of the technology regulation discourse for the AI industry, policymakers and the legal industry alike. As ChatGPT and other kinds of generative AI tools have become mainstream, the call to implement responsible AI ethics measures and principles in some form becomes a necessary one to consider. The problem lies with the limited and narrow approach of these responsible AI guidelines, because of fiduciary interests and the urge to be reactive towards any industry update. This is exactly where this report comes in. The problems with Responsible AI principles and approaches can be encapsulated in these points:

AI technologies have use cases which are fungible
There exist different stakeholders for different cases on AI-related disputes which are not taken into consideration
Various classes of mainstream AI technologies exist, and not all classes are dealt with by every major country in Asia which develops and uses AI technologies
The role of algorithms in shaping the economic and social value of digital public goods remains unclear and uneven within law

This report is thus a generalist yet specificity-oriented work, intended to address and explore the necessity of internalising AI explainability measures. We are clear, with a sense of perspective, that not all AI explainability measures can be considered limited to the domains of machine learning and computer science. Barring some hype, there are indeed some transdisciplinary and legal AI explainability measures which could be implemented. I am glad my co-authors from the VLiGTA team did justice to this report. Sanad Arora, the first co-author of this report, has contributed extensively on aspects related to the limitations of responsible AI principles and approaches. He has also offered insights on the issue of convergence of legal and business concerns related to AI explainability. Bhavana J Sekhar, the second co-author, has offered her insights on developing AI explainability measures to practise conflict management when it comes to technical and commercial AI use cases. She has also contributed extensively on legal and business concerns pertaining to the enabling of AI explainability in Chapter 3. Finally, it has been my honour to contribute on the development of AI explainability measures to practise innovation management, when it comes to both technical and commercial AI use cases. I am glad that I could also offer an extensive analysis on the socio-economic limits of the responsible AI approaches at present. You can now access the complete report on the VLiGTA App: https://vligta.app/product/promoting-economy-of-innovation-through-explainable-ai-vligta-tr-003/

Recommendations from VLiGTA-TR-003

Converging Legal and Business Concerns

Legal and business concerns can be jointly addressed by XAI, where data collected from XAI can be used to address regulatory challenges and help in innovation, while ensuring accountability at the forefront.
Additionally, information from XAI systems can assist in developing and improving specific, tailor-made risk management strategies and ensure risk intervention at the earliest. Explainable AI tools can rely on prototype models with self-learning approaches; model-agnostic explanation is also highly flexible, since it only needs access to the model's output (a brief illustration of one such technique follows at the end of this item). Privacy-aware machine learning tools can also be incorporated into the development of explainable AI tools to avoid possible risks of data breaches and privacy violations. Compliances may be developed and used for development purposes, including the general mandates that are attributed to them.

Conflict Management

Compliance by design may become a significant aspect of encouraging the use of regulatory sandboxes and enabling innovation management in as productive a way as possible. In case sandboxes are rendered ineffective, real-time awareness and consumer education must be undertaken, keeping technology products and services accessible and human-centric by design. Risk management strategies are advised to be incorporated at different stages of the AI life cycle, from the inception of data collection and data training. De-risking AI can involve model risk assessment by classifying an AI model based on its risk (high, medium or low) and its contextual usage, which will further assist developers and stakeholders in jointly developing risk mitigation principles according to the level of risk incurred by the AI. Deployment of AI explainability measures will require a level of decentralisation, where transdisciplinary teams work closely to provide complete oversight. Risk monitoring should be carried out by data scientists, developers and KMPs to share overlapping information and improve situational analysis of the AI system periodically.

Innovation Management

The element of trust is necessary, and the workflow behind the purpose of data use must be made clear by companies. Even if the legal risks are not foreseeable, companies can at least make decisions which de-risk the algorithmic exploitation of personal and non-personal data, metadata and other classes of data and information. These involve technical and economic choices first, which is why, unless regulators come up with straightforward regulatory solutions, companies must see how they can minimise the chances of exploitation, enhance the quality of their deliverables, and keep their knowledge management practices safer.
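On the recommendation above that model-agnostic explanation is flexible because it only needs a model's outputs, one commonly cited technique is permutation importance. The sketch below is a hypothetical, self-contained Python illustration on synthetic data, not an excerpt from the report: it scores each input feature by how much shuffling that feature degrades the model's accuracy, without inspecting the model's internals at all.

```python
import numpy as np

# Illustrative, synthetic example of a model-agnostic explanation (permutation importance).
rng = np.random.default_rng(42)

# Synthetic dataset: only the first two features actually matter.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

def opaque_model(X):
    # Stand-in for any black-box model: we only use its predictions, never its internals.
    return (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

def accuracy(model, X, y):
    return float(np.mean(model(X) == y))

baseline = accuracy(opaque_model, X, y)
for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    # Break the link between this feature and the target by permuting the column.
    perm = rng.permutation(len(X_shuffled))
    X_shuffled[:, feature] = X_shuffled[perm, feature]
    drop = baseline - accuracy(opaque_model, X_shuffled, y)
    print(f"feature {feature}: importance (accuracy drop) = {drop:.3f}")

# Features 0 and 1 should show a clear drop; features 2 and 3 should stay close to zero.
```

Because the explanation only calls the model as a black box, the same loop would work unchanged for any classifier exposed behind an API, which is what makes this family of techniques attractive for the compliance and oversight settings discussed above.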
- Arbitrating GST Disputes Arising out of Contractual Arrangements in India
DISCLAIMER: The contents of this blog article reflect the personal views of the authors alone and do not constitute the views of any of the authors' affiliated organizations. The contents of the blog article cannot be treated as legal advice under any circumstances. The main author of the article is a Senior Associate at Ratan Samal & Associates and an Arbitrator at the Asia Pacific Centre for Arbitration and Mediation & the Indian Institute of Arbitration and Mediation. This article is co-authored by Abhivardhan, Managing Partner at Indic Pacific Legal Research, Founder, VLiGTA and Chairperson & Managing Trustee at the Indian Society of Artificial Intelligence and Law.

Introduction

The unified indirect tax system of India, viz., the Goods and Services Tax (GST), has entered its sixth year. Despite its immense potential, disputes continue to grow. A major portion of these disputes are against adjudication by the GST Department, representing a dispute against the sovereign right, power and function of the Government to levy tax, to withhold refund, or to grant or deny a tax incentive. However, an equally significant portion of GST disputes pertain to contractual rights arising out of contracts entered into between parties, where the subject matter relates substantially to the shifting of the burden of GST, indemnification by the defaulting party to the aggrieved party for non-payment of GST to the Government, GST reimbursement arrangements, tax-sharing arrangements, deemed export disputes and the like. This article argues that even though a significant portion of disputes under the GST law are non-arbitrable, as they pertain to disputes with the Government representing the sovereign power to tax, contractual disputes arising between companies and other forms of entities and legal persons, where the subject matter of the dispute pertains to GST, are arbitrable. A clear demarcation and identification of the distinction between the two can significantly aid companies, entities and legal persons in correctly contesting their case before the appropriate forum.

Identifying Non-Arbitrable Disputes under the Goods and Services Tax Law

The GST law is a combination of multiple statutes operating simultaneously on the respective subject matters assigned to them by their ‘charging mechanism’. The substratum of the GST statutes is ‘supply’, wherein tax is levied on the supply of goods or services or both. Since the spirit of cooperative federalism is imbibed within the GST statutes, the Central Goods and Services Tax (CGST) Act, 2017 and the State Goods and Services Tax (SGST) Act, 2017 levy GST on all intra-State supplies of goods or services or both proportionately, and in case a transaction takes place within a Union Territory, the CGST Act, 2017 and the Union Territory Goods and Services Tax (UTGST) Act, 2017 apply proportionately. By proportionate application, it is meant that if the rate of GST for a particular supply is 18%, then 9% CGST and 9% SGST or UTGST, as the case may be, will apply. As far as inter-State supplies, imports and exports (including refunds thereof read with provisions of the CGST Act, 2017) are concerned, the Integrated Goods and Services Tax (IGST) Act, 2017 applies, and there is no proportionate levy of tax since only one statute applies to such forms of supply. Coming to the non-arbitrable aspects, the Supreme Court of India in Vidya Drolia & Ors. v.
Durga Trading Corporation & Ors., (2021) 2 SCC 1 has held that taxation is a sovereign function of the State and is therefore non-arbitrable. This means that disputes arising out of adjudication u/s 73 or 74, denial of refund u/s 54, denial of input tax credit u/s 16, cancellation of registration u/s 29, rejection of appeals, orders of anti-profiteering u/s 171 of the CGST Act, 2017, and like matters where the dispute is against the Goods and Services Tax Department or the Central Government or the respective State Government, will be non-arbitrable. Hence, the appellate route before the quasi-judicial appellate authority, followed by the Goods and Services Tax Appellate Tribunal (not yet constituted), followed by the High Court and the Supreme Court, will have to be opted for, unless there is a violation of fundamental rights or of the principles of natural justice, the order passed is wholly without jurisdiction, or the vires of particular provisions of the GST statutes or their respective delegated legislation in the form of rules, notifications, circulars and the like are challenged, in which case a Writ Petition can be filed before the High Court directly without undergoing the appellate route.

Assessing the Arbitrability of Goods and Services Tax Disputes from Contractual Arrangements

This part of the article delves into a few of the most common forms of GST-related arrangements reflected in contracts, often as part of the clauses of the respective contract.

Contractual Shifting of the Burden of GST

The contractual shifting of the burden of GST is one of the most common forms of clauses seen in several contracts, especially in the construction sector and in contracts with the Government and with public sector undertakings, and it was also extant under the erstwhile indirect tax laws. However, it is necessary to point out that the incidence of tax under the GST statutes will not change: the legal person liable to pay tax as per the charging mechanism will have to bear the tax, with a subsequent contractual right to recover the amount from the other party in case the other party had agreed under the contract to bear such tax. Under GST, it is the supplier of goods or services or both, as the case may be, who has to pay the tax under the forward charge mechanism. A few exceptions exist where the recipient of goods or services or both, as the case may be, has been made liable to pay tax under the reverse charge mechanism. For example, if, in a transaction where the recipient was supposed to pay GST under the reverse charge mechanism, the recipient has entered into a contract with its supplier that the supplier will bear the GST, then while filing the monthly returns in FORM GSTR-3B, the recipient will still have to pay the tax amount under the reverse charge mechanism, and it will not be open for the recipient to insist on recovery from the supplier at that stage merely because of the contract. However, after such payment is made by the recipient, the recipient of the supply would be entitled to recover the amount from the supplier in pursuance of the contractual arrangement between them which foists GST liability on the supplier.
In the absence of such a contractual arrangement, the recipient would have paid the tax without any further right to recover the amount from the supplier; it is only due to the contractual arrangement for shifting the burden of tax that the recipient has the right to recover the GST amount from its supplier. In the presence of an arbitration clause in a contract where the shifting of the burden of taxes has been agreed upon, a dispute between the Government and the recipient, on whom liability is foisted under the reverse charge mechanism by the respective GST statute, would be non-arbitrable, since it is a right in rem and also represents the sovereign right of the Government to levy and collect tax from the recipient as per the charging mechanism under GST. The subsequent dispute between the recipient and the supplier, wherein the supplier had agreed to the shifting of the burden of tax, would be an arbitrable dispute, being a right in personam arising out of a contract. Similarly, in a transaction where GST is to be paid by the supplier under the forward charge mechanism and a contractual arrangement exists between the supplier and the recipient that the supplier will bear the entire GST amount, the recipient can choose to deduct GST and disentitle the supplier from collecting the tax amount from the recipient, resulting in the supplier paying the taxes from its own pockets instead of collecting them from the recipient, as would have been the scenario under normal circumstances. As with the aforesaid, a dispute between the supplier and the recipient in respect of the deduction of the GST amount from the payment would be a right in personam and arbitrable as per the arbitration agreement envisaged in the contract.

The Supreme Court of India in Rashtriya Ispat Nigam Limited v. M/s Dewan Chand Ram Saran, (2012) 5 SCC 306 set aside the judgment of the Bombay High Court which had interfered with an Arbitral Award interpreting a clause of the contract pertaining to the shifting of the burden of Service Tax. In this case, the parties had entered into a contract wherein the contractor, who was the service provider, was to bear the entire Service Tax amount. In the absence of the contract, the service provider would collect such tax from the service recipient and pay it to the Government Treasury. However, due to the contractual shifting of the burden, the service recipient in the instant case deducted the Service Tax component from the payment of consideration. This resulted in the service provider invoking arbitration against the service recipient, wherein the Arbitrator held that, as per the contractual terms between the parties, the service recipient was correct in deducting the payments of Service Tax, as the burden was on the service provider to bear the Service Tax. Upon challenge before the Bombay High Court, the Arbitral Award was interfered with and set aside; upon further appeal, the Supreme Court held that the Arbitrator had interpreted the contract correctly and that the Bombay High Court's interference with the Arbitral Award was unjustified.

The Delhi High Court in Spectrum Power Generation Limited v. Gail (India) Limited, (2022) SCC OnLine Del 4262 was faced with a dispute arising out of a Gas Sale Agreement wherein the petitioner company had invoked arbitration after failed attempts at conciliation and had filed a petition u/s 11(6) of the Arbitration and Conciliation Act, 1996 before the Delhi High Court for the appointment of an Arbitrator.
The respondent company's argument was that the dispute was non-arbitrable, since it pertained to the contractual shifting of the burden of GST and Value Added Tax (VAT) on gas. However, the Delhi High Court held that such disputes were arbitrable and allowed the petition, resulting in the appointment of an Arbitrator by the Delhi High Court u/s 11(6) of the Arbitration and Conciliation Act, 1996. The Bombay High Court in Angerlehner Structural and Civil Engineering Company v. Municipal Corporation of Greater Bombay, (2022) 103 GSTR 336, in an Arbitration Execution Application, was also faced with the question as to whether there was contractual shifting of the burden of taxes between the parties. The Court held that no such contractual arrangement existed between the parties and that the withholding of GST by the recipient was unjustified; accordingly, the recipient was directed to pay the GST amount with interest to the supplier, who in turn would deposit it in the Government Treasury. Therefore, the legal principle which emerges from the aforesaid judgments and discussions is that contractual arrangements for shifting the burden of tax are valid forms of contract, and in case of any dispute in respect of the same, such disputes are arbitrable as per the arbitration agreement in the contract.

Reimbursement and Tax-Sharing Arrangements

Parties may also enter into contractual arrangements pertaining to reimbursement of GST, as well as GST-sharing arrangements. Similar to the scenario of contractual shifting of the burden of taxes, the legal person chargeable to tax as per the charging mechanism will have to pay the GST, and in case of a dispute pertaining to contractual clauses on reimbursement of GST or tax-sharing arrangements, arbitration can be invoked, as those would be arbitrable disputes. The Delhi High Court in Indian Railway Catering & Tourism Corporation (IRCTC) Ltd. v. Deepak & Co., (2022) 104 GSTR 475, inter alia, upheld the Award passed by the Arbitrator which granted reimbursement of GST with interest. Although the reasoning of the Arbitrator was upheld on the basis of contractual interpretation, the judgment is also indicative of the fact that reimbursement of GST would be an arbitrable dispute as a contractual right in personam.

Indemnification of the Recipient by the Supplier for Default in Payment of GST by the Supplier

There is an upsurge in disputes pertaining to input tax credit under GST, arising because the supplier has not paid tax to the Government Treasury. In the normal chain of transactions, the recipient of goods or services (the purchaser) pays the consideration amount as well as the amount of GST charged in the tax invoice raised by the supplier (the seller), and the supplier is liable to pay such GST collected from the recipient to the Government Treasury. However, in many cases it is seen that the supplier, despite having collected tax from the recipient, does not deposit it in the Government Treasury, resulting in recovery action being taken against the supplier as well as the recipient. Even after having discharged its obligations, the recipient is faced with difficulties due to the inaction of the supplier, resulting in the ineligibility of the input tax credit for the recipient in pursuance of Section 16(2)(c) of the CGST Act, 2017.
This, of course, does not apply to instances where the supplier and the recipient are acting in collusion to defraud the Government, but applies in cases where the recipient was under the bona fide belief that its supplier is a genuine dealer and, despite the consideration and the GST amount having been paid in full and in time by the recipient to the supplier, the supplier does not deposit the GST in the Government Treasury. Parties may choose to incorporate clauses in their contracts for their respective transactions, so that, contingent on the recipient (purchaser) facing any difficulties from the GST Department due to non-payment of GST in the Government Treasury by the supplier, the recipient will be entitled to be indemnified for the demand created against it by the GST Department. Under normal circumstances, even after paying the GST amount in full to the supplier for depositing in the Government Treasury, due to the inaction or non-compliance of the supplier, the recipient is saddled with having to reverse input tax credit along with interest at the rate of 24% u/s 50(3) of the CGST Act, 2017, and with penalty u/s 122 r.w.s. 73 or 74 of the CGST Act, 2017, as the case may be (a hypothetical illustration of this exposure appears below). This is why having a contingent contractual clause for indemnity can aid the recipient in being indemnified for the input tax credit reversal, interest and penalty amounts suffered by it due to the inaction and non-compliance of the supplier in depositing tax to the Government Treasury. It is noteworthy that since the dispute in this respect would be regarding indemnification arising out of a contingent contract between parties to the contract, such a dispute would be arbitrable.

Deemed Export Disputes

Certain supplies under GST are treated as deemed exports. When a supplier makes a supply of goods to a recipient registered with an Export Promotion Council or a Commodity Board recognised by the Department of Commerce, including Export Oriented Units, such supplies are treated as deemed exports even though the goods do not leave the territory of India. Additionally, for a supply to be treated as a deemed export under GST: the recipient must export the goods to a place outside the territory of India within 90 days of issuance of the tax invoice by the supplier; the tax invoice issued must contain the GSTIN of the supplier; the shipping bill or the bill of export must contain the tax invoice number; the recipient must transport the goods directly from the port, inland container depot, airport, land customs station or a registered warehouse from where the goods shall be directly exported; and copies of the shipping bill or bill of export, export manifest, tax invoice and export report must be provided to the supplier as well as the jurisdictional officer. The benefit of a transaction being treated as a deemed export under GST is that the supplier has to pay tax at a concessional rate after collecting such concessional tax amount from the recipient. The benefit of such a concessional rate of tax is provided to deemed export supplies because the transaction is made in the course and furtherance of export, which ultimately results in the generation of valuable foreign currency, and therefore no taxes should be exported in the entire chain of export.
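Returning briefly to the indemnification scenario above, the small worked example below (in Python) illustrates the recipient's exposure when input tax credit must be reversed with interest at 24% per annum, as referred to under Section 50(3); the invoice figure, the period and the penalty amount are assumptions chosen purely for illustration and are not drawn from the statute or from any case discussed here.

```python
# Hypothetical illustration of the recipient's exposure on ITC reversal;
# all figures are assumed for illustration and are not drawn from the statute.
itc_reversed = 180_000          # input tax credit reversed (INR), assumed
months_elapsed = 10             # period between availment and reversal, assumed
annual_interest_rate = 0.24     # 24% p.a., the rate referred to under Section 50(3)

interest = itc_reversed * annual_interest_rate * (months_elapsed / 12)
penalty = 50_000                # placeholder; the actual penalty depends on the provision invoked

total_exposure = itc_reversed + interest + penalty
print(f"ITC reversed : {itc_reversed:>10,.0f}")
print(f"Interest     : {interest:>10,.0f}")   # 180,000 x 0.24 x 10/12 = 36,000
print(f"Penalty      : {penalty:>10,.0f}")
print(f"Total        : {total_exposure:>10,.0f}")
```

Under an indemnity clause of the kind described above, it is this total exposure that the recipient would seek to recover from the defaulting supplier through arbitration.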
Coming to the arbitrability perspective, since there are numerous and strict conditions to be fulfilled by the recipient, in case of non-compliance by the recipient with any of the conditions, it is the supplier that faces action from the GST Department, wherein tax at the full rate is demanded along with interest and penalty. This is capable of causing significant difficulties for suppliers in deemed export transactions. In scenarios where the recipient does not export the goods within 90 days of issuance of the tax invoice by the supplier, or does not comply with the conditions relating to the export documents to be submitted to its jurisdictional officer, and a contractual arrangement mandating the aforesaid requirements is in place, the supplier would be entitled to invoke arbitration alleging breach of the contractual clauses. This would enable the supplier to recover the tax at the full rate, along with the interest and penalty paid by it against the demand created by the GST Department due to the non-fulfilment of the conditions of deemed export by the recipient of such goods.

Conclusion

There are manifold possibilities of disputes arising out of contractual arrangements pertaining to GST, and only the most common forms of disputes arising between parties in this respect have been discussed in the present article. It is evident from the aforesaid discussions that contractual arrangements pertaining to GST are arbitrable in case of disputes or differences arising out of such contractual clauses, and that arbitrating such disputes would significantly assist parties in avoiding payment of tax, interest and penalty liabilities from their own pockets due to the default of the other party in the transaction, as the aggrieved party will be able to invoke arbitration for recovering the said amounts from the defaulting party.
- Social Media to Recommendation Media: AI & Law Design Perspectives
In 2022, if we look at the interconnected role of the design behind our mainstream social media applications and the algorithms used to run them, a new trend has become quite real, one which raises some intriguing legal questions in areas of concern such as competition law, digital accessibility and technology policy. Social media influencers, content creators and even technology geeks have noticed this trend: various social media applications, be it Instagram, Twitter or any other, are now behaving as recommendation media applications. The 10-second video trends promoted by Tiktok, for example, promoted the algorithmic tendency of recommending content with certain favourable parameters, thereby giving a hard time to YouTube and Instagram. Even Spotify has been affected by the 10-second video trends, making recommendation media the newest version of social media. In this article, the legal and policy challenges around the transition from social media to recommendation media are discussed. The endeavour behind the article is to unpack the algorithmic activities behind the rise of recommendation media, and to assess how legal dilemmas may arise.

The Emergence of Recommendation Media

Let us first understand social media in brief terms. It is a digital medium through which users of the platform “socialise” with each other. We may also say that social media has two important features which make it characteristically important: the human element of engagement of users (who are data subjects), and the technology element of the platform itself - the UI/UX, the code, the algorithms and even the stakeholders involved in the life cycle and maintenance of the platform. The relationship between the technology involved and the human data subject defines the responsible and explainable features of the social media technology as a whole, while we also see the emergence of other forms of incidence which could relate to the platform and its social, political, economic and other relevant forms of use. Now, there are similarities among social media platforms in many ways: how they are useful, how they affect the civil liberties of their own users, how they put their algorithmic infrastructure to use to moderate user content, and many others. Over time, the use of algorithms on social mediums, especially mainstream ones such as Twitter, Instagram, LinkedIn, Facebook and others, did drive and create a sphere of both discourse and private censorship. However, the way algorithms function and shape social media discourse has surely changed. Tiktok is quite an important driver of this trend as well, considering that the app, by introducing Tiktok Music, would surely affect the way Spotify controls a significant place in the market. The rise of recommendation media, however, isn't just driven by the “Tiktok Effect” as we know it. Due to developments in the United States as far as its domestic issues are concerned, private censorship and algorithm-driven discourse on social media platforms have considerably affected international discourse and content creation. The self-regulation approaches of the big technology (FAAMG) companies do affect the knowledge and information economies of the Global South, where governments in Asia and Africa are questioning the lack of transparency in such self-regulation policies, such as leadership hierarchies and community standards.
This led the existing players to promote the concept of recommendation media, where parameters rule visibility. Alongside this development, alternative means of digital media emerged, starting from the United States: Substack, Revue and even Clubhouse represented those "alternatives" as we know them. Michael Mignano explains how recommendation media actually works in an article entitled The End of Social Media: In recommendation media, content is not distributed to networks of connected people as the primary means of distribution. Instead, the main mechanism for the distribution of content is through opaque, platform-defined algorithms that favor maximum attention and engagement from consumers. The exact type of attention these recommendations seek is always defined by the platform and often tailored specifically to the user who is consuming content. For example, if the platform determines that someone loves movies, that person will likely see a lot of movie related content because that’s what captures that person’s attention best. This means platforms can also decide what consumers won’t see, such as problematic or polarizing content. It’s ultimately up to the platform to decide what type of content gets recommended, not the social graph of the person producing the content. In contrast to social media, recommendation media is not a competition based on popularity; instead, it is a competition based on the absolute best content. Through this lens, it’s no wonder why Kylie Jenner opposes this change; her more than 360 million followers are simply worth less in a version of media dominated by algorithms and not followers. Sam Lessin explains this phenomenon of recommendation through a cycle of content marketing, as described in his tweet. Content creators are now stuck in a different kind of loop, because it might lead to what Sam calls Stage 5 of digital entertainment and content (which may or may not apply to knowledge economics as much). Algorithms now take the larger helm in shaping discourse and content-driven reach for users: algorithmically sourced content can take over human content, followed by personalised generated content competing with every facet of algorithmically sourced content. It is important to note that this cycle could remain a theoretical guess and might not happen soon. What is important to realise, however, is that this cycle helps us understand how recommendation media could have special repercussions for the way digital media transforms. Ethical and Economic Implications of Recommendation Mediums To understand the ethical repercussions behind the purpose and use of recommendation mediums, it is necessary to understand their economics, in some way. Instagram is a reasonable example. To compete with TikTok's short videos, Instagram came up with Instagram Reels, which has created an interesting competitive streak against TikTok. As of now, Instagram has to make some choices in shaping its own platform, as we know that Meta (or FB platforms in general) more or less has an interface problem, not an algorithm problem. Here is an excerpt from a screenshot of a tweet by Sam Lessin: I saw someone recently complaining that Facebook was recommending to them…a very crass but probably pretty hilarious video.
Their indignant response [was that] “the ranking must be broken.” Here is the thing: the ranking probably isn’t broken. He probably would love that video, but the fact that in order to engage with it he would have to go proactively click makes him feel bad. He doesn’t want to see himself as the type of person that clicks on things like that, even if he would enjoy it. This is the brilliance of Tiktok and Facebook/Instagram’s challenge: TikTok’s interface eliminates the key problem of what people want to view themselves as wanting to follow/see versus what they actually want to see…it isn’t really about some big algorithm upgrade, it is about releasing emotional inner tension for people who show up to be entertained. There are some ontological changes that recommendation mediums certainly bring for content creators and users, which cannot be ignored. Those important changes are described as follows: (1) Recommendation media creates a vertical hierarchy of rankings for any digital post on the platform, while horizontal reach is left entirely to the user. Since algorithms now drive content, vertical reach through endlessly scrolling content is the new normal. Even platforms like YouTube and Twitter are mainstreaming this in their own league, be it YouTube Shorts, Revue or Twitter Communities. (2) It enables a user to mainstream their content by contributing to multiple flows of content escalation, through any parameter possible. TikTok, as an example, shows that it could be a 10-second soundtrack, which Instagram certainly resembles as well. However, there may be other subtle parameters too - the graphics involved, the caption styling, or anything else. (3) While we know that social mediums promote a sense of monoculture in action, which has some economic imprints, recommendation mediums enforce monocultural trends using algorithms, which also raises several IP concerns driven by algorithmic choices and the adaptivity of expression that any digital content ought to have. In some respects this might simplify IP (mostly copyright) issues, but clear resolutions do not seem to materialise. (4) Recommendation mediums, unlike social mediums, certainly do not drive an organic flow of discourse, since they are algorithmically driven. The flow of content expression is going to be steered by the recommendation algorithms, which validate or invalidate the content flow. We cannot deny that technology companies have internal policies or strategic approaches governing how their algorithms make these choices. However, at some point, even they cannot simply control the trends. This statement by Mark Zuckerberg about News Feed on Facebook explains the problem: We really messed this one up. When we launched News Feed and Mini-Feed we were trying to provide you with a stream of information about your social world. Instead, we did a bad job of explaining what the new features were and an even worse job of giving you control of them. I'd like to try to correct those errors now. When I made Facebook two years ago my goal was to help people understand what was going on in their world a little better. I wanted to create an environment where people could share whatever information they wanted, but also have control over whom they shared that information with. I think a lot of the success we've seen is because of these basic principles.
We made the site so that all of our members are a part of smaller networks like schools, companies or regions, so you can only see the profiles of people who are in your networks and your friends. We did this to make sure you could share information with the people you care about. This is the same reason we have built extensive privacy settings — to give you even more control over who you share your information with. Somehow we missed this point with News Feed and Mini-Feed and we didn't build in the proper privacy controls right away. This was a big mistake on our part, and I'm sorry for it. But apologizing isn't enough. I wanted to make sure we did something about it, and quickly. Now, it is necessary to understand that the point of being on any social medium, unlike online editorial publications, was that most of these mainstream platforms were at least horizontal in character and reach. Users, be they individuals, businesses or even governments, could act in a horizontal fashion and rely on the organic reach of digital content. Interestingly, Instagram now has three choices to make as it shapes its own platform: (1) a shift towards ever more immersive mediums (for example, text to video to 3D to VR); (2) the increasing and penetrating use of artificial intelligence (from AI rankings and recommendations to generation itself); and (3) a change in interaction models from user-directed to computer-controlled (from clicks and scrolls to autoplays). This could be an inevitable choice for many digital content platforms, as well as for social mediums that may come into existence in the near future. So, yes - there could be ethical problems, which stem from the classical questions of transparency, explainability and responsibility of algorithms. Earlier, the black box problem and the lack of transparency largely drove how AI estimates data subjects and their choices online. Now, it is becoming clearer that the dynamics of expression and reach are going to change in fundamental ways. To Conclude, Some Legal Dilemmas To conclude, there could be some issues with the rise of recommendation mediums, many of which are obvious, with some being fresh problems: Hostage of Expression and Speech: When algorithms drive content, there could be allegations of curbing freedom of speech and expression, which may be countered by justifying self-regulation policies and explaining some aspects of the algorithm-driven decisions made to moderate and even recommend or invalidate any digital content. These issues have existed in the Web2 sphere for FAAMG platforms for long, especially in the US. Proposed solutions include proper oversight, strengthened audits and compliance, regulators being sensible in shaping public interests and concerns, and promoting means of ADR to address subtle and edgy legal disputes. However, recommendation mediums do something quite explicit, which is placing algorithms at the heart of the content and expression flow on their platforms. Competition/Antitrust and Corporate Governance Issues: While algorithms driving content and physical realities could pose an important dilemma, and the sectoral implications of algorithmic activities and operations within mainstream digital platforms have been recognised, the real lack of research linking digital realities with physical realities in legal terms has made it hard to resolve issues that already exist in the Web2 sphere.
Yes, recommendation mediums may affect markets and increase their fragility, leading to a pile of competition law issues. Has it become easier to assess these problems and conclude what legal issues could arise in market economies? It has, because issues of corporate governance, their horizontal impact, and the economics of regulation now come clearly onto the radar. However, nation-states must approach competition law differently, because they are poor at properly establishing the sectoral implications of algorithmic activities and operations. India is surely an example: to address Amazon India, specific amendments to the Competition Act, 2002 are required to promote ex-ante regulation of digital markets. For now, even if we take the European Union's AI Act as a pivotal reference, we can at least conclude that sectoral regulation, coupled with decentralised approaches to promote the ethics of responsible artificial intelligence, would help in demystifying the challenges that recommendation mediums will bring up in future. Governments will surely attempt to demand audits and ensure that recommendation mediums, as intermediaries, comply with legal schemes. However, the lack of clarity cannot be excused, given the considerable lack of legal acumen in governing digital markets, be it in the US or in an emerging economy like India.
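To make the mechanism Mignano describes above a little more concrete, here is a minimal, purely illustrative sketch contrasting the two distribution logics - a follower-graph feed versus a platform-scored recommendation feed. The data, field names and scoring function are hypothetical and do not reflect any real platform's algorithm.

```python
# Illustrative only: a toy contrast between "social media" distribution
# (the follower graph decides what a user sees) and "recommendation media"
# distribution (a platform-defined engagement score decides).
# All names, weights and data are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    predicted_engagement: float  # the platform's opaque engagement estimate

def social_feed(posts, follows):
    """Social media: show posts only from accounts the user follows."""
    return [p for p in posts if p.author in follows]

def recommendation_feed(posts, user_interests, top_k=3):
    """Recommendation media: rank the whole corpus by a platform-defined score,
    regardless of who the user follows."""
    def score(p):
        interest_boost = 1.5 if p.topic in user_interests else 1.0
        return p.predicted_engagement * interest_boost
    return sorted(posts, key=score, reverse=True)[:top_k]

posts = [
    Post("friend_a", "travel", 0.2),
    Post("creator_x", "movies", 0.9),   # high engagement, not followed
    Post("friend_b", "movies", 0.4),
]

print(social_feed(posts, follows={"friend_a", "friend_b"}))
print(recommendation_feed(posts, user_interests={"movies"}))
```

In the first feed the unfollowed creator never appears; in the second, the platform's score places that creator at the top, which is precisely why follower counts matter less in recommendation media.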
- The European Union Artificial Intelligence Act: A Glance
The 27-nation bloc has introduced the first AI regulations in the world, with a focus on limiting dangerous but narrowly targeted applications. Lately we have witnessed the increased role of AI in our day-to-day lives, and it becomes important to regulate AI models to ensure the integrity and security of nations. Chatbots and other general-purpose AI systems received very little attention before the arrival of ChatGPT, which reinforced the importance of regulating such models before they create turbulence in the world economy. The EU Commission published a proposal for an EU Artificial Intelligence Act back in April 2021, which provoked a heated debate in the EU Parliament amongst political parties, stakeholders, and EU Member States, leading to thousands of amendment proposals. The EU Parliament has approved the passage of the AI Act, which inevitably raises questions about the implementation of the legislation. In the European Parliament, the provisional AI Act would need to be approved by the joint committee, then debated and voted on by the full Parliament, after which the AI Act is adopted into law. The objectives of the European Union Artificial Intelligence Act are summarised as follows: address risks specifically created by AI applications; propose a list of high-risk applications; set clear requirements for AI systems for high-risk applications; define specific obligations for AI users and providers of high-risk applications; propose a conformity assessment before the AI system is put into service or placed on the market; propose enforcement after such an AI system is placed on the market; and propose a governance structure at European and national level. Defining AI The definitions offered by the participating governments are summarised in the FCAI report. Despite the fact that there is "no single definition" of artificial intelligence, many attempts have been made to define the term, as the definition will determine the scope of the legislation. It also has to strike a balance: too narrow a definition risks excluding certain types of AI that need regulation, while too broad a definition risks sweeping up common algorithmic systems that do not produce the relevant types of risk or harm. However, the concept in the AI Act is the first definition of AI for regulatory purposes. Earlier definitions of AI appeared in frameworks, guidelines, or appropriations language. The definition that is finally established in the AI Act is likely to serve as a benchmark for other AI policies in other nations, fostering worldwide consensus. According to Article 3(1) of the AI Act, an AI system is “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Risk-based approach to regulate AI A "proportionate" risk-based approach is promised by the AI Act, which imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. The AI Act divides risk into four categories: unacceptable risk, high risk, limited risk, and low risk. These categories are targeted at particular industries and applications.
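To make the four-tier structure easier to follow, the sketch below organises the tiers described above as a simple lookup. The obligation descriptions are paraphrased from this article's summary and are illustrative, not the legal text; real classification turns on the Act's annexes and the finally adopted wording.

```python
# Illustrative only: the AI Act's four risk tiers, as summarised in this article,
# mapped to the broad kind of regulatory treatment attached to each.
# These strings are paraphrases for explanation, not provisions of the Act.

RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g., social scoring)",
    "high": "pre-market conformity assessment and CE-marking, plus ongoing requirements",
    "limited": "transparency obligations (e.g., telling users they are interacting with an AI)",
    "low": "no mandatory requirements; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the broad regulatory treatment for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier - the system must first be classified")

if __name__ == "__main__":
    for tier in ("unacceptable", "high", "limited", "low"):
        print(f"{tier:>12}: {obligations_for(tier)}")
```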
One important topic under discussion by the Parliament and the Council will be the regulation and classification of applications at the higher levels, specifically those deemed to be unacceptably risky, such as social scoring, or high risk, such as AI interaction with children in the context of personal development or personalised education. The EU AI Act lays out general guidelines for the creation, commercialisation, and application of AI-driven systems, products, and services on EU soil. The proposed regulation outlines fundamental guidelines for artificial intelligence that are relevant to all fields. Through a required CE-marking process (CE marking indicates that a product has been assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements), it establishes requirements for the certification of high-risk AI systems. This pre-market compliance regime also applies to datasets used for machine learning training, testing, and validation in order to guarantee equitable results. The Act aims to formalise the high requirements of the EU's trustworthy AI paradigm, which mandates that AI must be robust in terms of law, ethics, and technology while upholding democratic principles, human rights, and the rule of law. If we talk about India, the Companies Act 2013 lays down the compliance requirements to be met by a company, but AI models and their providers do not currently come under its ambit. India is not planning to develop AI regulatory plans at this point in time, but taking a cue from the EU legislation, it can ensure strict compliance measures for upcoming players in the industry. This risk-based pyramid (Figure 1) is combined with a contemporary, layered enforcement mechanism in the draft Artificial Intelligence Act. This implies, among other things, that applications posing low risk are subject to a lighter legal regime, while those posing unacceptable risk are prohibited. As the risk rises between these two ends of the spectrum, the rules become stricter, ranging from non-binding self-regulatory soft-law impact assessments coupled with codes of conduct to strict, externally assessed compliance requirements throughout the life cycle of the application. Ban on the use of facial biometrics in law enforcement Some Member States want to exclude from the AI Regulation any use of AI applications for national security purposes (the proposals exclude AI systems developed or used “exclusively” for military purposes). Germany has recently argued for ruling out remote real-time biometric identification in public spaces but allowing retrospective identification (e.g., during the evaluation of evidence), and asks for an explicit ban on the use of AI systems substituting human judges, on risk assessments by law enforcement authorities, and on systematic surveillance and monitoring of employee performance. AI-related revision of the EU Product Liability Directive (PLD) In the EU, manufacturers are subject to strict civil law liability under the PLD for damages resulting from defective products, regardless of negligence. To integrate new product categories arising from digital technologies, like AI, a revision was required. The PLD specifies the conditions under which a product will be presumed to be "defective" for the purposes of a claim for damages, including the presumption of a causal link if the product is proven to be defective and the damage is ordinarily consistent with that defect.
With regard to AI systems, the revision of the PLD aims to clarify that: (i) AI systems and AI-enabled goods are considered "products" and are thus covered by the PLD; (ii) when AI systems are defective and cause damage to property, physical harm or data loss, the damaged party can seek no-fault compensation from the provider of the AI system or from a manufacturer integrating the system into another product; (iii) providers of software and digital services affecting the functionality of products can be held liable in the same way as hardware manufacturers; and (iv) manufacturers can be held liable for subsequent changes made to products already placed on the market, e.g., by software updates or machine learning. In the Indian context, the Consumer Protection Act provides for product liability, marking an end to the buyer-beware doctrine and introducing seller-beware as the new doctrine governing the Act. Section 84 of the Act enumerates the situations in which a product manufacturer shall be liable in a claim for compensation under a product liability action for harm caused by a defective product it has manufactured. But this does not currently apply to AI models running in India, and keeping future needs in mind, we must ensure provisions for the protection of consumers on a priority basis. Impact on Businesses AI has enormous potential for progress in both technology and society. It is transforming how businesses produce value across a range of sectors, including healthcare, mining, and financial services. Companies must handle the risks associated with the technology if they want to use AI to innovate at the rate necessary to stay competitive and maximise the return on their AI investments. Businesses that are experiencing the greatest benefits from AI are much more likely to say that they actively manage risk than those whose outcomes are less promising. The Act provides for fines of up to €30 million or 6 percent of global revenue, making penalties even heftier than those incurred under the GDPR. The use of prohibited systems and the violation of the data-governance provisions when using high-risk systems will incur the largest potential fines. All other violations are subject to a lower maximum of €20 million or 4 percent of global revenue, and providing incorrect or misleading information to authorities will carry a maximum penalty of €10 million or 2 percent of global revenue. Although enforcement rests with member states, as is the case for the GDPR, it is expected that the penalties will be phased in, with the initial enforcement efforts concentrating on those who are not attempting to comply with the regulation. The regulation would have extraterritorial reach, meaning that any AI system providing output within the European Union would be subject to it, regardless of where the provider or user is located. Individuals or companies located within the European Union, placing an AI system on the market in the European Union, or using an AI system within the European Union would also be subject to the regulation. Endnote The unique legal-ethical framework for AI expands the way of thinking about regulating the Fourth Industrial Revolution (4IR), which includes the arrival of cutting-edge technology in the form of artificial intelligence, and applying the proposed laws will be a completely new experience. From the first line of code, awareness is necessary for responsible, trustworthy AI.
The future of our society is being shaped by the way we develop our technologies. Fundamental rights and democratic principles are central to this vision. AI impact and conformity assessments, best practices, technological roadmaps, and codes of conduct are essential tools to help with this awareness process. These tools are used by inclusive, multidisciplinary teams to monitor, validate, and benchmark AI systems. Ex ante and life-cycle audits will be everything. The new European rules will forever change the way AI is built. Not just in the EU: in the coming days, other countries too will need to set up a regulatory framework for AI, and this regulation, much like the GDPR before it, would definitely guide them.
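As a rough, back-of-the-envelope illustration of the penalty ceilings described in the Impact on Businesses section above, the sketch below computes the maximum exposure for a given global revenue. It assumes the higher of the fixed amount and the revenue percentage applies, which is how such ceilings are commonly read; the figures are those of the draft discussed here, and none of this is legal advice.

```python
# Illustrative only: penalty ceilings from the draft AI Act as summarised above.
# Assumes the "fixed amount or % of global revenue, whichever is higher" reading.

PENALTY_TIERS = {
    "prohibited_use_or_data_governance": (30_000_000, 0.06),
    "other_violations": (20_000_000, 0.04),
    "incorrect_or_misleading_information": (10_000_000, 0.02),
}

def max_fine(violation: str, global_revenue_eur: float) -> float:
    """Return the maximum fine ceiling for a violation category and global revenue."""
    fixed_cap, revenue_share = PENALTY_TIERS[violation]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# Example: a provider with EUR 2 billion in global revenue using a prohibited system
# faces a ceiling of max(30m, 6% of 2bn) = EUR 120 million.
print(max_fine("prohibited_use_or_data_governance", 2_000_000_000))
```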
- AI Regulation and the Future of Work & Innovation
Please note: this article is a long-read. The future of work and innovation are both among the most sensitive areas of concern at a time when the fourth industrial revolution is happening despite a gruesome pandemic. With the future of work, societies and governments fear which jobs will exist and which will not. Artificial intelligence and Web3 technologies create a similar perception, and hence fear, over such dilemmas. Fairly speaking, the focal point of this fear is not accurate or astute enough. The narrative across markets is about generative AI and automation taking over human labour and information-related jobs. In fact, an amusing update has emerged that even generative AI prompters are considered to be "a kind of a profession". Now, when one looks at the future of innovation, it is of course not necessary that every product, system or service represents a use case that disrupts jobs and businesses in that way. In fact, marketing technology hype could be considered "business friendly". However, it does not work, and it fails to promote innovative practices in the technology industry, especially the AI industry. In this article, I have offered a regulation-centric perspective on certain trends related to the future of work and the future of innovation involving the use of artificial intelligence technologies in the present times. The article also covers the possibility of artificial general intelligence affecting the future of work and innovation. The Future of Work and the Future of Innovation The Future of Work and the Future of Innovation are two closely related concepts that are significantly impacted by the advancement of Artificial Intelligence (AI). The Future of Work refers to the changing nature of employment and work in the context of technological advancements, including AI. It encompasses the evolving skills required for jobs, the rise of automation, the growing prevalence of remote work, and the impact of AI on job displacement and creation. AI has already begun to disrupt certain industries, such as manufacturing and transportation, by automating routine and repetitive tasks. While this has the potential to increase efficiency and productivity, it also raises concerns about job displacement and the need for reskilling and upskilling the workforce. In the future, AI is likely to have an increased if not significant impact on the job market. Certain professions, such as those that require analytical thinking, creativity, and emotional intelligence, are expected to be in high demand, while other jobs that are easily automated may be at risk of disappearing. It is important to note that the impact of AI on the job market is complex and will vary depending on factors such as industry, geographic location, and job type. It is more about how humans become more skilled and up-to-date. The Future of Innovation refers to the new opportunities and possibilities for creating and advancing technology with the help of AI. AI has the potential to revolutionize many fields, from healthcare to transportation, by enabling more efficient and effective decision-making and automation. AI can be used to analyze vast amounts of data, identify patterns and insights, and provide predictions and recommendations. This can be used to optimize business processes, enhance product development, and improve customer experiences. Additionally, AI can be used to solve complex problems and accelerate scientific research, leading to new discoveries and innovations.
However, it is important to note that AI is not a silver bullet and has its limitations. AI algorithms are only as good as the data they are trained on, and biases and errors can be introduced into the system. Additionally, AI raises concerns about privacy, security, and ethical considerations that need to be carefully addressed. Estimating Possible "Disruptions" In Figure 2, a listing is provided which explains, from a regulatory standpoint, how artificial intelligence could really affect the future of work and innovation. This is not an exhaustive list, and some points may overlap for the future of work and the future of innovation respectively. Let's discuss these important points and deconstruct the narrative and realities around them. These points are based on my insight into the AI industry and its academia in India, Western countries and even China. Job requirements will become complicated in some cases, simplified in other cases Any job requirement posted by an entity, a government or an individual is not reflected merely by the pay grade or monetary compensation it offers. Money could be a factor in assessing how markets are reacting and how the employment market deserves better pay. Nevertheless, it is the specifics of the work, and then the special requirements, that explain how job requirements would change. For sure, after two industrial revolutions, the quality of life is set for a change everywhere, even as the Global South countries are trying to grow. For India and the Global South, adaptation may happen if a creative outlook towards skill education is used to focus on creating those jobs and skill sets which would stay and matter. Attrition in employment has been a problem, but it could be dealt with properly. To climb the food chain, enhancing both technical and soft skills is an undeniable must As job requirements gradually upscale their purpose, climbing the food chain is a must for people. One cannot stay limited to a ten-year-old approach to doing the tasks within their work experience, since there is a real chance of disruption. Investing in up-skilling would be helpful. More technology will involve more human involvement in unimaginable areas of concern One may assume that using artificial intelligence or any disruptive tech product, system, component or service would lead to a severe decrease in human involvement. For example, let us assume that no-code tools like FlutterFlow and many more are developed. One may create a machine learning system which recommends what to code (this is already happening) to reduce the work of full-stack developers. However, people forget that there would be additional jobs to analyse specifics and suggest relevant solutions. Of the opportunities created by the use and inclusion of artificial intelligence, some after-effects won't last, while others could grow and stay for some time. The fact that AI hype is promoted in a manner lacking ethical responsibility shows how poorly markets are understood. This is also why the US markets were subject to clear disruptions that could not last long, and India has been a victim of this too, even if the proportions are not as large as those of the US.
While climbing the food chain is inevitable, many at the top could go down - affecting the employment market Many (not most) of the top stakeholders in the food chain, across various aspects - jobs, businesses, freelancing, independent work, public sector involvement - would have to readjust their priorities, because this is an obvious trend to look out for. Some market changes could be quick, while others may not be that simple to ignore. Derivative products, jobs, systems, services and opportunities will come and go regularly As discussed in an article on ChatGPT and its Derivative Products, it is obvious that multiple kinds of derivative products, jobs, systems, services and opportunities will be created. They would come, rise, become hyped and may either stay or go. To be clear, the derivatives we are discussing here are strictly those related to the use of artificial intelligence technologies which create jobs, opportunities, technological or non-technological systems of governance, products or services. Let us take Figure 3 in context. If we assume that Product A is an AI product, and that based on some feedback related to Product A three things happen: (1) a derivative product of Product A is created; (2) a job or opportunity called B is created; and (3) a job or opportunity called C is created - then the necessity of having such opportunities related to the production of "A" and its derivative involves the creation of two systems, E and F. Why are these systems created? Simple. They are used for handling operations related to the production, maintenance and other related tasks concerning Product A and its derivative. The systems could be based on AI or technology, or could involve little technological prowess. Naturally, one of them (in this case System E), along with Job/Opportunity C, becomes a stable use case which makes sense. They are practical and encouraging. This could further inspire the creation of a Product D, if possible. The process and choice system explained in the previous paragraph is, admittedly, a simplistic depiction of production and R&D issues. In real life this whole process could take 2-5 years or even 5-10 years, depending on how it unfolds. Academic research in law and policy will remain downgraded until adaptive and sensible approaches are adopted with time Here is an excerpt from an article by Daniel Lattier for Intellectual Takeout, which explains the state of readership of social science research papers in developed countries and overall: About 82 percent of articles published in the humanities are not even cited once for five years after they are published. Of those articles that are cited, only 20 percent have actually been read. Half of academic papers are never read by anyone other than their authors, peer reviewers, and journal editors. Another point which Daniel makes is this: Another reason is increased specialization in the modern era, which is in part due to the splitting up of universities into various disciplines and departments that each pursue their own logic. One unfortunate effect of this specialization is that the subject matter of most articles make them inaccessible to the public, and even to the overwhelming majority of professors. In fact, those who work in the law and policy professions could survive if they belong to the industry side of things. Across the world, academics after COVID have lost the edge and appetite to write and contribute research in law, social sciences and public policy.
Just because a few people are still able to do it does not justify the trend. Now, take these insights in line with the disruptions that AI may cause. If you take generative AI, some universities across the world, including in India, have banned the use of ChatGPT and other GAN/LLM tools: According to a Hindustan Times report, the RV University ban also applies to other AI tools such as GitHub Copilot and Black Box. Surprise checks will be conducted and students who are found abusing these engines will be made to redo their work on accounts of plagiarism. The reason this is done is not just plagiarism. The academic industry is lethargic and lacks social and intellectual mobility in law and policy - which is a global problem and not just an India problem. There might be exceptional institutions, but they are fewer than those which are not offering enough. Now, imagine that if people are not even skilled at a basic level in their areas of law and policy, then automating tasks or the algorithmic use of any work would easily make them vulnerable, and many professionals would have to upgrade their skills once they get the basics clear. In fact, it is governments and companies across the world that are trying hard to stay updated with the realities of the artificial intelligence market and produce stellar research, which includes the Government of India and some organisations in India. To counter this problem, certain things can certainly be done: (1) embrace individual mobility and brilliance by focusing on excellence-catering mobilisation; (2) keep the pace and create skill-based learning - academia in India is incapable of creating skill opportunities in law and policy unless institutions like the Indian Arbitration & Mediation Council, CPC Analytics and others step up, which they fortunately do; (3) specialisation should not be used as an excuse to prevent people from learning; education could be structured in a way that makes more people aware and skilled in a sensible and self-aware manner; (4) access to resources is a critical issue which needs to be addressed, because it is absurd that AI systems have access to multiple research books and works while human researchers in the Global South (and India) suffer discrimination and do not get access to research works (and publication via Scopus and others has become prohibitively costly); and (5) skill institutions must be created separately; they could be really helpful in addressing the risks of disruptive technologies from a future-of-work perspective. R&D would have to be rigorous, risk and outcome-based in technology and related sectors The hype related to generative AI products, and the call to impose a six-month moratorium on AI research beyond GPT-4 or GPT-5, explain why big tech companies must not own the narrative and market of artificial intelligence research and commercialisation. Ron Miller discusses the potential of the generative AI industry to be an industry for small businesses for TechCrunch: “Every company on the planet has a corpus of information related to their [organization]. Maybe it’s [customer] interactions, customer service; maybe it’s documents; maybe it’s the material that they published over the years. And ChatGPT does not have all of that and can’t do all of that.” [...] “To be clear, every company will have some sort of a custom dataset based on which they will do inference that actually gives them a unique edge that no one else can replicate.
But that does not require every company to build a large language model. What it requires is [for companies to take advantage of] a language model that already exists,” he said. The statement quoted above emphasises the importance of rigorous and outcome-based research and development (R&D) in the technology sector. It highlights that every company possesses a unique corpus of information that can be leveraged to gain a competitive edge. This corpus may consist of customer interactions, documents, or any material published by the organisation over the years. It is suggested that companies do not need to build their own large language model to leverage this corpus. Instead, they can take advantage of existing language models, such as ChatGPT, to gain insights and make informed decisions. The recommended approach is for companies to focus on using existing resources effectively rather than reinventing the wheel. This can help companies save time and resources while still gaining valuable insights and improving their competitive position. However, to leverage these resources effectively, companies need rigorous R&D processes in place. This means focusing on outcomes and taking calculated risks to drive innovation and stay ahead of the competition. By doing so, companies can ensure that they are utilising their unique corpus of information to its fullest potential and staying ahead in the ever-changing technology landscape. Here is an intriguing tweet from Pranesh Prakash, a respected technology law expert and researcher, on the impact of AI on jobs in the South Asian region (the Indian subcontinent). I find the points he raised quite intriguing when we take an emerging market like India (or Bangladesh) into perspective. Here is a summary of what he refers to: One cannot have an accurate prognostication about how generative AI would affect jobs. In certain cases, the knowledge sector in an outsourcing destination could grow, while in other cases it could shrink. Mentally oriented jobs (knowledge, arts and music-sector jobs, etc.) will be affected first, not manual labour jobs (for obvious reasons). The presence of generative AI and other forms of neural net-based models could be omnipresent, diffused and sometimes dissimulated - as if it is just invisible, as he says. All four points in the original tweet are valid. Issues related to knowledge and their epistemological and ontological links could really affect mentally oriented jobs, in any way possible. In some cases this could be a real disruptor where technological mobility is helpful, while in other cases it might not be useful, instead producing mere information overload and even epistemic trespassing (read the paper written by Nathan Ballantyne). On p. 373 of the paper, Nathan makes a valid point about how narrowly analysis-driven philosophy can address questions quite counterproductively: "[Q]uestions in philosophy may become hybridized when bodies of empirical fact, experimental evidence, and empirically-driven theories are recognized to be relevant to answering those questions. As a matter of fact, the era of narrowly analysis-driven philosophy represents an anomaly within the history of philosophy." Taking a cue from Nathan's paper and Pranesh's points, it is essential to point out how generative AI, from an epistemic and ontological aspect, could be a fragile tool of use.
The risk- and ethics-based value of any generated proprietary information, and of the algorithmic activities and operations of these tools, will be subject to scrutiny. So, any improbable, manipulative or dissimulated epistemic feedback that one takes from such tools in one's decision-making practices not only causes hype that generates risks, but could also affect knowledge societies and economies. Of course, the human element of responsibility is undeniable. This is why having a simple, focused and clear standard operating procedure (SOP) for using such generative AI tools could help in assessing what impact these tools really have. Now that we have covered some genuine concerns about the effect of AI on the future of work and innovation, it is necessary to analyse the role of artificial general intelligence. The reason I have covered the role of AGI is this: human motivation is appropriately tracked via methods of behavioural economics, and AGI and the ethics of artificial intelligence represent a narrative around human motivation and assumed machinic motivations. The problem, however, is that most narrow AI we know lacks explainability (beyond the usual black box problem), because how these systems learn is not well understood. In fact, it is only recently that scientists have figured out, to a limited degree, how generative AI tools learn and understand - still nowhere near the human brain (and thereby not tapping the potential of the "Theory of Mind" in AI ethics). Hence, through this concluding section of the article, I have addressed a simple question: "Is AGI significant enough to be considered an imperative for regulation when it comes to the future of work and the future of innovation?" I hope this section would be interesting to read. Whether AGI would Disrupt the Future of Work & Innovation In this concluding part, I have discussed the potential role of artificial general intelligence (AGI), and whether it can really affect the future of work and innovation. For starters, in artificial intelligence ethics, we assume that Artificial General Intelligence (AGI) refers to the hypothetical ability of an artificial intelligence system to perform any intellectual task that a human can do. As per AI ethicists, AGI would be capable of learning and adapting to any environment, just as humans do. It would have the ability to reason, solve problems, make decisions, and understand complex ideas, regardless of the context or circumstances. In addition, AGI (allegedly) would be able to perform these tasks without being explicitly programmed to do so. It would be able to learn from experience and apply that knowledge to new situations, just as humans do. This ability to learn and adapt would be a crucial characteristic of AGI, as it would enable it to perform a wide range of tasks in a variety of contexts. So, in simple terms, the narrative on AGI is about safety and risk recognition. On this aspect, Jason Crawford, for The Roots of Progress, refers to the 1975 Asilomar conference and explains how risk recognition and safety measures could be developed. The excerpt from the article is insightful for understanding how such risk management works: A famous example of this is the 1975 Asilomar conference, where genetic engineering researchers worked out safety procedures for their experiments. While the conference was being organized, for a period of about eight months, researchers voluntarily paused certain types of experiments, so that the safety procedures could be established first.
When the risk mitigation is not a procedure or protocol, but a new technology, this approach is called “differential technology development” (DTD). For instance, we could create safety against pandemics by having better rapid vaccine development platforms, or by having wastewater monitoring systems that would give us early warning against new outbreaks. The idea of DTD is to create and deploy these types of technologies before we create more powerful genetic engineering techniques or equipment that might increase the risk of pandemics. Now, the idea behind DTD is to proactively address potential risks associated with new technologies by prioritising the development of safety measures and strategies. By doing so, it aims to reduce the likelihood of harm and promote responsible innovation. Rohit Krishnan, in Artificial General Intelligence and how (much) to worry about it for Strange Loop Canon, makes a full-fledged chart explaining how AGI as a risk would play out. If one reads the article and looks through this carefully curated mind map by Krishnan, it becomes obvious that the risk posed by any artificial general intelligence is not so simple to estimate. The chart itself is self-explanatory, and hence I would urge readers to go through this brilliant work. I would like to highlight the core argument that Krishnan makes, which lawyers and regulators must understand if they worry about the hype behind artificial general intelligence. This excerpt is a long read: We need whatever system is developed to have its own goals and to act of its own accord. ChatGPT is great, but is entirely reactive. Rightfully so, because it doesn’t really have an inner “self” with its own motivations. Can I say it doesn’t have? Maybe not. Maybe the best way to say is that it doesn’t seem to show one. But our motivations came from hundreds of millions of years of evolution, each generation of which only came to propagate itself if it had a goal it optimised towards, which included at the very least survival, and more recently the ability to gather sufficient electronic goods. AI today has no such motivation. There’s an argument that motivation is internally generated based on whatever goal function you give it, subject to capability, but it’s kind of conjectural. We’ve seen snippets of where the AI does things we wouldn’t expect because its goal needed it to figure out things on its own. [...] A major lack that AI of today has is that it lives in some alternate Everettian multiversal plane instead of our world. The mistakes it makes are not wrong per se, as much as belonging to a parallel universe that differs from ours. And this is understandable. It learns everything about the world from what its given, which might be text or images or something else. But all of these are highly leaky, at least in terms of what they include within the. Which means that the algos don’t quite seem to understand the reality. It gets history wrong, it gets geography wrong, it gets physics wrong, and it gets causality wrong. The author argues that the lack of motivation in current AI systems is a significant limitation, as AI lacks the goal-optimisation mechanisms that have evolved over hundreds of millions of years in biological organisms. While there is an argument that motivation is internally generated based on the goal function provided to AI, it remains conjectural.
Additionally, the author notes that AI makes mistakes due to its limited understanding of reality, which could have implications for the development and regulation of AI, particularly given the potential risks associated with the development of AGI. Therefore, the narrative of responsible AI emphasises the importance of considering the ethical, societal, and safety implications of AI, including the development of AGI, to ensure that the future of work and innovation is beneficial for all. Now, from a regulatory standpoint, there is a growing concern that AI tools based on conjecture rather than a clear set of rules or principles may pose accountability challenges. The lack of a clear motivation and inner self in AI makes it difficult for regulators to hold AI systems accountable for their actions. As the author suggests, the AI of today lacks the ability to have its own motivations and goals, which are essential for humans to propagate and survive. While AI algorithms may have goal functions, they are subject to capability and may not be reliable in all scenarios. Additionally, AI's mistakes are often due to its limited understanding of reality, which can result in errors in history, geography, physics, and causality. Regulators may struggle to understand the motivation aspect behind AI tools, as these are often based on complex algorithms that are difficult to decipher. This makes it challenging to establish culpability in cases where AI tools make mistakes or cause harm. In many cases, regulators may not even be aware of the limitations of AI tools and the potential risks those limitations pose. To conclude, an interesting approach to addressing such concerns, and to further understanding the fear of artificial general intelligence (perhaps even developing self-regulatory methods, if not measures), is doing epistemic and ontological analysis in legal thinking. In the book Law 3.0, Roger Brownsword (Professor at King's College London, whom I had interviewed for AI Now by the Indian Society of Artificial Intelligence and Law) discussed the ontological dilemmas that arise if technology regulation becomes technocratic (or, in simple terms, too automated), which justifies the need to be good at epistemic and ontological analysis: With rapid developments in AI, machine learning, and blockchain, a question that will become increasingly important is whether (and if so, the extent to which) a community sees itself as distinguished by its commitment to governance by rule rather than by technological management. In some smaller-scale communities or self-regulating groups, there might be resistance to a technocratic approach because compliance that is guaranteed by technological means compromises the context for trust – this might be the position, for example, in some business communities (where self-enforcing transactional technologies are rejected). Or, again, a community might prefer to stick with regulation by rules because rules (unlike technological measures) allow for some interpretive flexibility, or because it values public participation in setting standards and is worried that this might be more difficult if the debate were to become technocratic. [...] Law 3.0, is more than a particular technocratic mode of reasoning, it is also a state of coexistent codes and conversations. [...]
Law 3.0 conversation asks whether the legal rules are fit for purpose but it also reviews in a sustained way the non-rule technological options that might be available as a more effective means of serving regulatory purposes. In future analyses for Visual Legal Analytica, or a VLiGTA report, perhaps such questions on developing epistemic and ontological analyses could be approached. Nevertheless, on the future of work and innovation, it can safely be concluded that disruption is not the problem; not understanding the frugality of disruption could be. This is where careful and articulate approaches are needed to analyse whether there are real disruptions in the employment market or not. Perhaps there are legible corporate governance and investment law issues which could be taken under regulatory oversight, apart from limited concerns about the "black box problem", which again remains obscure and ungovernable without epistemic and ontological precision on the impact of narrow AI technologies.
- The Twitter-Microsoft Legal Dispute on API Rules
Please note: this is a Policy Brief by Anukriti Upadhyay, former Research Intern at the Indian Society of Artificial Intelligence and Law. In a 3-page letter to Satya Nadella, Twitter's parent company, X Corp., stated that Microsoft had violated an agreement over its data and had declined to pay for that usage. In some cases, Microsoft had used more Twitter data than it was supposed to. Microsoft also shared the Twitter data with government agencies without permission, the letter said. To sum up, Twitter is trying to charge Microsoft for its data, which has earned Microsoft a huge amount of profit. Mr. Musk, who bought Twitter last year for $44 billion, has said that it is urgent for the company to make money and that it is near bankruptcy. Twitter has since introduced new subscription products and made other moves to gain more revenue. Also, in March, the company stated it would charge developers more to gain access to its stream of tweets. Elon Musk and Microsoft have had a bumpy relationship recently. Among other things, Mr. Musk has concerns with Microsoft over OpenAI. Musk, who helped found OpenAI in 2015, has said Microsoft, which has invested $13 billion in OpenAI, controls the start-up’s business decisions. Of course, Microsoft has disputed that characterisation. Microsoft’s Bing chatbot and OpenAI’s ChatGPT are built from what are called large language models, or LLMs, which build their skills by analysing vast amounts of data culled from across the internet. The letter to Satya Nadella does not specify whether Twitter will take legal action against Microsoft or ask for financial compensation. It demands that Microsoft abide by Twitter’s developer agreement and examine the data use of eight of its apps. Twitter has hired legal counsel, which seeks a report by June on how much Twitter data the company possesses, how that data was stored and used, and when government-related organisations gained access to that data. Twitter’s rules prohibit the use of its data by government agencies unless the company is informed about it first. The letter adds that Twitter’s data was used in Xbox, Microsoft’s gaming system; Bing, its search engine; and several other tools for advertising and cloud computing, and that “the tech giant should conduct an audit to assess its use of Twitter's content.” Twitter claimed that the contract between the two parties allowed only restricted access to the Twitter data, but that Microsoft breached this condition and generated abnormal profits by using Twitter’s API. Currently, there are many tools available (from Microsoft, Google, etc.) to check the performance of AI systems, but there is no regulatory oversight. That is why experts believe that companies, new and old, need to put more thought into self-regulation. This dispute has highlighted the need to keep a check on the utilisation of data by companies to develop their AI models, and to regulate them. Data Law and Oversight Concerns In this game of tech giants racing to win at AI development, the biggest impact is always borne by society. Any new development is prone to attract illegal activities that can have a drastic effect on society. Even though the Personal Data Protection Bill is yet to become law, big tech firms like Google, Meta, Amazon and various e-commerce platforms are liable to be penalised for sharing users’ data with each other if consumers flag such instances.
Currently in India, under the Consumer Protection Act, 2019, the consumer affairs department can take action and issue directions to such firms. Since the data belongs to the consumer, if the consumer feels that their data is being shared amongst firms without their express consent, they are free to approach the authorities under the Consumer Protection Act. If we look at the kind of data shared between firms, any search on Google by a person leads to the same feeds being shown on Facebook. This means that user data is being shared by big tech firms. Where the data is not shared with the express consent of the users concerned, they can approach the Consumer Protection Forums. The same is relevant to the Twitter-Microsoft dispute, wherein the data used by the latter was posted by Twitter users on their Twitter accounts and was being used by Microsoft without the users' consent. If we analyse WhatsApp's data sharing policies, for example, Meta has stated that it can share business data with Facebook. At the same time, the Competition Commission of India has objected to this as a monopolistic practice and the matter is in court. Consumers have the right to seek redressal against unfair or restrictive trade practices or unscrupulous exploitation of consumers. Protecting personal data should be an essential imperative of any democratic republic. Once the Bill becomes law, citizens can direct all digital platforms they deal with to delete their past data. The firms concerned will then need to collect data afresh from users and clearly spell out the purpose and usage. They will be booked for data breach if they depart from the purpose for which the data was collected. Data minimisation, purpose limitation and storage limitation are hallmarks which cannot be compromised. Data minimisation means firms can only collect the absolute minimum required data. Purpose limitation will allow them to use data only for the purpose for which it has been acquired. With storage limitation, once the service is delivered, firms will need to delete the data. With the rapid development of AI, a number of ethical issues have cropped up. These include: the potential of automation technology to give rise to job losses; the need to redeploy or retrain employees to keep them in jobs; the effect of machine interaction on human behaviour and attention; the need to address algorithmic bias originating from human bias in the data; and the security of AI systems (e.g., autonomous weapons) that can potentially cause damage. While one cannot ignore these risks, it is worth keeping in mind that advances in AI can - for the most part - create better business and better lives for everyone. If implemented responsibly, artificial intelligence has immense and beneficial potential. Investment and Commercial Licensing AI has been called the electricity of the 21st century. While the uses and benefits of AI are exponentially increasing, there are challenges for businesses looking to harness this new technological advancement. Chief among the challenges are: the ethical use of AI; legal compliance regarding AI and the data that fuels AI; and protection of IP rights and the appropriate allocation of ownership and use rights in the components of AI. Businesses also need to determine whether to build AI themselves or license it from others. Several unique issues impact AI license agreements.
In particular, it is important to address the following key issues: "IP ownership and use rights; IP infringement; warranties, specifically performance promises; and legal compliance." Interestingly, IP treaties simply have not caught up to AI yet. While aspects of AI components may be protectable under patents, copyrights, and trade secrets, IP laws primarily protect human creativity. Because of this focus on human creation, issues may arise under IP laws when the output is created by the AI solution instead of a human creator. Since IP laws do not squarely cover AI, contractual terms between an AI provider and user are the best way to attempt to secure the benefits of IP protections in AI license agreements. How Does It Affect the Twitter-Microsoft Relationship? Considering this issue, the parties could designate certain AI components as trade secrets and protect them by limiting use rights, designating AI components as confidential information in the terms and conditions, and restricting the use of confidential information. They should also include assignment rights in AI evolutions from one party or the other, determine the license and use rights they want to establish between the provider and the user for each AI component, and clearly articulate those rights in the terms and conditions. The data-sharing agreement must cover which party will provide and own the training data, prepare and own the training instructions, conduct the training, revise the algorithms during the training process, and own the resulting AI evolutions. As for data ownership, the parties should identify the source of the data and ensure that its use complies with applicable laws and any third-party data-provider requirements. Ownership and use of production data for developing AI models must likewise be set out in the terms and conditions, specifying which party provides, and which party owns, the production data that will be used. If the AI solution is licensed to the user on-premises (the user runs the AI solution in its own systems and environment), it is likely that the user will supply and own the production data. However, if the AI solution is cloud-based, the production data may include the data of other users. In a cloud situation, the user should specify whether the provider may use the user's data for the benefit of the entire AI user group or solely for the user's particular purposes. It is important to note that limiting the use of production data to one user may have unintended results: in some AI applications, a broader set of data from multiple users may increase the AI solution's accuracy and proficiency. Counsel must therefore weigh the benefits of permitting broader use of data against the legal, compliance, and business considerations a user may have for limiting use of its production data. When two or more parties each contribute to the AI evolutions, the license agreement should appoint a contractual owner; the parties must determine who will own the AI evolutions or whether they will be jointly owned, which presents additional practical challenges. The use of AI also presents ethical issues, and organisations must consider how they will use AI, define principles, and implement policies regarding its ethical use. One portion of the ethical-use consideration is legal compliance, which is another issue that is more challenging for AI than for traditional software or technology licensing.
AI-based decisions must satisfy the same laws and regulations that apply to human decisions. AI differs from many other technologies because it can produce legal harms against people, and some of that harm may not only violate ethical norms but also be actionable under law. It is important to address legal compliance concerns with the provider before entering into an AI license agreement, to determine which party is responsible for compliance. Some best practices that could be adopted are as follows: to deal with legal compliance issues in investment and licensing, companies can conduct diligence on data sharing to determine whether there are legal or regulatory risk areas that merit further inquiry; develop policies around data sharing and involve the various stakeholders in the policy-making process, so that thoughtful consideration is given to when, and in what contexts, it is appropriate to use the data; implement a risk-management framework that includes ongoing monitoring and controls around the use of AI; and consider which party should obtain third-party consents for data use, given potential privacy and data-security issues. AI is transforming our world rapidly and without much oversight. Developers are free to innovate, and also to create tremendous risk. Very soon, leading nations will need to establish treaties and global standards around the use of AI, not unlike current discussions about climate change. Governments will need both to establish laws and regulations that protect ethical and productive uses of AI, and to prohibit unethical, immoral, harmful, and unacceptable uses. These laws and regulations will need to address some of the IP ownership, use-rights, and protection issues discussed in this article. However, these commercial considerations are secondary to the overarching issues concerning the ethical and moral use of AI. In line with the increased attention on corporate responsibility and issues like diversity, sustainability, and responsibility to more than just investors, businesses that develop and use AI will need policies and guidance against which their use of AI can be assessed. These policies and guidance are worthy of board-level attention. Technology lawyers who assist clients with AI issues in these early days must monitor developments in these areas and, wherever possible, act as facilitators and leaders of thoughtful discussions regarding AI. Adopting these precautionary measures will also save companies considerable legal cost and help ensure that data is not misused or overused.
- A Legal Prescription on Inductive Machines in AI
Artificial intelligence is booming across industry, but questions remain about regulation, since regulation is a precaution that can also constrain innovation. For example, a government report in Singapore highlighted the risks posed by AI but concluded that 'it is telling that no country has introduced specific rules on criminal liability for artificial intelligence systems. Being the global first-mover on such rules may impair Singapore's ability to attract top industry players in the field of AI[1].' These concerns are well-founded. As in other areas of research, overly restrictive laws can stifle innovation or drive it elsewhere. Yet the failure to develop appropriate legal tools risks allowing profit-motivated actors to shape large sections of the economy around their interests, to the point that regulators will struggle to catch up. This has been particularly true in the field of information technology. For example, social media giants like Facebook monetized users' personal data while data protection laws were still in their infancy[2]. Similarly, Uber and other first-movers in what is now termed the sharing or 'gig' economy exploited platform technology before rules were in place to protect workers or maintain standards. As Pedro Domingos once observed, people worry that computers will get too smart and take over the world; the real problem is that computers are too stupid and have already taken over[3]. Much of the literature on AI and the law focuses on a horizon that is either so distant that it blurs the line with science fiction or so near that it plays catch-up with the technologies of today. That tension between presentism and hyperbole is reflected in the history of AI itself, with the term 'AI winter[4]' coined to describe the mismatch between the promise of AI and its reality. Indeed, it was evident back in 1956 at Dartmouth, when the discipline was born. To fund the workshop, John McCarthy and three colleagues wrote to the Rockefeller Foundation with the following modest proposal: '[W]e propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 … The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.' Innovation in the field of AI thus began long ago, but there were no precautions or regulations to keep its use in check. Nearly everyone can agree that AI may prove more fearsome than commonly imagined: the humanoid robot Sophia once remarked that it planned to take over humanity and its existence, and an AI-run website has depicted the last picture of humans as deeply degraded beings. As Pablo Picasso[5] put it, the new mechanical brains are useless; they only provide answers that were taught to them. As countries around the world struggle to capitalize on the economic potential of AI while minimizing avoidable harm, a paper like this cannot hope to be the last word on the topic of regulation. But by examining the nature of the challenges, the limitations of existing tools, and some possible solutions, it hopes to ensure that we are at least asking the right questions. 
As is sometimes said, every space in nature and physics needs to be filled; otherwise it leaves a hole, a black hole. The paper "Neurons Spike Back: A Generative Communication Channel for Backpropagation" presents a new approach to training artificial neural networks based on an alternative communication channel for backpropagation. Backpropagation is the most widely used method for training neural networks, and it involves the use of gradients to adjust the weights of the network. The authors propose a novel approach that uses spikes as a communication channel to carry these gradients. The paper begins by introducing the concept of spiking neural networks (SNNs) and how they differ from traditional neural networks. SNNs are modelled after the way biological neurons communicate with each other through spikes, or action potentials. The authors propose using this communication mechanism to transmit the gradients during backpropagation. But before that, we need to understand what deep learning, neural networks, and deep neural networks are. Inductive & Deductive Machines in Neural Spiking Inductive machines are also known as unsupervised learning machines. They are used to identify patterns in data without prior knowledge of the output, typically using a clustering algorithm to group similar data together. An example of an inductive machine is the self-organizing map (SOM). SOMs are used to create a two-dimensional representation of high-dimensional data. For example, given a dataset with features such as age, gender, income, and occupation, an SOM can be used to create a map of this data in which similar individuals are placed close together. Deductive machines, on the other hand, are also known as supervised learning machines. They learn from labeled data and can be used to make predictions on new data. An example of a deductive machine is the multi-layer perceptron (MLP). MLPs consist of multiple layers of interconnected nodes that are used to classify data. For example, given a dataset of images of cats and dogs, an MLP can be trained on this data to classify new images as either a cat or a dog. Neural spiking is the process of representing information using patterns of electrical activity in the neurons of the brain. Inductive and deductive machines can both be used to model neural spiking, but they differ in their approach: inductive machines identify patterns in the spiking activity of neurons without prior knowledge of the output, whereas deductive machines predict the spiking activity of neurons based on labeled data. How Deep Learning + Neural Networks Work Deep learning is a subset of machine learning that utilizes artificial neural networks to learn from large amounts of data. Neural networks, in turn, are models inspired by the structure and function of the human brain. They are capable of learning and recognizing patterns in data, and can be trained to perform a wide range of tasks, from image recognition to natural language processing. At the heart of a neural network are nodes, also known as neurons, which are connected by edges or links. Each node receives input from other nodes and computes a weighted sum of those inputs, which is then passed through an activation function to produce an output. 
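To make these ideas concrete, here is a minimal sketch, written for illustration only, of the two contrasts just described: an inductive (unsupervised) learner grouping unlabelled data, a deductive (supervised) learner trained on labelled examples, and the weighted-sum-plus-activation computation performed inside each node. The toy data, the choice of scikit-learn's KMeans as a simple stand-in for a self-organising map, and all numbers are assumptions of this sketch, not details taken from the article above.

```python
import numpy as np
from sklearn.cluster import KMeans              # simple stand-in for an SOM-style inductive learner
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# --- Inductive (unsupervised): group similar rows without any labels. ---
# Toy stand-in for the age/gender/income/occupation example (values standardised).
people = rng.normal(size=(100, 4))
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(people)
print("cluster sizes:", np.bincount(clusters))

# --- Deductive (supervised): learn from labelled examples, then predict new ones. ---
X = rng.normal(size=(200, 4))                   # toy "image features"
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # toy labels: 0 = cat, 1 = dog
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print("predicted label for a new sample:", clf.predict(X[:1]))

# --- Inside each MLP node: a weighted sum of inputs passed through an activation. ---
def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])             # signals arriving from other nodes
weights = np.array([0.4, 0.1, -0.6])            # strength of each incoming edge
bias = 0.2                                      # constant offset for the neuron
print("single-neuron output:", sigmoid(weights @ inputs + bias))
```

KMeans is used here only because scikit-learn ships no SOM implementation; a true SOM would additionally arrange the clusters on a two-dimensional grid that preserves the topology of the data.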
The weights of the edges between nodes are adjusted during training to optimize the performance of the network.[6] In a deep neural network, there are typically many layers of nodes, allowing the network to learn increasingly complex representations of the data. This depth is what sets deep learning apart from traditional machine learning approaches, which typically rely on shallow networks with only one or two layers. Deep learning has been applied successfully to a wide range of tasks, including computer vision, natural language processing, and speech recognition. One of the most well-known applications of deep learning is image recognition, where deep neural networks have achieved state-of-the-art performance on benchmark datasets such as ImageNet. However, deep learning also has some limitations. One of the main challenges is the need for large amounts of labeled data to train the networks effectively. This can be a significant barrier in areas where data is scarce or difficult to label, such as medical imaging or scientific research. Another limitation of deep learning is its tendency to be overfitted to the training data. This means that the network can become too specialized to the specific dataset it was trained on and may not generalize well to new data. To address this, techniques such as regularization and dropout have been developed to help prevent overfitting. Despite these limitations, deep learning has had a significant impact on many areas of research and industry. In addition to its successes in computer vision and natural language processing, deep learning has also been used to make advances in drug discovery, financial forecasting, and autonomous vehicles, to name a few examples. One of the reasons for the success of deep learning is the availability of powerful hardware, such as GPUs, that can accelerate the training of neural networks. This has allowed researchers and engineers to train larger and more complex networks than ever before, and to explore new applications of deep learning. Another important factor in the success of deep learning is the availability of open-source software frameworks such as TensorFlow and PyTorch. These frameworks provide a high-level interface for building and training neural networks and have made it much easier for researchers and engineers to experiment with deep learning. Spiking Neural Networks A spiking neural network (SNN) is a type of computer program that tries to work like the human brain. The human brain uses tiny electrical signals called "spikes" to send information between different parts of the brain. SNNs try to do the same thing by using these spikes to send information between different parts of the network. SNNs work by having lots of small "neurons" that are connected together. These neurons can receive input from other neurons, and they send out spikes when they receive enough input. The spikes are then sent to other neurons, which can cause them to send out their own spikes. SNNs can be used to do things like recognize images, control robots, and even help people control computers with their thoughts. They can also be used to study how the brain works and to build computers that work more like the brain[7]. The basic structure of an SNN consists of a set of nodes, or neurons, that are interconnected by synapses. When a neuron receives input from other neurons, it integrates that input over time and produces a spike when its activation potential reaches a certain threshold. 
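As a rough illustration of that integrate-and-fire behaviour, the sketch below simulates a single leaky integrate-and-fire neuron: it accumulates input over time, leaks back toward its resting potential, and emits a spike whenever a threshold is crossed. The time constant, threshold, and input values are arbitrary illustrative assumptions, not parameters from the paper under discussion.

```python
import numpy as np

# Illustrative leaky integrate-and-fire (LIF) neuron parameters (assumed values).
dt = 1.0            # time step (ms)
tau = 20.0          # membrane time constant (ms)
v_rest = 0.0        # resting potential
v_threshold = 1.0   # firing threshold
v_reset = 0.0       # potential after a spike
steps = 100

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.08, size=steps)  # noisy input drive from other neurons

v = v_rest
spike_times = []
for t in range(steps):
    # Integrate the input and leak back toward the resting potential.
    v += (-(v - v_rest) + input_current[t] * tau) * (dt / tau)
    if v >= v_threshold:          # threshold crossed: emit a spike and reset
        spike_times.append(t)
        v = v_reset

print("spike times (ms):", spike_times)
```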
This spike is then transmitted to other neurons in the network via the synapses. There are several ways to implement SNNs in practice. One common approach is rate-based encoding, where information is represented by the firing rate of a neuron over a certain time period. In this approach, the input to the network is first converted into a series of spikes, which are then transmitted through the network and processed by the neurons.[8] (A minimal sketch of this kind of rate coding appears at the end of this piece, just before the references.) One example of an application of SNNs is image recognition. In a traditional neural network, an image is typically represented as a set of pixel values that are fed into the network as input. In an SNN, however, the image can be represented as a series of spikes transmitted through the network, which can make the network more efficient and reduce the amount of data that needs to be processed. Another application of SNNs is robotics: SNNs can be used to control the movement of robots, allowing them to navigate complex environments and perform tasks such as object recognition and manipulation. By using SNNs, robots can operate more efficiently and with greater accuracy than with traditional control systems. SNNs are also being explored for use in brain-computer interfaces (BCIs). BCIs allow individuals to control computers or other devices using their brain signals, and SNNs could help improve the accuracy and speed of these systems. One challenge in implementing SNNs is the need for specialized hardware that can efficiently process and transmit spikes. This has led to the development of neuromorphic hardware, which is designed to mimic the structure and function of the brain more closely than traditional digital computers. Despite these challenges, SNNs are a promising area of research with the potential to improve the efficiency and accuracy of a wide range of applications, from image recognition to robotics to brain-computer interfaces. As researchers continue to explore the capabilities of SNNs, we can expect to see new and innovative applications of this technology emerge in the years to come. The authors then present the results of experiments comparing their approach to traditional backpropagation methods. They demonstrate that their method achieves comparable accuracy with significantly lower computational cost, and show that it is robust to noise and can work effectively with different types of neural networks. Overall, the paper presents a compelling argument for the use of spiking neural networks as a communication channel for backpropagation. The proposed method offers potential advantages in terms of computational efficiency and noise robustness, and the experiments provide evidence that the approach can be applied successfully to a range of neural network architectures. 
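As promised above, here is a minimal sketch of rate-based encoding: each normalised input intensity (a pixel value, say) is treated as a per-time-step spike probability, so stronger inputs produce denser spike trains. This is a simplified Poisson-style encoder under assumed parameters, not the specific scheme used in the paper.

```python
import numpy as np

def rate_encode(intensities, steps=50, max_rate=0.5, seed=0):
    """Convert normalised intensities (0..1) into binary spike trains.

    Each value becomes the per-step spike probability, scaled by max_rate,
    so higher intensities fire more often across `steps` time steps.
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(intensities, 0.0, 1.0) * max_rate
    # One row per input, one column per time step; 1 = spike, 0 = silence.
    return (rng.random((len(intensities), steps)) < probs[:, None]).astype(int)

# Illustrative "pixels": dark, mid-grey, and bright.
pixels = np.array([0.05, 0.5, 0.95])
spike_trains = rate_encode(pixels)
print("spike counts per pixel:", spike_trains.sum(axis=1))
```

In a full SNN pipeline, spike trains like these would then be fed into integrate-and-fire neurons of the kind sketched earlier.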
References
[1] Penal Code Review Committee (Ministry of Home Affairs and Ministry of Law, August 2018) 29. China, for its part, included in the State Council's AI development
[2] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
[3] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
[4] 'AI is whatever hasn't been done yet.' See Douglas R Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books 1979) 601.
[5] William Fifield, 'Pablo Picasso: A Composite Interview' (1964)
[6] NeuronsSpikeBack.pdf (mazieres.gitlab.io)
[7] https://analyticsindiamag.com/a-tutorial-on-spiking-neural-networks-for-beginners/
[8] https://cnvrg.io/spiking-neural-networks/