Search Results
- AI Seoul Summit 2024: Decoding the International Scientific Report on AI Safety
The AI Seoul Summit on AI Safety, held in South Korea in 2024, released a comprehensive international scientific report on AI safety. This report stands out from the myriad of AI policy and technology reports due to its depth and actionable insights. Here, we break down the key points from the report to understand the risks and challenges associated with general-purpose AI systems.

1. The Risk Surface of General-Purpose AI

"The risk surface of a technology consists of all the ways it can cause harm through accidents or malicious use. The more general-purpose a technology is, the more extensive its risk exposure is expected to be. General-purpose AI models can be fine-tuned and applied in numerous application domains and used by a wide variety of users [...], leading to extremely broad risk surfaces and exposure, challenging effective risk management."

General-purpose AI models, due to their versatility, have a broad risk surface: they can be applied across many domains, increasing the potential for both accidental and malicious harm. Managing these risks effectively is a significant challenge given how extensive the models' exposure is.
- TESCREAL and AI-Related Risks
TESCREAL serves as a lens through which we can examine the motivations and potential implications of cutting-edge technological developments, particularly in the field of artificial intelligence (AI). As these ideologies gain traction among tech leaders and innovators, they are increasingly shaping the trajectory of AI research and development. This insight brief explores the potential risks and challenges associated with the TESCREAL framework, focusing on anticompetitive concerns, the impact on skill estimation and workforce dynamics, and the need for sensitisation measures. By understanding these issues, we can better prepare for the societal and economic changes and risks that both advanced and substandard AI technologies may bring. It is crucial to consider not only the promises but also the pitfalls of the hype around rapid advancement. This brief aims to provide a balanced perspective on the TESCREAL ideologies and their intersection with AI development, offering insights into proactive measures that can be taken before formal regulations are implemented.

Introduction to TESCREAL

The emergence of TESCREAL as a conceptual framework marks a significant milestone in our understanding of the ideological underpinnings driving technological innovation, particularly in the realm of artificial intelligence. This acronym, coined by computer scientist Timnit Gebru and philosopher Émile P. Torres, encapsulates a constellation of interconnected philosophies that have profoundly shaped the trajectory of AI development and the broader tech landscape.
- The Generative AI Patentability Landscape: Examining the WIPO Report
This insight examines the recently published Report on Generative Artificial Intelligence Patents by the World Intellectual Property Organisation (WIPO), as of mid-2024. Before delving into the analysis and the WIPO report itself, a caveat is in order: this report may not serve as the definitive authority on AI patentability within WIPO's international intellectual property law framework. While the report provides valuable insights, certain sections discussing the substantive features of Generative AI and related aspects might not directly reflect WIPO's official stance on AI patentability.

This caveat is crucial for two reasons:

Evolving Landscape: The AI patentability landscape is still developing, and individual countries are establishing their own legal frameworks, positions, and case law on the subject.

International Framework: The creation of an international intellectual property law framework under WIPO for AI patentability remains uncertain, as aspects related to economic-legal contractual rights and knowledge management may evolve.

The Three Perspectives of Analysis in this Report

This report examines the Generative AI landscape from three perspectives. The first perspective covers GenAI models: patent filings related to GenAI are analysed and assigned to different types of GenAI models (autoregressive models, diffusion models, generative adversarial networks (GANs), large language models (LLMs), variational autoencoders (VAEs), and other GenAI models).
- The French-German Report on AI Coding Assistants, Explained
The rapid advancements in generative artificial intelligence (AI) have led to the development of AI coding assistants, which are increasingly being adopted in software development processes. In September 2024, the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI) jointly published a report titled "AI Coding Assistants" to provide recommendations for the secure use of these tools. This legal insight aims to analyse the key findings from the ANSSI and BSI report. By examining the opportunities, risks, and recommendations outlined in the document, we can understand how India should approach the regulation of AI coding assistants to ensure their safe and responsible use in the software industry. The article highlights the main points from the ANSSI and BSI report, including the potential benefits of AI coding assistants, such as increased productivity and employee satisfaction, as well as the associated risks, like lack of confidentiality, automation bias, and the generation of insecure code. The recommendations provided by the French and German agencies for management and developers are also discussed.
- Deciphering Australia’s Safe and Responsible AI Proposal
This insight by Visual Legal Analytica is a response and public submission to the Australian Government's recent Proposals Paper for introducing mandatory guardrails for AI in high-risk settings, published in September 2024. This insight, authored by Mr Abhivardhan, our Founder, is submitted on behalf of Indic Pacific Legal Research LLP.

Key Definitions Used in the Paper

The key definitions provided in the Australian Government's September 2024 high-risk AI regulation proposal reflect a comprehensive and nuanced approach to AI governance: Broad Scope and Lifecycle Perspective
- The John Doe v. GitHub Case, Explained
This case analysis is co-authored by Sanvi Zadoo and Alisha Garg, along with Samyak Deshpande. The authors recently interned at the Indian Society of Artificial Intelligence and Law.

In a world where artificial intelligence is redefining the way developers write code, Copilot, an AI-powered coding program developed by GitHub in collaboration with OpenAI, was launched in 2021. Copilot promised to revolutionize software development by generating code functions based on a developer's input. However, this 'revolution' soon found itself in the midst of a legal storm. The now-famous GitHub Copilot case revolves around allegations that the AI-powered coding assistant uses copyrighted code from open-source repositories without proper credit. The lawsuit was initiated by programmer and attorney Matthew Butterick and joined by other developers. They claimed that Copilot's suggestions include exact code from public repositories without adhering to the licenses under which the code was published. Despite efforts by Microsoft, GitHub, and OpenAI to dismiss the lawsuit, the court allowed the case to proceed.

Timeline of the case

June 2021: GitHub Copilot is publicly launched in a technical preview.
November 2022: Plaintiffs file a lawsuit against GitHub and OpenAI, alleging DMCA violations and breach of contract.
December 2022: The court dismisses several of the Plaintiffs' claims, including unjust enrichment, negligence, and unfair competition, with prejudice.
March 2023: GitHub introduces new features for Copilot, including improved security measures and an AI-based vulnerability prevention system.
June 2023: The court dismisses the DMCA claim with prejudice.
July 2024: The California court affirms the dismissal of nearly all the claims.
- Supreme Court of Singapore’s Circular on Using RoughDraft AI, Explained
The Supreme Court of Singapore has issued an intriguing circular on the use of Generative AI, or RoughDraft AI (a term coined by AI expert Gary Marcus), by stakeholders in courts. The guidance in the circular merits a careful breakdown. To begin with: the circular itself shows that the Court does not regard GenAI tools as indispensable for improving court tasks, and instead treats them as mere productivity enhancement tools, unlike what many AI companies in India and abroad have tried to claim. This insight covers the circular in detail.
- CCI's Landmark Ruling on Meta's Privacy Practices
The Competition Commission of India's (CCI) recent press release announcing a substantial penalty of Rs. 213.14 crore on Meta marks a significant milestone in the regulation of digital platforms in India. This decision, centered on WhatsApp's 2021 Privacy Policy update, underscores the growing scrutiny of data practices and market dominance in the digital economy. The CCI's action reflects a proactive approach to addressing anti-competitive behaviours in the tech sector, particularly concerning data sharing and user privacy.

This policy insight examines the implications of the CCI's decision, which goes beyond mere financial penalties to impose behavioural remedies aimed at reshaping Meta's data practices in India. The order's focus on user consent, data sharing restrictions, and transparency requirements signals a shift towards more stringent regulation of digital platforms. It also highlights the intersection of competition law with data protection concerns, setting a precedent that could influence regulatory approaches both in India and globally. With a draft Digital Competition Bill proposed in March 2024, this CCI action provides valuable insights into the regulator's perspective on digital market dynamics and its readiness to enforce competition laws in the digital sphere. The decision raises important questions about the balance between fostering innovation in the digital economy and protecting user rights and market competition.

Detailed Breakdown of the CCI Press Release
- India's Draft Digital Personal Data Protection Rules, 2025, Explained
Sanad Arora, Principal Researcher, is the co-author of this Insight.

The Draft Digital Personal Data Protection (DPDP) Rules, released on January 3, 2025, represent an essential step towards simplifying data protection in the digital age. These rules aim to enhance the protection of personal data while addressing the challenges posed by emerging technologies, particularly artificial intelligence (AI). As AI continues to evolve and integrate into various sectors, ensuring that its deployment aligns with ethical standards and legal requirements is paramount. The DPDP Rules seek to create a balanced environment that fosters innovation while safeguarding individual privacy rights.

Figure 1: Draft DPDP Rules (January 3, 2025 version), explained and visualised. This chart was meticulously created by Abhivardhan and Sanad Arora as part of the explainer; download the chart below.
- Technology Law is NOT Legal-Tech: Why They’re Not the Same (and Why It Matters)
Created using Luma AI.

Technology has changed the way we communicate, transact, and live our daily lives. It's no wonder that law and technology have increasingly converged, giving rise to two distinct but often confused areas: Tech-Legal and Legal-Tech. This quick explainer clears the air on what each term means, why mixing them up creates chaos, and how to get them right.

The Basic Difference

Tech-Legal focuses on the legal frameworks, policies, and governance of technology itself. It deals with questions like: How should AI be regulated? What legal boundaries apply to blockchain or cross-border data flows? Think of Tech-Legal as creating the rules of the game for emerging technologies.

Legal-Tech, on the other hand, is about using technology to improve or automate legal services and processes. It addresses questions like: How do we efficiently manage legal cases online? Can we use AI to review contracts faster? Legal-Tech is essentially playing the game better by adopting new tools to streamline legal work.

Diving into Tech-Legal

What It Involves
- When AI Expertise Meets AI Embarrassment: A Stanford Professor's Costly Citation Affair
In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI. The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice. Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less." The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice. Hence, this legal-policy analysis delves into the incident and, as one of many similar incidents, what it can teach us about the way we approach AI-related evidence law considerations.
- Beyond the AI Garage: India's New Foundational Path for AI Innovation + Governance in 2025
This is quite a long read.

India's artificial intelligence landscape stands at a pivotal moment, where critical decisions about model training capabilities and research directions will shape its technological future. The discourse was recently energised by Aravind Srinivas, CEO of Perplexity AI, who highlighted two crucial perspectives that challenge India's current AI trajectory.

The Wake-up Call

Figure 1: The two posts on X.com on Strategic Perspectives, by Aravind Srinivas.

Srinivas emphasises that India's AI community faces a critical choice: either develop model training capabilities or risk becoming perpetually dependent on others' models. His observation that "Indians cannot afford to ignore model training" stems from a deeper understanding of the AI value chain. The ability to train models represents not just technical capability, but technological sovereignty.

A significant revelation comes from DeepSeek's recent achievement. Their success in training competitive models with just 2,048 GPUs challenges the widespread belief that model development requires astronomical resources. This demonstrates that with strategic resource allocation and expertise, Indian organisations can realistically pursue model training initiatives.

India's AI ecosystem currently focuses heavily on application development and use cases. While this approach has yielded short-term benefits, it potentially undermines long-term technological independence. The emphasis on building applications atop existing models, while important, shouldn't overshadow the need for fundamental research and development capabilities.
In short, Srinivas highlights three key issues through his posts in the larger debate over India's tech development and application layers:

Limited hardware infrastructure for AI model training
Concentration of model training expertise in select global companies
Over-reliance on foreign AI models and frameworks

This insight focuses on legal and policy perspectives around building the capabilities necessary to innovate in core AI models, while also building use-case capitals in India, including in Bengaluru and other places. In addition, this long insight covers recommendations to the Ministry of Electronics and Information Technology, Government of India, on the Report on AI Governance Guidelines Development, in the concluding sections.

The Policy Imperative: Balancing Use Cases and Foundational AI Development

Aravind Srinivas's point about AI development avenues in India's scenario is also backed by policy and industry realities. The repeal, hours ago, of the former Biden Administration's Executive Order on Artificial Intelligence by the Trump Administration demonstrated that the US Government's focus has pivoted to hard resource considerations around AI development, such as data centres, semiconductors, and talent. India has no choice but to pursue both ideas at the same time: building use-case capitals in India and focusing on foundational AI research alternatives.