

AI & Law 101
Honestly, AI is the talk of the town, which is why we'll work through AI ethics & governance piece by piece.
What is data, let alone artificial intelligence?
Data is like the "food" AI consumes to grow smarter. Just as humans learn from experiences, AI systems learn by analyzing vast amounts of data. This data can be numbers, text, images, or even sensor readings.


Now, data could be numerical, categorical, visual, and more.

It also comes in different shapes. Unstructured data is everywhere: social media posts, videos, free-form text. It's scattered, with no fixed format. Structured data, by contrast, is collected and organised for a purpose: think neat rows and columns. And plenty of real-world data sits somewhere in between, partly structured and partly not.
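
To make the distinction concrete, here's a tiny Python sketch; the customer records and review text are invented purely for illustration:

```python
# Structured data: every record follows the same fixed schema,
# so you can query and aggregate it directly.
customers = [
    {"id": 1, "age": 34, "city": "Mumbai", "monthly_spend": 1200},
    {"id": 2, "age": 28, "city": "Delhi", "monthly_spend": 800},
]
average_spend = sum(c["monthly_spend"] for c in customers) / len(customers)
print(f"Average spend: {average_spend}")

# Unstructured data: free-form text (or images, audio, video) with no
# predefined fields. Meaning has to be extracted before you can use it.
review = "Loved the delivery speed, but the box arrived torn."
sentiment = "negative" if "torn" in review else "positive"  # crude extraction step
print(f"Rough sentiment: {sentiment}")
```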
Right to Access Data
Imagine you lend your friend a notebook. You have the right to ask, “Hey, can I see what you wrote about me?”
How It Works:
Companies must show you what data they've collected (e.g., your purchase history, location data).
Example: if an OTT platform tracks what you watch, you can ask for a copy of that list.


Right to Correct Errors
Let's say your teacher spells your name wrong on a test. You'd say, "That's not me—fix it!"
Here's how this right works:
If a bank has your old address, you can demand they update it.
Example: fixing a typo in your email on Amazon so you don't miss delivery updates.
Right to Delete Data
Think of a photo you posted online but later regretted. You’d delete it and say, “I don’t want this here anymore.”
How It Works:
Ask social media platforms to remove old posts or accounts.
Example: Deleting your search history from Google so it stops showing you ads for embarrassing things.

The first AI applications in law were simple databases. They evolved into more complex systems capable of performing basic legal analysis.

Before 2016, local courts in China operated their individual information systems with little to no interconnectivity. The introduction of the national smart court system mandated a uniform digital format for documents and a centralized database in Beijing. This central “brain” now analyses nearly 100,000 cases daily, ensuring consistency and aiding in the detection of malpractice or corruption.
The AI system’s reach extends beyond the courtroom. It directly accesses databases maintained by police, prosecutors, and some government agencies, significantly improving verdict enforcement by instantly identifying and seizing convicts’ properties for auction. Furthermore, it interfaces with China’s social credit system, restricting debtors from accessing certain services like flights and high-speed trains.

But guess what?
While AI can handle the syllogism and conditional reasoning of legal texts, it fails to grasp the subtleties of natural law, human rights, and the intricate web of legal judgments.


Okay, what is Ethics then?
Let's understand this with a few everyday analogies.
There are some basic principles of ethics that apply almost universally to artificial intelligence, and even a lack of jurisdiction-specific law doesn't remove the need to address them in practice.

Transparency
Imagine you’re playing a game, but the rules are hidden. It would feel unfair, right? Similarly, AI systems must be open about how they work.
Accountability
If a self-driving car causes an accident, someone must take responsibility. Blaming the car alone isn’t enough.

Privacy
Sharing someone’s secrets without permission is unethical. Similarly, AI must respect personal data.
Fairness
A referee in a sports game should treat all players equally. If they favor one team, it ruins the game. AI must also avoid favoritism.

Now, while implementing these principles may not feel easy, it's not impossible to reason about them in the most basic, practical terms.
So, what is Ethics then? Is it conditional, or unconditional?
Let's now understand the implementation value of AI Frameworks. Every ethical idea around AI boils down to whether it can be implemented or not.
There is a huge lack of country-specific AI Safety documentation.
Paralysis 2: Lack of Jurisdiction-Specific Documentation on AI Safety
Think of building a fire safety system for a city without knowing where fires have occurred or how they started. Without this knowledge, it’s hard to design effective safety measures.
Many countries don’t have enough local research or documentation about AI safety incidents—like cases of biased algorithms or data breaches. While governments talk about principles like transparency and privacy in global forums, they often lack concrete, country-specific data or institutions to back up these discussions with real-world evidence. This makes it harder to create effective safety measures tailored to local needs.
Supervised Learning
Imagine a teacher giving you a math problem and the correct answer. You learn by mimicking the process.
How It Works: Machines are trained on labeled data (input + correct output).
Examples: Spam email detection, image recognition.
Techniques include linear regression, decision trees, neural networks.
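
Here's a minimal sketch of supervised learning with scikit-learn; the toy email features and spam labels below are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy labeled data: each row is (number of links, number of "spammy" words)
# for an email, and the label says whether it was spam (1) or not (0).
X = [[8, 5], [7, 6], [6, 4], [1, 0], [0, 1], [1, 1]]
y = [1, 1, 1, 0, 0, 0]

# Training = learning to mimic the input -> correct output mapping.
model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Predict on an unseen email with 5 links and 3 spammy words.
print(model.predict([[5, 3]]))  # most likely [1], i.e. spam
```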
Unsupervised Learning
Imagine being dropped into a room full of strangers and figuring out who belongs to which group based on their behaviour.
How It Works: Machines find patterns in unlabelled data.
Examples: Customer segmentation, anomaly detection.
Techniques include K-means clustering, principal component analysis (PCA).
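
And the unsupervised counterpart, sketched with K-means; the customer numbers are, again, invented:

```python
from sklearn.cluster import KMeans

# Unlabeled data: (visits per month, average basket value) per customer.
# Nobody tells the algorithm which group anyone belongs to.
X = [[2, 20], [3, 25], [2, 22], [20, 200], [22, 210], [19, 190]]

# Ask for two clusters; K-means groups similar rows together on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: "casual" vs "heavy" shoppers
```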
Reinforcement Learning
Think of training a dog with treats. The dog learns which actions get rewards.
How It Works: Machines learn by trial and error through rewards and punishments.
Examples: Game-playing AIs like AlphaGo, robotics.
Techniques include Q-learning, deep reinforcement learning.
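
Here's the dog-and-treats idea in code: a tiny, self-contained Q-learning sketch. The corridor world and the hyperparameters are invented for illustration:

```python
import random

# A tiny corridor world: states 0..4, with a "treat" only at state 4.
# Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of each (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for _ in range(500):                       # episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly pick the best-known action, sometimes explore at random.
        if random.random() < epsilon:
            a = random.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0     # reward = the treat
        # Q-learning update: nudge the estimate toward reward + future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should be "always step right": [1, 1, 1, 1].
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```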
Semi-Supervised Learning
Imagine doing homework where only some answers are given. You figure out the rest based on what you know.
How It Works: Combines small labeled datasets with large unlabeled ones.
Examples: Medical image classification when labeled data is scarce.
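
A minimal sketch using scikit-learn's SelfTrainingClassifier, where -1 marks the "answers not given"; the points and labels are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Two well-separated clusters of points; only one example per class is
# labeled. The label -1 means "answer not given in the homework".
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [5.0, 5.1], [5.2, 5.0], [5.1, 5.2]])
y = np.array([0, -1, -1, 1, -1, -1])

# Self-training: fit on the labeled points, confidently pseudo-label nearby
# unlabeled ones, then refit -- figuring out the rest from what you know.
model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[0.1, 0.1], [5.1, 5.1]]))  # expected: [0 1]
```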
Here's a confession: never convert ethics terms into balloonish jargon, or they won't work.
Paralysis 3: Responsible AI Is Overrated, and Trustworthy AI Is Misrepresented

Imagine a company claiming its product is "eco-friendly," but all they’ve done is slap a green label on it without making real changes. This is what happens with "Responsible AI" and "Trustworthy AI."
"Responsible AI" sounds great—it’s about accountability and fairness—but in practice, it often becomes a buzzword. Companies use these terms to look ethical while prioritizing profits over real responsibility. For example, they might create flashy ethics boards or policies that don’t actually hold anyone accountable. This dilutes the meaning of these ideals and turns them into empty gestures rather than meaningful governance.
Neurosymbolic AI
Think of it as combining intuition (neural networks) with logic (symbolic reasoning). It’s like solving puzzles using both gut feeling and rules.
How It Works: Merges symbolic reasoning (rule-based systems) with neural networks for better interpretability and reasoning.
Examples: AI systems for legal reasoning or scientific discovery.
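
To give a flavour (and only a flavour; real neurosymbolic systems integrate the two parts far more deeply), here's a toy sketch where a learned model proposes and hand-written rules dispose. The features, labels, and rules are all hypothetical:

```python
from sklearn.linear_model import LogisticRegression

# "Intuition": a small learned model scores loan applications from past data.
# Features: (income, existing debts); labels are invented for illustration.
X = [[12, 0], [10, 1], [8, 2], [3, 4], [2, 5], [1, 6]]
y = [1, 1, 1, 0, 0, 0]  # 1 = historically approved
neural_part = LogisticRegression().fit(X, y)

# "Logic": explicit, human-readable rules the final decision must satisfy,
# e.g. a hypothetical legal rule that applicants must be adults.
def symbolic_rules(applicant):
    if applicant["age"] < 18:
        return False, "rule violated: applicant must be an adult"
    if applicant["debts"] > 2 * applicant["income"]:
        return False, "rule violated: debt may not exceed twice the income"
    return True, "all rules satisfied"

def decide(applicant):
    score = neural_part.predict_proba([[applicant["income"], applicant["debts"]]])[0][1]
    ok, reason = symbolic_rules(applicant)
    # The rule layer can veto the statistical guess, and explain why.
    return ("approve" if ok and score > 0.5 else "reject"), reason

print(decide({"age": 17, "income": 12, "debts": 0}))  # rejected by a rule, with a reason
print(decide({"age": 30, "income": 11, "debts": 1}))  # passes both parts
```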
The more garbage your questions about AI are, the more garbage your policy understanding of AI will be.
Paralysis 4: How AI Awareness Becomes Policy Distraction

Imagine everyone panicking about fixing potholes on one road while ignoring that the entire city’s bridges are crumbling. That’s what happens when public awareness drives shallow policymaking.
When people become highly aware of visible AI issues—like facial recognition—they pressure governments to act quickly. Governments often respond by creating flashy policies that address these visible problems but ignore deeper challenges like reskilling workers for an AI-driven economy or fixing outdated infrastructure. This creates a distraction from systemic issues that need more attention.
Beware: most Gen AI benchmarks are less trustworthy than they look.
Paralysis 5: Fragmentation in the AI Innovation Cycle and Benchmarking

Imagine you’re comparing cars, but each car is tested on different tracks with different rules—one focuses on speed, another on fuel efficiency, and yet another on safety. Without a standard way to compare them, it’s hard to decide which car is actually the best. That’s the problem with AI benchmarking today.
In AI development, benchmarks are tools used to measure how well models perform specific tasks. However, not all benchmarks are created equal: they vary in quality, reliability, and what they actually measure. That variation creates confusion, because users might assume all benchmarks are equally meaningful and draw incorrect conclusions about a model's capabilities.
Many benchmarks don’t clearly distinguish between real performance differences (signal) and random variations (noise).
A benchmark designed to test factual accuracy might not account for how users interact with the model in real-world scenarios. Without incorporating realistic user interactions or formal verification methods, these benchmarks may provide misleading assessments.
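
One way to check for yourself whether a gap is signal or noise: bootstrap the benchmark questions and count how often the models' ranking flips. Here's a sketch with simulated per-question results, where both "models" are equally accurate by construction:

```python
import random

random.seed(0)

# Per-question results (1 = correct) for two models on the same 200-item
# benchmark. Simulated here: both models are *truly* ~70% accurate, so any
# gap in their headline scores is pure noise.
model_a = [1 if random.random() < 0.70 else 0 for _ in range(200)]
model_b = [1 if random.random() < 0.70 else 0 for _ in range(200)]
print(sum(model_a) / 200, sum(model_b) / 200)  # the headline scores will almost surely differ

# Bootstrap: resample the questions many times and check how often the
# ranking flips. Frequent flips mean the gap is noise, not signal.
flips = 0
for _ in range(2000):
    idx = [random.randrange(200) for _ in range(200)]
    a, b = sum(model_a[i] for i in idx), sum(model_b[i] for i in idx)
    if (a > b) != (sum(model_a) > sum(model_b)):
        flips += 1
print(f"Ranking flipped in {flips / 2000:.0%} of resamples")
```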
Why It Matters: Governments increasingly rely on benchmarks to regulate AI systems and assess compliance with safety standards. However, if these benchmarks are flawed or inconsistent:
Policymakers might base decisions on unreliable data.
Developers might optimise for benchmarks that don’t reflect real-world needs, slowing meaningful progress.
AI governance priorities aren't always as obvious as the familiar privacy-and-accountability framing suggests.
Paralysis 6: Organizational Priorities Are Multifaceted and Conflicted

Imagine trying to bake a cake while three people shout different instructions: one wants chocolate frosting (investors), another wants it gluten-free (regulators), and the third wants it ready in five minutes (public trust). It’s hard to satisfy everyone.
Organizations face conflicting demands when adopting AI:
Investors want quick returns on investment (ROI) from AI projects.
Regulators require compliance with evolving laws like the EU AI Act.
The public expects ethical branding and transparency.
These competing priorities make it difficult for companies to create cohesive strategies for responsible AI adoption. Instead, they end up balancing short-term profits with long-term accountability—a juggling act that complicates governance.
Here's the truth: it never gets easy for anyone.
Paralysis 1: Regulation May or May Not Have a Trickle-Down Effect
Imagine writing a rulebook for a game, but when the players start playing, they don’t follow the rules—or worse, the rules don’t actually change how the game is played. That’s what happens when regulations fail to have the intended impact.
Governments might pass laws or policies to regulate AI, but these rules don’t always work as planned. For example, a law designed to make AI systems fairer might not actually affect how companies build or use AI because it’s too hard to enforce or doesn’t address real-world challenges. This creates a gap between policy intentions and market realities.
Still, there will be AI risks, and one must assess them in a reasonable way.
Think of AI risk like weather forecasting: instead of predicting rain, we're trying to predict how AI systems might affect people and society. Let's break this down in a way that focuses on actual outcomes rather than theoretical frameworks.
What are some Immediate Effects?
Individual harm (like biased lending decisions)
System failures (like AI safety incidents)
Data breaches or privacy violations
Economic displacement

What could be some Systemic Effects?
Social inequality amplification
Market concentration
Governance or political process interference
Cultural homogenisation


Instead of abstract risk categories, focus on:
Observable Impacts, such as documented incidents, user complaints, system failures, and performance disparities across target groups (see the sketch after this list)
Systemic Changes, such as market structure shifts, behavioural changes and cultural practice alterations in affected populations, and environmental impacts
Cascading Effects, such as secondary economic impacts, social relationship changes, erosion of trust in institutions, and shifts in power dynamics
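
As a concrete starting point, here's a small sketch that measures one such observable impact, an accuracy gap between two groups; the group labels, outcomes, and predictions are invented for illustration:

```python
# Measuring one observable impact: an accuracy gap across two groups.
# Each record is (group, true outcome, model prediction).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

def accuracy(group):
    rows = [(truth, pred) for g, truth, pred in records if g == group]
    return sum(truth == pred for truth, pred in rows) / len(rows)

acc_a, acc_b = accuracy("group_a"), accuracy("group_b")
print(f"group_a: {acc_a:.0%}, group_b: {acc_b:.0%}, gap: {abs(acc_a - acc_b):.0%}")
# A persistent, documented gap like this is a measurable impact you can
# track over time and report, not an abstract risk category.
```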
Always ask yourself
Who is actually affected?
What changes in behavior are we seeing?
Which impacts are measurable now?
What long-term trends are emerging?
But before we dive into AI frameworks, let's do a quick recap of AI, and of ML too.
Artificial Intelligence (AI) is like the term "transportation." It covers everything from bicycles to airplanes. AI refers to machines designed to mimic human intelligence—like learning, reasoning, problem-solving, and decision-making. But just as "transportation" includes many forms (cars, trains, boats), AI includes various approaches and techniques.


So, WTF is Machine Learning anyway?
ML focuses on teaching machines to learn from data rather than being explicitly programmed. Think of it like teaching a dog tricks by showing it treats instead of manually moving its paws.
The types of ML worth knowing (supervised, unsupervised, reinforcement, and semi-supervised) are the ones we walked through above.
Now, there are some common rights, recognised across the world, that protect you, us, and everyone else when it comes to the use and sharing of data. We covered access, correction, and deletion earlier.
Let's explore a couple more data protection rights, shall we?
Right to Opt-Out/Object
If a store keeps texting you coupons, you’d say, “Stop spamming me!”
How It Works:
Tell companies not to sell your data or send targeted ads.
Example: clicking "unsubscribe" on promotional emails from a shopping app.

Right to Withdraw Consent
If you let a friend borrow your bike but change your mind, you’d say, “Actually, I need it back.”
How It Works:
Revoke permission for apps to track your location or contacts.
Example: turning off Facebook's access to your phone's camera after initially allowing it.
