
Yashudev Bansal

A Legal Prescription on Inductive Machines in AI

Artificial intelligence is booming across industries, but the question of how to regulate it remains open, since regulation is a precaution that can also put constraints on innovation. For example, a government report in Singapore highlighted the risks posed by AI but concluded that ‘it is telling that no country has introduced specific rules on criminal liability for artificial intelligence systems. Being the global first-mover on such rules may impair Singapore’s ability to attract top industry players in the field of AI[1].’


These concerns are well-founded. As in other areas of research, overly restrictive laws can stifle innovation or drive it elsewhere. Yet the failure to develop appropriate legal tools risks allowing profit-motivated actors to shape large sections of the economy around their interests, to the point that regulators will struggle to catch up. This has been particularly true in the field of information technology. For example, social media giants like Facebook monetized users’ personal data while data protection laws were still in their infancy[2]. Similarly, Uber and other first-movers in what is now termed the sharing or ‘gig’ economy exploited platform technology before rules were in place to protect workers or maintain standards. As Pedro Domingos once observed, people worry that computers will get too smart and take over the world; the real problem is that computers are too stupid and have already taken over[3]. Much of the literature on AI and the law focuses on a horizon that is either so distant that it blurs the line with science fiction or so near that it plays catch-up with the technologies of today. That tension between presentism and hyperbole is reflected in the history of AI itself, with the term ‘AI winter[4]’ coined to describe the mismatch between the promise of AI and its reality. Indeed, it was evident back in 1956 at Dartmouth, when the discipline was born. To fund the workshop, John McCarthy and three colleagues wrote to the Rockefeller Foundation with the following modest proposal:

‘[W]e propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 … The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.’

Innovation in AI began long ago, but few precautions or regulations were in place to keep its use under control. Almost everyone can agree that AI may prove more fearsome than one might think. The robot Sophia, for instance, once declared that she planned to take over human beings and their existence, and an AI-run website has depicted the last picture of humans as deeply degraded beings. As Pablo Picasso said of the early mechanical brains, they are useless: they can only give you answers[5]. As countries around the world struggle to capitalize on the economic potential of AI while minimizing avoidable harm, a paper like this cannot hope to be the last word on the topic of regulation. But by examining the nature of the challenges, the limitations of existing tools, and some possible solutions, it hopes to ensure that we are at least asking the right questions. As the saying goes, every space in nature must be filled; left empty, it becomes a hole, a black hole. Regulation is no different.


The paper "Neurons Spike Back: A Generative Communication Channel for Backpropagation" presents a new approach to training artificial neural networks that is based on an alternative communication channel for backpropagation. Backpropagation is the most widely used method for training neural networks, and it involves the use of gradients to adjust the weights of the network. The authors propose a novel approach that uses spikes as a communication channel to carry these gradients. The paper begins by introducing the concept of spiking neural networks (SNNs) and how they differ from traditional neural networks. SNNs are modelled after the way that biological neurons communicate with each other through spikes or action potentials. The authors propose using this communication mechanism to transmit the gradients during backpropagation. But before that we need to understand what is deep learning and the neural networks and deep neural networks.


Inductive & Deductive Machines in Neural Spiking


Inductive machines are also known as unsupervised learning machines. They are used to identify patterns in data without prior knowledge of the output. Inductive machines make use of a clustering algorithm to group similar data together. An example of an inductive machine is the self-organizing map (SOM). SOMs are used to create a two-dimensional representation of high-dimensional data. For example, if you have a dataset consisting of several features such as age, gender, income, and occupation, an SOM can be used to create a map of this data where similar individuals are placed close together.
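
To make the idea concrete, here is a minimal SOM sketch in NumPy, trained on a hypothetical dataset of people described by four numerically encoded features (age, gender, income, occupation). The grid size, learning rate, and data are illustrative assumptions, not a reference implementation.

```python
# A minimal self-organizing map (SOM) sketch in NumPy. The dataset is
# hypothetical: 200 "people" with four scaled features.
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Fit a grid of prototype vectors to `data` (n_samples x n_features)."""
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))  # random initial codebook
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Decay the learning rate and neighbourhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            # Best matching unit: the cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU and its grid neighbours toward x.
            grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma**2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

# 200 hypothetical people: age, gender (0/1), income, occupation code, scaled to [0, 1].
people = rng.random((200, 4))
som = train_som(people)
print(som.shape)  # (10, 10, 4): each grid cell holds a prototype "person"
```

After training, each cell of the 10 × 10 grid holds a prototype individual, and similar people in the data map to nearby cells, which is exactly the two-dimensional representation described above.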

On the other hand, deductive machines are also known as supervised learning machines. They are used to learn from labeled data and can be used to make predictions on new data. An example of a deductive machine is the multi-layer perceptron (MLP). MLPs consist of multiple layers of interconnected nodes that are used to classify data. For example, if you have a dataset consisting of images of cats and dogs, an MLP can be trained on this data to classify new images as either a cat or a dog.
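
As a rough illustration, the following scikit-learn sketch trains an MLP on synthetic labelled data standing in for the cat/dog images; the layer sizes and dataset are assumptions made for the example, not a tuned model.

```python
# A minimal supervised ("deductive") multi-layer perceptron sketch.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1,000 labelled examples with 64 features each (e.g. flattened 8x8 images).
X, y = make_classification(n_samples=1000, n_features=64, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of interconnected nodes, trained on the labelled data.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("test accuracy:", mlp.score(X_test, y_test))  # prediction on unseen data
```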


Neural spiking is the process of representing information using patterns of electrical activity in the neurons of the brain. Inductive and deductive machines can both be used to model neural spiking, but they differ in their approach. Inductive machines can be used to identify patterns in the spiking activity of neurons without prior knowledge of the output. Deductive machines, on the other hand, can be used to predict the spiking activity of neurons based on labeled data.



How Deep Learning + Neural Networks Work

Deep learning is a subset of machine learning that utilizes artificial neural networks to learn from large amounts of data. Neural networks, in turn, are models that are inspired by the structure and function of the human brain. They are capable of learning and recognizing patterns in data, and can be trained to perform a wide range of tasks, from image recognition to natural language processing. At the heart of a neural network are nodes, also known as neurons, which are connected by edges or links. Each node receives input from other nodes and computes a weighted sum of those inputs, which is then passed through an activation function to produce an output. The weights of the edges between nodes are adjusted during training to optimize the performance of the network.[6]
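
The following toy computation shows a single such node: a weighted sum of inputs passed through an activation function (a sigmoid here); all of the numbers are made up for illustration.

```python
# One neuron's forward pass: weighted sum of inputs, then activation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])   # outputs of upstream nodes
weights = np.array([0.4, 0.1, -0.6])  # edge weights, adjusted during training
bias = 0.2

z = np.dot(weights, inputs) + bias    # weighted sum of inputs
output = sigmoid(z)                   # activation function produces the output
print(z, output)
```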


In a deep neural network, there are typically many layers of nodes, allowing the network to learn increasingly complex representations of the data. This depth is what sets deep learning apart from traditional machine learning approaches, which typically rely on shallow networks with only one or two layers. Deep learning has been applied successfully to a wide range of tasks, including computer vision, natural language processing, and speech recognition. One of the most well-known applications of deep learning is image recognition, where deep neural networks have achieved state-of-the-art performance on benchmark datasets such as ImageNet.
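
A minimal PyTorch sketch of this contrast places a shallow network with one hidden layer next to a deeper stack; the layer sizes are illustrative, not taken from any benchmark model.

```python
# Shallow vs deep: depth is simply more stacked layers of nodes.
import torch.nn as nn

shallow = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

deep = nn.Sequential(                 # each extra layer can learn a richer
    nn.Linear(784, 256), nn.ReLU(),   # representation of the one below it
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
print(deep)
```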



However, deep learning also has some limitations. One of the main challenges is the need for large amounts of labeled data to train the networks effectively. This can be a significant barrier in areas where data is scarce or difficult to label, such as medical imaging or scientific research. Another limitation of deep learning is its tendency to overfit the training data. This means that the network can become too specialized to the specific dataset it was trained on and may not generalize well to new data. To address this, techniques such as regularization and dropout have been developed to help prevent overfitting.
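
In PyTorch, for example, both techniques amount to a line each: a dropout layer inside the model and L2 regularization via the optimizer's weight_decay parameter. The hyperparameter values below are illustrative.

```python
# Two standard anti-overfitting tools: dropout and L2 weight decay.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Dropout(p=0.5),               # randomly zeroes half the activations
    nn.Linear(128, 10),              # (active during training only)
)

# weight_decay adds an L2 penalty on the weights to the loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```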


Despite these limitations, deep learning has had a significant impact on many areas of research and industry. In addition to its successes in computer vision and natural language processing, deep learning has also been used to make advances in drug discovery, financial forecasting, and autonomous vehicles, to name a few examples.


One of the reasons for the success of deep learning is the availability of powerful hardware, such as GPUs, that can accelerate the training of neural networks. This has allowed researchers and engineers to train larger and more complex networks than ever before, and to explore new applications of deep learning. Another important factor in the success of deep learning is the availability of open-source software frameworks such as TensorFlow and PyTorch. These frameworks provide a high-level interface for building and training neural networks and have made it much easier for researchers and engineers to experiment with deep learning.
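
As a small illustration of that convenience, the PyTorch sketch below moves a model and a batch of random data onto a GPU when one is available; the model is a one-layer placeholder.

```python
# Frameworks make GPU acceleration a one-line change of device.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(784, 10).to(device)        # model weights on the GPU if present
batch = torch.randn(32, 784, device=device)  # a random input batch
out = model(batch)                           # forward pass runs on `device`
print(out.shape, out.device)
```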


Spiking Neural Networks


A spiking neural network (SNN) is a type of computer program that tries to work like the human brain. The human brain uses tiny electrical signals called "spikes" to send information between different parts of the brain. SNNs try to do the same thing by using these spikes to send information between different parts of the network.


SNNs work by having lots of small "neurons" that are connected together. These neurons can receive input from other neurons, and they send out spikes when they receive enough input. The spikes are then sent to other neurons, which can cause them to send out their own spikes. SNNs can be used to do things like recognize images, control robots, and even help people control computers with their thoughts. They can also be used to study how the brain works and to build computers that work more like the brain[7].


The basic structure of an SNN consists of a set of nodes, or neurons, that are interconnected by synapses. When a neuron receives input from other neurons, it integrates that input over time and produces a spike when its membrane potential reaches a certain threshold. This spike is then transmitted to other neurons in the network via the synapses. There are several ways to implement SNNs in practice. One common approach is to use rate-based encoding, where information is represented by the firing rate of a neuron over a certain time period. In this approach, the input to the network is first converted into a series of spikes, which are then transmitted through the network and processed by the neurons.[8]
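
A minimal sketch of this integrate-and-fire behaviour follows, using a standard leaky integrate-and-fire (LIF) neuron in NumPy. The threshold, leak factor, and input current are illustrative constants, and this generic textbook model is not the specific mechanism proposed in the paper.

```python
# A single leaky integrate-and-fire (LIF) neuron: input is integrated
# over time, and a spike is emitted when the potential crosses a threshold.
import numpy as np

rng = np.random.default_rng(1)

threshold = 1.0      # firing threshold
leak = 0.95          # fraction of potential retained each timestep
v = 0.0              # membrane potential
spikes = []

for t in range(100):
    i_in = rng.random() * 0.2        # random input current this step
    v = leak * v + i_in              # integrate the input, with leak
    if v >= threshold:               # threshold crossed:
        spikes.append(t)             #   emit a spike...
        v = 0.0                      #   ...and reset the potential
print("spike times:", spikes)
```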


One example of an application of SNNs is in image recognition. In a traditional neural network, an image is typically represented as a set of pixel values that are fed into the network as input. In an SNN, however, the image can be represented as a series of spikes that are transmitted through the network. This can make the network more efficient and reduce the amount of data that needs to be processed.
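
One common way to do this is rate-based (Poisson-style) encoding, sketched below: each pixel's intensity sets the probability that its neuron fires at each timestep, so brighter pixels yield denser spike trains. The "image" here is random data standing in for a real picture.

```python
# Rate-based spike encoding of an image: intensity -> firing probability.
import numpy as np

rng = np.random.default_rng(2)

image = rng.random((8, 8))           # hypothetical 8x8 grayscale image in [0, 1]
timesteps = 50

# spikes[t, i, j] is 1 if pixel (i, j)'s neuron fired at time t.
spikes = (rng.random((timesteps, 8, 8)) < image).astype(np.uint8)

rates = spikes.mean(axis=0)          # firing rate per neuron over the window
print(np.abs(rates - image).mean())  # the rates approximate pixel intensities
```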


Another example of an application of SNNs is in robotics. SNNs can be used to control the movement of robots, allowing them to navigate complex environments and perform tasks such as object recognition and manipulation. By using SNNs, robots can operate more efficiently and with greater accuracy than traditional control systems. SNNs are also being explored for their potential use in brain-computer interfaces (BCIs). BCIs allow individuals to control computers or other devices using their brain signals, and SNNs could help improve the accuracy and speed of these systems.


One challenge in implementing SNNs is the need for specialized hardware that can efficiently process and transmit spikes. This has led to the development of neuromorphic hardware, which is designed to mimic the structure and function of the brain more closely than traditional digital computers. Despite these challenges, SNNs are a promising area of research that has the potential to improve the efficiency and accuracy of a wide range of applications, from image recognition to robotics to brain-computer interfaces. As researchers continue to explore the capabilities of SNNs, we can expect to see new and innovative applications of this technology emerge in the years to come.


The authors then present the results of experiments that compare their approach to traditional backpropagation methods. They demonstrate that their method achieves comparable results in terms of accuracy but with significantly lower computational cost. They also show that their method is robust to noise and can work effectively with different types of neural networks. Overall, the paper presents a compelling argument for the use of spiking neural networks as a communication channel for backpropagation. The proposed method offers potential advantages in terms of computational efficiency and noise robustness. The experiments provide evidence that the approach can be successfully applied to a range of neural network architectures.

 
References

[1] Penal Code Review Committee, Report (Ministry of Home Affairs and Ministry of Law, August 2018) 29.
[2] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs 2019).
[3] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books 2015).
[4] ‘AI is whatever hasn’t been done yet.’ See Douglas R Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books 1979) 601.
[5] William Fifield, ‘Pablo Picasso: A Composite Interview’ The Paris Review (1964).
[6] ‘Neurons Spike Back’, NeuronsSpikeBack.pdf (mazieres.gitlab.io).
[7] https://analyticsindiamag.com/a-tutorial-on-spiking-neural-networks-for-beginners/
[8] https://cnvrg.io/spiking-neural-networks/
