When Will Regulation Enter The AI Chat?


Sanjay Sehgal, Chairman & CEO at MSys Technologies

Sanjay is the Chairman and CEO of MSys Group. He has more than 20 years of management and entrepreneurial expertise across enterprise software, sales, marketing, and operations. Sanjay holds a BE degree in electronics from the University of Delhi.
__________________________________________________

“One day, the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.” - Nathan from the movie Ex Machina

While this may seem like a harsh and dystopian outlook on AI, current legislation in all but a few countries is not equipped to deal with the pitfalls of AI. Developing comprehensive regulatory frameworks that address privacy, intellectual property, and the responsible use of generative AI is crucial to ensuring the ethical and safe deployment of this technology. These frameworks aim to strike a balance between promoting innovation and protecting the rights and interests of individuals and society as a whole.

So where is India in this digital regulatory landscape? Well, India is in the final stages of introducing the Digital India bill as a reform to regulate big tech. The Digital India Act is set to replace the Information Technology Act, 2000, and to become the foundation of the country's tech legislation going forward. Interestingly, the law is set to cover not just the current menaces of big tech, such as misinformation and deepfake videos; it also accounts for emerging technologies and will be the bedrock of a strong legislative framework in the coming years. Getting this balance right is essential for an evolving technology like AI to prosper: overly stringent regulations may hamper its potential, while laws that are too vague can be exploited as loopholes. Nonetheless, it is becoming clear that the government has to be in the loop to create a strong system for safeguarding the end user while keeping up with big tech innovations.

This would be a fascinating new chapter in the history of technological regulation in India, where the government deals with multifaceted issues of AI like the ethical dilemma of AI, compromised datasets that trigger inaccurate decision-making, cybersecurity, and more such nuanced areas. The trenches of AI may seem like a maze with no end, but here are some key facets that should be under the purview of the new law.

Privacy protection
We are all privy to the Cambridge Analytica scandal of 2018, when the political consulting firm Cambridge Analytica harvested personal data from millions of Facebook users without their consent. The data was used to create psychological profiles and target individuals with personalized political advertisements during the US presidential election. Although not strictly AI-related, machine learning algorithms were employed to analyze and exploit the data for targeted messaging.

Today, the widespread use of voice assistants like Alexa, Google Assistant, and Apple's Siri, or, for that matter, of AI-powered facial recognition in surveillance systems, has raised significant privacy concerns. Here, one has to understand the nuances of how big data functions. AI models rely heavily on large datasets for training. If these datasets contain personal or sensitive information and are not properly secured, data breaches can follow. Unauthorized access to such datasets can compromise individuals' privacy and expose them to identity theft or other forms of misuse.

IP Rights
This one is the trickiest, because generative language models rely heavily on recreating content from existing datasets. In simple terms, a model does not own the content it creates. Regulations should clarify the ownership and attribution of AI-generated works to ensure fair compensation for creators and prevent unauthorized use.

On the other hand, the datasets these models draw on, i.e., huge volumes of written material such as articles and blog posts used to generate answers, are now becoming a major concern, since the models reproduce from the works of authors and artists. In one such case, earlier this year, Getty Images sued Stability AI, creator of the popular AI art tool Stable Diffusion, for allegedly unlawfully copying and processing millions of copyright-protected images to train its software.

Responsible Use
Historical datasets may reflect biases that can inadvertently spill over into AI-led processes. For example, AI systems used in hiring and recruitment can inadvertently perpetuate bias. If the training data used to develop these systems is biased, the algorithms may learn and replicate the same biases. For instance, if historical hiring decisions were influenced by gender or race, AI models trained on that data may continue the discriminatory patterns by favoring certain candidates over others.
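The mechanism is easy to demonstrate. Below is a minimal sketch, using invented toy data (all records are hypothetical), of how a naive model trained on biased historical hiring decisions simply learns to repeat them:

```python
# Toy sketch of bias replication: a naive "model" learns the hire rate
# per group from biased historical decisions. All data is invented
# purely for illustration.
history = [
    # (gender, qualified, hired) -- both groups are equally qualified,
    # but past decisions favored one group
    ("M", True, True), ("M", True, True), ("M", False, True),
    ("F", True, True), ("F", True, False), ("F", False, False),
]

def hire_rate(records, gender):
    """The hire rate the historical data teaches for a given group."""
    outcomes = [hired for g, _, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

rate_m = hire_rate(history, "M")  # learned rate for group M: 1.0
rate_f = hire_rate(history, "F")  # learned rate for group F: ~0.33

# Both groups contain the same number of qualified candidates, yet the
# learned rates differ sharply -- a system ranking applicants by these
# rates would reproduce the historical discrimination.
```

Real hiring models are far more complex, but the failure mode is the same: the model optimizes for matching past decisions, not for fairness, so whatever bias shaped those decisions becomes part of the model.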

AI-led automation of entire industries may still be a futuristic reality, but if the past has taught us anything, it is that change is constant, and that evolution favors those who adapt. It is imperative that these regulatory frameworks be developed collaboratively, involving stakeholders such as policymakers, legal experts, technologists, ethicists, and representatives from affected industries and user communities. Additionally, these frameworks should be adaptable and periodically reviewed to keep pace with technological advancements and evolving societal needs. India is pro-AI and pro-technology, but we must also account for the safety and integrity of end users.