March 29, 2025

Legal Challenges of Artificial Intelligence Regulation

Introduction

Artificial Intelligence (AI) encompasses the capacity of machines to execute cognitive functions such as thinking, perceiving, learning, problem-solving, and decision-making. Initially envisioned as a technology capable of emulating human intelligence, AI has evolved significantly, surpassing its original scope. With remarkable advancements in data collection, processing capabilities, and computational power, intelligent systems are now deployed across various sectors, enhancing productivity and connectivity. As AI's capabilities have dramatically expanded, so has its utility across a growing number of fields.

While AI has the potential to deliver substantial incremental value across a wide range of sectors, adoption to date has been driven primarily by commercial considerations. In the contemporary global environment, large-scale adoption of such technology would require comprehensive strategies capable of balancing narrow financial and commercial interests with broader societal implications.

AI is set to revolutionize numerous aspects of our lives. By facilitating high-level cognitive processes and leveraging advancements in data analytics and computational power, AI has the potential to augment human intelligence and enrich daily life. However, as technology reshapes job landscapes and redefines benchmarks for technological proficiency, workforce skilling and reskilling become essential components of effective AI integration. It is crucial to recognize that AI lacks the capacity for genuine innovation. Those who believe AI can generate original thought often misunderstand its operational mechanics. Fundamentally, AI requires human input to define what constitutes "good" or "bad." It operates on learned patterns rather than independent judgment; thus, its outputs are primarily replicative rather than innovative.

Moreover, while AI tools can be valuable when strategically employed, they do not replace human judgment or expertise. New governance frameworks and policies are imperative for navigating this digital era effectively. Societal regulation must be both human-centered and environmentally conscious, balancing public interests—such as human dignity and trust—with private sector goals like profitability and innovation.

In professional fields such as law, AI tools are no magical solution, and they certainly do not replace a lawyer's good judgment. They can, however, be handy aids when used strategically.

Despite global efforts to harness AI's potential while mitigating its risks, there remains a lack of unified vision regarding effective regulation. Approaches vary widely: from comprehensive legislation in the European Union (EU) to technology-specific regulations in China and voluntary guidelines in the United States.

India stands at a critical juncture as one of the fastest-growing economies with the second-largest population globally. Recognizing the transformative potential of AI, India must strategize its approach to leverage this technology effectively. The complexities inherent in India's economic and societal challenges can serve as a model for other emerging economies seeking similar technological advancements.

Ethical and Regulatory Challenges

A significant aspect of India's AI strategy involves addressing complex global challenges through technological intervention. India's scale offers an ideal testing ground for sustainable solutions that can be scaled effectively. This includes a focus on the ethical and regulatory challenges posed by generative AI technologies, particularly concerning misinformation, deepfakes, and privacy violations.

Legal frameworks traditionally govern formal regulations enforced by authorities; however, ethics encompass moral principles guiding behavior. The emergence of AI introduces new challenges that existing legal structures do not adequately address—such as ethical development practices and accountability for automated decisions—necessitating a reevaluation of both legal and ethical frameworks.

Ethical dilemmas associated with AI include potential biases within models, misuse of generated content, and concerns regarding transparency and accountability. Although many of these issues predate AI's rise, their impact has intensified due to rapid technological advancement. While machines have historically replaced humans in various sectors, AI's ability to replicate—and potentially surpass—certain creative tasks raises profound questions about creativity and authorship.

Generative AI technologies have enabled new forms of deepfakes that extend beyond simple media manipulations. The ease with which convincing deepfakes can now be created—often without specialized skills—highlights the urgency for regulatory frameworks to address these risks comprehensively.

The environmental implications of AI technologies are also gaining attention from regulators. The development and deployment of large-scale AI models often require substantial computational resources, leading to increased energy consumption and, in turn, significant carbon emissions.

AI systems often depend on extensive datasets for training purposes. When personal data is involved, significant privacy concerns arise. Insufficient privacy safeguards may lead to unauthorized surveillance or data breaches that violate individual rights. Such violations can manifest economically, through identity theft, or emotionally, by exposing personal information to public scrutiny.

What is the ideal approach for India?

India's unique social and political context as a rapidly developing post-colonial nation has significantly influenced its approach to cyberspace and cybersecurity. Initially, India's capacity in this area was limited by a lack of awareness and insufficient technological infrastructure to establish robust cybersecurity frameworks. In contrast to the integrated regulatory frameworks seen in the EU and the United States, India currently lacks a cohesive strategy to address these pressing issues, highlighting a significant gap in legal reforms.

In considering how India should regulate AI, two primary models emerge: a state-driven regulatory framework akin to the EU's comprehensive approach, or a voluntary framework similar to existing cybersecurity standards.

The state-driven regulatory model is characterized by comprehensive, legally binding regulations imposed by government authorities. This contrasts with a voluntary framework that encourages self-regulation among industry stakeholders without imposing mandatory legal requirements. The latter approach fosters collaboration among organizations, promoting best practices and ethical guidelines while allowing flexibility in implementation.

A notable example of a state-driven regulatory model is the EU AI Act, published in the EU Official Journal on July 12, 2024. This legislation represents the first comprehensive horizontal legal framework for AI regulation across the EU. It entered into force on August 1, 2024, and becomes fully applicable on August 2, 2026. The EU AI Act aims to promote human-centric and trustworthy AI while ensuring a high level of protection for health, safety, fundamental rights, democracy, and the rule of law against the harmful effects of AI systems. It also seeks to support innovation and maintain the functioning of the internal market by balancing public interest with technological advancement. The Act classifies AI systems according to varying levels of risk and imposes specific requirements based on these classifications. Conversely, a voluntary framework offers a more adaptable approach that encourages collaboration between businesses, regulators, and other stakeholders to develop shared standards and practices.

By reducing regulatory burdens, this model can stimulate innovation and give companies greater freedom to develop new technologies. Tailored regulations can enhance flexibility and adaptability in response to a rapidly changing technological landscape.

AI and the First Amendment

The United States has emerged as a pivotal player in the development and regulation of artificial intelligence in the cyber sphere. Over the years, multiple federal and state-level initiatives have been launched to foster innovation while balancing it against the broader interests of society. As AI continues to evolve, it presents a plethora of challenges that may conflict with fundamental rights, particularly those enshrined in the First Amendment. The First Amendment to the US Bill of Rights protects freedom of speech, the press, assembly, and the right to petition the Government for a redress of grievances. However, it is essential to note that people hold these rights, not the technology itself. Therefore, arguing for the application of First Amendment rights to generative artificial intelligence tools may rob the very legislation of its human focus.

Nevertheless, individuals and organizations that utilize AI to create content and claim it as their own are entitled to First Amendment rights as speakers. Furthermore, the public can access AI-generated content, even though the AI lacks constitutional rights. However, individuals cannot evade liability by replacing human-generated speech with AI-generated speech. For example, a healthcare provider using AI to dispense medical advice remains liable for malpractice. The ability of generative AI to produce realistic yet false content (such as deepfakes) complicates the regulation of speech. This blurs the lines of truth and trust, challenging existing legal frameworks that govern speech and misinformation.

Ultimately, effective regulation necessitates collaboration among governments, industry stakeholders, and civil society. Achieving consensus on ethical standards and best practices poses significant challenges due to diverse interests at play. As countries worldwide grapple with how best to govern AI—balancing innovation with public interest protection—the complexities surrounding algorithmic bias and automated decision-making highlight the inadequacies of existing legal frameworks designed before these technologies emerged.

(Author: Michelle Subin, law student at Symbiosis Law School, Nagpur. The views expressed are personal.)