AI and Law

I. INTRODUCTION AND BACKGROUND OF AI

Law and AI interact in complex, dynamic ways. Artificial intelligence (AI) can support decision-making, assist with legal research, and automate tasks such as document screening and contract analysis, all of which have the potential to transform the legal sector, much as AI is expected to transform other industries. However, the development and application of AI raise a number of ethical and legal issues, including those pertaining to bias, privacy, accountability, and liability.

As the technology matures, laws and regulations addressing specific challenges posed by AI are likely to be developed. Laws addressing the application of AI to the criminal justice system, for instance, might cover the use of predictive algorithms in sentencing. Laws governing AI in the workplace, including its use in hiring and performance reviews, may also emerge.

In response to these difficulties, many nations are starting to implement laws and rules regarding the use of AI in the legal industry. The European Union, for instance, has enacted the General Data Protection Regulation (GDPR), which governs the processing of personal data, including by AI systems. Although there is no federal law in the US that specifically addresses AI, relevant state and federal laws may apply, such as those relating to privacy and anti-discrimination.

The relationship between artificial intelligence and the law will ultimately depend on a variety of factors, including social objectives and values, the speed at which the technology advances, and the capacity of legal institutions to keep pace with those advancements. To ensure that AI is used in a way that is compatible with society’s ideals and that serves everyone’s interests, governments, corporations, and individuals must collaborate.

One such major field is the media. As AI grows, media agencies that incorporate it are likely to expand significantly in the market, since much of the process of writing the news can be automated.[1]

Across the globe, artificial intelligence is currently one of the most significant developments in technology, and it is applied in many different fields to make human activities easier and more productive. Marvin Minsky, among other authors, describes AI as “the science of making machines do things that would require intelligence if done by men.”[2] “Artificial intelligence is not a single technology; rather, it is viewed as a field with numerous subfields, including deep learning, robotics, machine learning, and language processing.”[3]

In an effort to combat crime and terrorist activity within their borders, many countries have heavily integrated artificial intelligence into their police services. Most commonly, police deploy AI in their surveillance systems through facial recognition cameras. Because AI systems can forecast crime from historical data, identifying where crime is likely to occur more frequently, they are attractive to governments seeking to support their officers at lower employment costs.[4]

In addition to its potential benefits, artificial intelligence poses a threat to people’s fundamental human rights. AI has the potential to establish a surveillance state in which it would be feasible to follow every human action, endangering people’s privacy. Furthermore, it has been observed that AI amplifies pre-existing bias, perpetuating the discrimination that currently exists in society. Since AI is expanding quickly in the nation and there is currently no legislation governing it, misuse of the technology in its early stages could lead to social unrest. To prevent serious exploitation of the technology, it will be crucial for the government to address these difficulties at the outset of AI implementation.

The growth of AI also affects the working methodology of media agencies in other ways. Large media corporations can use AI to target consumers, with consequences both for fundamental rights and for competition in the market. Fair competition must be maintained, because with the use of AI the dissemination of information may be biased and centrally shaped by the data on which the AI is trained. Targeted advertising using AI can confer several advantages on big media houses, and there is also a risk that politically motivated news may be published.

This study will consider the potential drawbacks of artificial intelligence as well as its effects on the media and fundamental rights. The main focus of our analysis will be India, a developing nation, and how AI affects democracies in which the welfare of the populace is prioritised.

II. ARTIFICIAL INTELLIGENCE IMPACT ON FUNDAMENTAL RIGHTS

The way the state uses AI is shifting paradigms as technology advances, and this trend will only intensify going forward. This raises the question of how AI might affect fundamental rights, given the numerous regulatory issues surrounding artificial intelligence. First, there are ex ante hurdles (issues with AI research and development), such as the opacity, discreteness, diffuseness, and discreetness problems. Second, there are ex post obstacles (issues arising from the deployment and application of AI), including the foreseeability, narrow control, and general control problems.[5]

We will first define the ex ante problems. a) The discreetness problem indicates that AI projects can be carried out in very small settings that are difficult to identify; anyone with an understanding of AI may therefore be able to develop it. b) The diffuseness problem means that AI can be built anywhere in the world by a diffuse group of individuals operating under diffuse authority. c) The discreteness problem indicates that AI initiatives can use discrete technologies and components whose full potential only becomes visible once the components are assembled. d) The opacity problem refers to the fact that AI is built on encrypted data that is invisible to governments, making it difficult for them to regulate.

We will now examine the ex post problems. a) The foreseeability problem refers to the ability of AI programmes to function in unprecedented ways not anticipated even by the AI’s inventor, which may result in a “liability gap.” b) The narrow control problem allows AI to function so independently that even its creators are unable to recognise when AI is at work. c) Under the general control problem, AI may be resistant to human control altogether; authors such as Nick Bostrom discuss this issue in Superintelligence.[6]

A.    Right to Privacy and AI

Saudi Arabia has granted artificial intelligence robots like “Sophia” the status of citizens. It may not be long before other nations begin to acknowledge the rights of artificially intelligent robots with intelligence levels on par with or higher than those of humans. We are “on the cusp of a technological revolution which will alter the way we live, work, and relate to one another,” according to Schwab, and the extent, scope, and complexity of the transformation will be unprecedented in human experience.

One thing is certain despite the uncertainty surrounding the precise trajectory of events: all stakeholders in the global political system, ranging from academics and civil society to the public and corporate sectors, must participate in a coordinated and all-encompassing response.[7]

Elon Musk, a well-known businessman and technology expert, has said that artificial intelligence (AI) poses a greater risk than nuclear weapons.[8] It should be noted that the threat Musk describes goes well beyond a simple infringement of fundamental rights. Since no other living thing has coexisted with humans at the same level of intellect, the prospect of sentient AI-powered machines that could be 100 times more intelligent than humans is extraordinarily risky to contemplate.

This study will not, however, examine the risks that artificial intelligence poses to human existence. We will instead examine the potential violations of fundamental rights that could result from the application of AI, such as the creation of a surveillance state in which the government can see every move its citizens make. Such states have a profound effect on people’s fundamental rights, and they will also affect how the nation’s media outlets operate: with AI, those outlets may deliver biased information to the public.

One could argue that technology’s opacity and complexity lend it an air of authority, since personal data is kept out of sight. But ignorance of how AI works and the inability to challenge its outcomes have raised many human rights concerns.[9]

Here, unbridled AI devices have the potential to seriously violate human rights, since police in certain jurisdictions already have disproportionate authority to make arrests, conduct searches, and use lethal force when necessary; using AI would only increase this authority. In such a case, the surveillance state would become a reality. It is important to highlight that Article 21 of the Constitution guarantees every person the right to life and personal liberty,[10] which can only be restricted by following the procedure established by law.

In the Maneka Gandhi case, the Hon’ble Supreme Court observed that Articles 14 and 19 should be read in conjunction with Article 21, and that the law’s substantive provisions must also be reasonable.[11] A law is deemed reasonable under Article 21 if the procedure it lays down is not capricious; a legitimate procedure is one that strives to enhance the quality of life. One such right derived from Article 21, and guaranteed to everyone, is the right to privacy.

According to Nariman J., the constitutional foundation for privacy is contained in the Preamble: “The dignity of the individual encompasses the right of the individual to develop to the full extent of his potential.” Such development is possible only if a person is free to make basic decisions and retains control over the sharing of personal information, which could otherwise be violated by unauthorised use.

The main issue is that India does not yet have any law protecting citizen data gathered by AI, and in the absence of such a law it can be egregiously abused.[12] The Hon’ble Supreme Court did, however, settle the law in Justice K.S. Puttaswamy (Retd) v. Union of India, where the constitutionality of the Aadhaar card, which gathers biometric information from individuals and stores it on government servers, was questioned.[13]

The scheme was disputed on the ground that it infringed the right to privacy, even though no court had until then declared the right to privacy a fundamental right. The case is momentous because it established the right to privacy as an essential component of fundamental rights: in a majority ruling, the nine-judge bench held that Article 21 includes the right to privacy.[14]

Furthermore, the Honourable Justice Subba Rao noted that the “right to personal liberty takes in not only a right to be free from restrictions placed on his movements but also free from encroachments on his private life” in the minority judgement of the much older case of Kharak Singh.[15] “Although the right to privacy is not specifically stated as a fundamental right in our constitution, it is nevertheless a necessary component of individual liberty.” Every citizen is guaranteed the “right to be let alone” by their right to privacy.[16] 

Despite not being stated directly, it is a significant aspect of Article 21 of the Constitution.[17] The Supreme Court of the United States likewise acknowledges a constitutional right to personal privacy, grounded in the First, Fourth, and Fifth Amendments, despite the fact that the United States Constitution does not specifically mention any right to privacy.[18]

The Supreme Court has ruled unanimously that the right to privacy is a fundamental right, and the very fact that AI enables a highly intrusive surveillance state raises the possibility of abuses of that right. Furthermore, AI has the capacity to exacerbate societal biases already in place. AI uses data gathered from society, and any biases present in that society will also be incorporated into the AI’s algorithm. For example, prejudice exists in American society in the assumption that black people commit more crimes than white people.

B.    Right to equality and AI

The biggest threat posed by AI is the persistence of discrimination due to biased algorithms. AI developers have repeatedly claimed that because the algorithm is data-driven, it is impervious to human bias, and that the results are therefore entirely impartial and do not lead to discrimination of any kind.[19] This claim has now been contested by a number of international human rights and technology organisations: AI carries an inherent risk of propagating and maintaining the social biases that exist today.

The reason lies in the data: AI systems are trained to assess data and then replicate the patterns they discover in it. This is where the problem arises, for when AI replicates historical trends it unavoidably perpetuates societal biases. The result is what is known as data bias. Unfortunately, the data available is most often already biased, which contributes to the generalisation of bias in society.[20]
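The mechanism described above can be illustrated with a minimal, purely hypothetical sketch: a naive "model" that memorises the majority outcome for each group in its training data. The group names, outcomes, and figures below are invented for illustration; the point is only that a system which replicates historical patterns turns past bias into future policy.

```python
# Minimal sketch of "data bias": a naive model that replicates the
# majority outcome it observes for each group in its training data.
from collections import defaultdict

def train(records):
    """records: list of (group, outcome) pairs from historical decisions."""
    counts = defaultdict(lambda: {"approve": 0, "reject": 0})
    for group, outcome in records:
        counts[group][outcome] += 1
    # The "model" memorises the majority outcome per group.
    return {g: max(c, key=c.get) for g, c in counts.items()}

# Hypothetical history in which group B was mostly rejected for
# reasons unrelated to merit.
history = ([("A", "approve")] * 80 + [("A", "reject")] * 20
           + [("B", "approve")] * 30 + [("B", "reject")] * 70)

model = train(history)
print(model)  # {'A': 'approve', 'B': 'reject'} — the past bias is now policy
```

Real machine-learning systems are far more sophisticated, but the same dynamic applies: if the historical record encodes discrimination, a model optimised to reproduce that record will reproduce the discrimination with it.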

Article 14 offers protection against discrimination by the State,[21] which is implicated if discrimination arises in society as a result of the State’s use of AI technology. It is frequently observed in industrialised nations that the application of AI is contributing to a growth in racial discrimination in society, contrary to Article 14.[22] Facial recognition systems claim classification accuracy above 90%, but these outcomes are not always accurate. According to a Harvard University investigation, a growing body of data shows differing error rates across demographic groups, with individuals who are female, Black, and between the ages of 18 and 30 consistently having the lowest accuracy.[23]

In the landmark case of Bachan Singh, Justice Bhagwati stated that the rule of law is against arbitrariness; not even a single instance of discrimination is permissible.[24]

The other violation of rights can be of the freedom of speech and expression. One of the major research contentions of this article is to examine how media agencies and the fundamental right to speech and expression would be affected by the advent of AI.

C.    Media, Freedom of Speech and Expression and AI

Our analysis rests on the fact that AI has a significant effect on freedom of speech and expression. The expression of ideas often starts with the information we receive: we can only reflect on what we consume, and the quality of the information we receive shapes the quality of our expression. Even where we believe there is freedom of expression, it may be shaped by biases in the content we consume. Biases are inherently present in society, and they can grow further with the use of AI.[25] The data on which AI is trained is not free from such biases, as various studies establish.

Turning to the media, it is considered the fourth pillar of democracy, and freedom of the press is regarded as essential in a democratic country; the Hon’ble Supreme Court has recognised it as part of the freedom of speech and expression.[26] As already noted, AI poses certain threats to the freedom of speech and expression because it can amplify biases. Beyond that, AI presents other challenges for the media and free speech.

Furthermore, private actors, specifically search engine and social media platform providers, use artificial intelligence (AI) to filter content for the purposes of content moderation (the identification, removal, or reprioritisation of “undesired” content) and content curation (the ranking and distribution of customised information).[27] The goal of both practices is to control speech in order to improve user experience, promote online conversation, and, most importantly, raise revenue.

Large-scale user behavior tracking makes it possible for AI to rank and filter content. Large amounts of precise data are needed for AI to assess and forecast the “relevance” of information. Additionally, this data makes advertising easier, which is a major component of the revenue strategy of many internet intermediaries. Extensive data gathering and processing is encouraged by the commodification of personal data for targeted advertising, which equates to profit. This phenomenon is known as “surveillance capitalism.”[28]
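The ranking-by-tracked-behaviour dynamic described above can be sketched in a few lines. Everything here is an illustrative assumption: the function names, the "engagement" profile, and the scoring rule are invented for exposition and do not represent any platform's actual algorithm, which would be vastly more complex.

```python
# Hypothetical sketch of engagement-driven curation: a feed is reordered
# by a "relevance" score derived from tracked user behaviour.

def relevance(item, profile):
    # Score rises with how often the user has engaged with the item's topic.
    return profile.get(item["topic"], 0)

def curate(feed, profile):
    # Most-engaged topics come first; everything else is pushed down.
    return sorted(feed, key=lambda item: relevance(item, profile), reverse=True)

profile = {"politics": 9, "sports": 2}  # tracked engagement counts (assumed)
feed = [
    {"id": 1, "topic": "science"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "sports"},
]
print([item["id"] for item in curate(feed, profile)])  # [2, 3, 1]
```

Even this toy version shows why such systems reward data collection: the more behaviour is tracked, the finer the profile, the more "relevant" (and monetisable) the ranking becomes, which is precisely the incentive structure of surveillance capitalism described above.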

Intermediaries make money by profiling and commercialising the public domain while providing services for free. Because this is inherently intrusive, it fosters potential abuses of power and widespread state control. While all forms of surveillance have a chilling effect on the media and free speech, artificial intelligence in particular has the potential to undermine source protection and investigative journalism.

AI is sometimes described as a “black box,” meaning that its inner workings are not visible. This can give rise to the false belief that what comes out of it is an impartial and objective portrayal of reality. Users may be unaware that AI is in use, how it generates search results, or how it promotes or deletes material; nor is it always clear when and how AI is used to stifle the media through monitoring or other means.[29] This lack of awareness, together with the opacity of AI programmes, is a serious flaw.

Public discourse may suffer greatly from opaque AI that controls information distribution in accordance with corporate objectives, especially given the dominance of a small number of intermediaries in the market. Oligopolies now function as private speech judges, establishing the guidelines for international internet communication and information access. The only option available to anyone who wishes to engage in the online world is to submit to the regulations and monitoring of hegemonic middlemen. Furthermore, governmental monitoring and press control on political grounds may be made easier by such private AI systems and vast digital footprints.

Because they structurally shift power to the detriment of high-caliber journalism, the ad-driven economic models at the heart of today’s internet structure have had a dramatic impact on the sustainability of traditional media. This disparity is further shifted by the application of AI technologies, which has an especially large influence in nations with poor internet penetration or no robust public service media. Media freedom is seriously threatened by any deliberate use of AI to obstruct independent reporting, whether it be by widespread surveillance of investigative journalists, targeted censorship, or the deployment of AI-driven bots to attack and silence specific journalists.

However, even in the absence of malice, significant concerns attend the use of AI to monitor speech in order to filter particular content or spread information. While employing AI to shape and moderate information at scale exacerbates many existing difficulties and creates new ones, many of the fundamental questions surrounding content removal and curation are not specific to AI. The use of AI in content moderation and curation is examined in the sections that follow, along with its possible implications for media freedom and free expression.[30]

III. MEDIA AND AI

Before moving to solutions for how the use of AI can be tackled while ensuring the dissemination of real and correct information, it is pertinent to note the challenges AI poses to fundamental rights and the remedies available in cases of their violation. First, we will examine whether AI can be brought within the fundamental rights framework; we can then avail the remedy if any fundamental right is violated by the use of AI.

A.    Impact on Fundamental rights

Whether fundamental rights apply to AI depends on the user or application, since in the majority of cases it is the State that is accused of violating fundamental rights.[31] A few fundamental rights, such as the prohibitions on untouchability, the employment of children under the age of 14, and forced labour, can be enforced against private individuals. In this work, we will address only the situation in which the State violates a fundamental right, not the latter case. This implies that the State may be held accountable if AI technology utilised by the State infringes a fundamental right.

When the State is held accountable for violating a fundamental right, it is important to remember that the violation must occur in the course of the State’s regular operations. Similarly, the State is held accountable for the acts of any government employee who breaches a fundamental right in the course of their employment. But it is debatable whether AI devices qualify as government workers.

Currently, no court has addressed the question of whether artificial intelligence falls within the purview of legal personhood, though a court could soon recognise AI as a legal fiction for the purpose of legal rights. A UN report states that “India is among the top countries for publications in specific categories such as computer vision and natural language processing and is emerging as a new target for patent filings in the field of artificial intelligence.”[32]

Moving to the dissenting opinion of Field, J. in Munn v. Illinois,[33] it was stated that the right to life includes living with dignity and not merely an animal existence.

This makes it abundantly clear that AI cannot be granted the right to life and personal liberty simply because it possesses intelligence comparable to that of humans. Still, there is no settled answer to what happens when AI infringes fundamental rights. Since AI machines possess their own intelligence, they can reason like humans and produce results with greater accuracy and precision; if their actions result in violations of human rights, those violations may be challenged under Articles 32 or 226 of the Indian Constitution. The primary issue is determining who is responsible for AI’s conduct.

Through its inventive jurisprudence, the esteemed Supreme Court of India has granted rights to animals alongside those of humans. In Animal Welfare Board of India v. A. Nagaraja, the Hon’ble Supreme Court observed that animal life may fall under the protection of the right to life provided by Article 21 of the Indian Constitution (but only to the extent that human rights were not violated). However, the Supreme Court erred when it connected animal rights with human rights in that decision, as a 2023 judgment of the Supreme Court held it to be wrongly decided.[34]

The scope of interpretation cannot be so wide that it exceeds constitutional limits. There have been attempts to acknowledge the rights of habitats in order to safeguard the rights of the creatures that reside in them; this, however, can be understood as an attempt to extend the jurisdiction of environmental law.

The Honourable Supreme Court of the United States holds that corporations are entitled to the same protection under the law as individuals under the 14th Amendment of the Constitution, which guarantees equal protection and prohibits a state from depriving any person of life, liberty, or property without due process of law.[35]

Even corporations are granted rights as artificial legal persons through court intervention, and it is conceivable that in the future humanoid robots will be granted similar legal standing. Robots have even been granted citizenship in Saudi Arabia; citizenship is a formal means of recognising one’s complete membership in a polity and entails acknowledgment on a national and international scale.[36]

B.    Consequence of Fundamental Rights violation

Articles 32 and 226 of the Indian Constitution provide remedies for violations of fundamental rights, namely petitions to the Supreme Court and the High Courts. These constitutional courts, before which matters of constitutional importance are heard, are known for imparting justice, being entrusted with the important duty of interpreting the Constitution and protecting fundamental rights. In Daryao v. State of UP,[37] the Supreme Court stated that it is the duty of the Supreme Court to interpret rights and provide for their remedy in a manner consonant with the constitutional scheme.

Without a suitable remedy, the fundamental rights chapter would be ineffective; Article 32 offers one, and a breach of a fundamental right is a sine qua non for invoking it. A reasonable individual could anticipate several violations of fundamental rights arising from the use of AI. In order to avoid or stop the negative effects of State machinery using AI without proper regulation, the government must take the necessary measures. Few nations have AI-related legislation; nonetheless, the author will examine AI-related regulations in India, the UK, and the USA.

Having seen that AI can be treated as a part or a subject of fundamental rights, we will now closely examine the implications of the incorporation of AI by large media houses.

C.    Incorporation of AI by Media Houses

It is evident that AI can be incorporated by those media houses that have large financial backing, since smoothly adopting any technology prominent in the market significantly affects a company’s economics. Due to these economic differences, large media houses gain an advantage in incorporating AI into their systems, an advantage with several effects that can be pointed out.

i. Targeted campaigning.

With the use of AI, information can be campaigned to people in a targeted way based on the media’s agenda. As various media houses compete to cover the news fastest, accuracy is severely affected. It is mere rhetoric to say that media houses are free from any bias; in reality, they too have certain biases, and they are running a business, not a charity. They will therefore promote the information that helps them monetise their work.

Often the role of political parties comes into play: they fund news agencies to promote their candidates and criticise their opponents. Suppose a person accessing the internet sees only reports of bad conduct by the opposition; will he ever vote in favour of the opposition leader? In this way, media houses incorporating AI can create targeted campaigning, and AI can be used to track the geographical location of users in order to target them with the content a news agency wants to show.[38]

ii. Negative effect to competition.

The balance of competition is very important in a market; every player should get an equal opportunity. But AI has a strong likelihood of creating an appreciable adverse effect on competition, because incorporating AI is a costly process that only top-tier companies can afford. It is a quintessentially disruptive technology: even the CEO of the company behind ChatGPT, one of the prominent generative AI systems in the market, agrees that the speed at which AI is taking over needs to slow, or it may cause market disruption in various ways.[39]

Small media houses can be severely affected, as their news will not receive as much algorithmic support as that of the large media houses that have incorporated AI. This can create a survival problem for small media houses and badly affect the entry of new players into the market.

iii. Spreading propaganda news.

Spreading propaganda with the use of AI is one of the easiest tasks: there is no need for a large number of people to apply their intelligence to manually share information with specific people in specific areas. As already pointed out, AI can conduct targeted campaigning, and such campaigning can amplify the spread of propaganda news. People will be manipulated en masse, their choices limited by the capture of the information they receive.[40]

iv. Biasness can be transmitted.

As already pointed out, the data on which AI is trained is very important. However good the data is, certain biases are always present in it. Several factors contribute to this bias: the background, education, social surroundings, and ethical understanding of the data trainer. All these factors contribute to an overall biased dataset.

It is not incorrect to say that too much reliance on AI can lead to bias. Suppose a news item spreading misleading information about some community comes before the public: a large mass of people who do not consider doing research themselves will start thinking about that community in certain ways, and the thoughts of those who already hold similar views of that community will be reinforced by confirmation bias. This is a serious problem, as it can lead to communal violence, which is very prevalent in India.[41]

v. Restrictions on freedom of speech and expression.

John Charney points out in his book The Illusion of the Free Press[42] that “one of the most influential justifications of the free press is the one that maintains that it is an instrument for the discovery of truth and the advancement of knowledge.” But the critique of the political economy of the press suggests that the economic structure of the press in capitalist societies makes it a servant to a series of economic, political, and financial interests that necessarily affect its independence and freedom.

Now, where capitalism already threatens the freedom of the media, AI may exacerbate these tendencies even further. With the advent of AI, media houses will seek more capital and, like any business, will grow by commodifying the truth and related aspects.

IV. ACTION TAKEN IN DIFFERENT JURISDICTIONS

A. USA

At least seventeen states introduced resolutions or bills pertaining to artificial intelligence in 2022. Task forces or commissions were established in Vermont, Illinois, and Colorado to investigate artificial intelligence, and a bill in Illinois amended the Artificial Intelligence Video Interview Act, originally enacted in 2021.[43] Washington funded a study group, convened by the office of the Chief Information Officer, to investigate how automated decision-making systems might be assessed and reviewed on a regular basis to ensure that they are fair, transparent, and accountable.

According to the Blueprint for an AI Bill of Rights, one of the largest threats to democracy in the modern era is the misuse of technology, data, and automated processes in ways that compromise the rights of the American public. These instruments are far too frequently used to restrict our options and keep us from obtaining essential resources or services. Systems intended to support patient care, both nationally and globally, have proven unsafe, ineffective, or biased.

It has been discovered that algorithms employed in credit and hiring decisions either include or encourage discriminatory and harmful biases. Unrestricted social media data collection has, frequently without the users’ knowledge or consent, compromised people’s opportunities, violated their privacy, or invasively monitored their activities.[44]

Harmful as these outcomes are, they are not inevitable. Automated systems have brought incredible benefits, from computers that forecast storms and help farmers cultivate food to algorithms that can identify ailments in patients. While data is helping to transform global businesses, these technologies now have a significant impact on important choices made in every industry. Because these technologies are fueled by American creativity, they have the potential to completely transform our society and raise everyone’s standard of living.[45]

President Biden has stressed that human rights and democratic ideals are the cornerstones of his administration, and they must not be abandoned in order to attain this significant goal. On his first day in office, the President signed an executive order directing all federal government branches to work towards eradicating injustice, incorporating fairness into decision-making procedures, and aggressively advancing civil rights, racial justice, and equal opportunity in the country.[46]

The right to privacy is one of the civil rights that the President has repeatedly called on people of conscience to defend since it is “the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country.” In his speech, the President presented a compelling case for the threats that modern democracy faces.[47]

The White House Office of Science and Technology Policy has suggested five principles that should direct the creation, advancement, and application of automated systems to safeguard American citizens in the era of artificial intelligence, in order to accomplish President Biden’s objective. The Blueprint for an AI Bill of Rights presents a comprehensive framework for a societal structure that effectively protects individuals from potential hazards while ensuring that technology is employed in a manner that aligns with our fundamental moral values.

The publication titled “From Principles to Practice” serves as a thorough guide that provides detailed instructions on how to put these principles into practice and policy within the technical design process, and it aligns with and reinforces the framework. The framework reflects the collective experience of the American populace, drawing upon the perspectives of scientists, activists, journalists, technologists, and decision-makers. These principles are especially relevant where automated technologies have the capacity to significantly influence the rights, opportunities, or access to necessities of the general public.[48]

B. UNITED KINGDOM

On July 18, 2022, the UK government presented new proposals to control the application of artificial intelligence (AI) technology while promoting creativity, boosting public trust, and safeguarding data. The recommendations adopt a more risk-based and less centralised strategy than the EU’s planned AI Act. The proposals coincide with the introduction of the Data Protection and Digital Information Bill to Parliament, which includes measures to use AI responsibly while cutting compliance costs for companies to boost the economy.

One of the biggest challenges in regulating the use of AI is unquestionably the rate of technological advancement in this area. The current strategy of the UK government is to list the essential characteristics of artificial intelligence (AI) while letting regulators give more detailed definitions for different sectors. The government believes it should regulate the application of AI rather than the technology itself. The aim is to make the UK’s AI laws future-proof without impeding innovation.

Two main qualities have been emphasised by the proposals:

  1. “Adaptiveness” describes how AI systems often function in part by following rules that were learned through training data rather than being explicitly programmed with human intention;
  2. “Autonomy” describes how AI systems demonstrate a high degree of autonomy and frequently automate challenging cognitive tasks.

Systems that meet the aforementioned requirements include natural language processing and self-driving car control systems.

The government has developed six cross-sectoral principles that will be relevant to all parties involved in the AI lifecycle. Existing regulators, such as the Competition and Markets Authority or Ofcom, will then interpret and implement these principles.

Regulators will be pushed to use “lighter touch” measures such as education, voluntary initiatives, or the establishment of sandbox settings prior to the widespread adoption of AI technology.

The guidelines are meant to guide authorities and help them implement a reasonable and risk-based strategy for regulating AI[49]:

1. Ensure the safe application of AI.

2. Verify that AI is safe to use and performs as intended.

3. Ensure that AI is understandable and sufficiently transparent.

4. Incorporate fairness concerns into AI.

5. Specify who is legally responsible for AI governance.

6. Clearly state the paths for contestability or remedy.

The risk-based, more adaptable approach of the UK government’s proposal may benefit business and innovation alike. Granting jurisdiction to several regulators across diverse sectors and industries may, however, lead to unanticipated complexities, increasing the difficulty for businesses of navigating the regulatory landscape. International businesses usually operate across many industries and are both suppliers and consumers of AI services. To be sure, collaboration and a clear assignment of responsibilities among the many authorities will be necessary for this plan to succeed.[50]

A number of regulatory agencies have already drawn attention to particular areas of concern in AI. Earlier in July, the UK Information Commissioner highlighted the risks of using AI to screen candidates for financial aid or employment, and revealed that investigations into this kind of use have begun. The Commissioner plans to publish revised guidelines for AI developers on how to ensure that AI systems handle individuals’ data responsibly and fairly. In a similar vein, the Financial Conduct Authority underlined that one of its goals is to examine bias and ethics in AI and algorithms.

The current proposal also differs from the proposed EU AI Act, which includes more detailed rules for all parties involved in the AI lifecycle. Although the UK government appears keen to support AI-driven businesses, it is vital to keep in mind that the EU AI Act applies extraterritorially and remains pertinent to UK-based enterprises.

The plan has been made available in tandem with a call for evidence, which will conclude on September 26, 2022. The purpose of the call is to solicit feedback on the proposal’s implementation from academics, corporate executives, and civil society organizations that concentrate on technology. A white paper is expected in late 2022.


C. INDIA

There are presently no regulations specifically governing big data, AI, or machine learning in India. The IT Act of 2000 and its rules provide a few minimal standards. In order to develop an AI policy framework, MeitY (the Ministry of Electronics and Information Technology), the executive agency for AI-related activities, has established four committees.

Seven guiding principles for responsible AI have been developed by the NITI Aayog: transparency, accountability, equality, inclusivity and non-discrimination, security and privacy, dependability, and the preservation and growth of positive human values. By increasing adoption and trust, these principles are expected to protect the public interest and foster innovation.[51]

The think tank has also collaborated with several prominent figures in AI to implement AI projects in critical sectors such as healthcare, agriculture, and education. The Department of Telecommunications has also established a committee for AI standardisation in order to develop the Indian AI stack and various interface standards.

Some of these protections, however, cannot be enforced without the courts. The Constitution requires the Supreme Court and other courts to defend fundamental rights, including the right to privacy. In addition to its domestic legislation, India is a party to the Global Partnership on Artificial Intelligence, which promotes the ethical development and implementation of AI while taking into account human rights, inclusion, diversity, and economic development.

V. CONCLUSION AND SUGGESTIONS

Artificial intelligence is still in its early stages of development, and its full potential is not yet understood. It is nevertheless critical to regulate it early and adopt preventative measures. Although legislation takes time to pass, AI is expanding at an exceptionally rapid pace. This research has identified only a small number of situations in which AI could have a dangerous impact on the media, but many other problems may not appear until the technology is widely used.

The legislation surveyed for the US and the UK makes it clear that many developed countries have already started legislating on artificial intelligence, including guidelines for regulating its use. This legislation shows that nations prefer to enforce rules harmoniously, without impeding innovation: they have recognised the need both to develop AI and to regulate its use. AI carries undeniable risks, so it should be deployed only where it can be used safely.

The goal of legislation should be to keep AI out of the wrong hands. Even though nuclear weapons are deadly, nations have united to sign non-proliferation treaties; in the same vein, international cooperation is required to develop a single, comprehensive law on AI. Alongside this, media houses using AI should be regulated so that they cannot misuse it, with strict punishments or penalties for any violation. Media houses cannot be punished merely for using the technology; punishment should attach to its unfair use.

Since AI is an issue that affects all countries, a model law should be created internationally by the various nations. Deployed in conjunction with appropriate legislation, AI can accelerate national development as never before. Nothing is inherently safe or dangerous; it all depends on the technology’s user. The development of artificial intelligence holds the potential to usher in a new era of technological upheaval. We witnessed a fundamental shift in the way we use technology during COVID-19, and people will increasingly adopt AI as it enters more facets of daily life.

It is now evident from the researcher’s research question that AI has the capacity to infringe fundamental rights. Nonetheless, laws and guidelines can prevent this. AI robots and humans can coexist peacefully without breaking any laws, and if applied correctly, AI can even be used to build a safer society. In conclusion, lawmakers will find it difficult to safeguard rights under artificial intelligence (AI) law without compromising innovation.

Furthermore, freedom of speech and expression requires an even wider reading in the era of technology. Just as the right to the Internet has been recognized, AI may also need recognition as fundamental in the internet era. Content that appears at the top of results is what people consume most; regulating the content of online websites is challenging, and the law needs to cater to that.

Therefore, we can say that AI will affect freedom of speech and expression, but this can be addressed with the right approach of balancing interests proportionally. Secondly, intelligent machines could be held responsible for fundamental rights violations, but the capacity to operate fully autonomously is yet to be achieved, and in the meantime the user can be made vicariously liable. Finally, media houses incorporating AI seriously affect small media houses, but this can be governed as already discussed in chapter III.



[1] Sylvia M. Chan-Olmsted, A Review of Artificial Intelligence Adoptions in the Media Industry, IJMM 193-215, https://www.tandfonline.com/doi/full/10.1080/14241277.2019.1695619?scroll=top&needAccess=true (last visited Oct. 26, 2023).

[2] United Nations Office of Counter-Terrorism, Countering Terrorism Online with AI, countering-terrorism-online-with-ai-uncct-unicri-report-web.pdf (last visited Nov. 1, 2023) [hereinafter UN AI Report].

[3]Id. at 31.

[4] Cathy O’Neil, Weapons of Math Destruction, (Penguin Publishing 2017).

[5] Matthew Scherer, AI Risks and Challenges, 29 Harv. L. Rev. 354, 361 (2016).

[6] Id. at 358.

[7] Corinne Cath, Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges, The Royal Society, https://doi.org/10.1098/rsta.2018.0080 (last visited Nov. 2, 2023).

[8] J. Sinha, Effect of AI on Fundamental Rights Enshrined in the Constitution, Vol. 2, NUJHR 125, 136 (2022).

[9] F.A. Raso et al., AI and Human Rights: Opportunities & Risks, SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259344 (last visited Nov. 2, 2023).

[10] Ind. Const. Art. 21.

[11] Maneka Gandhi v. UOI, 1978 SCR (2) 621.

[12] UN AI Report, supra note 2, at 32.

[13] J. Sinha, supra note 8, at 131.

[14] Id.

[15] Kharak Singh v. State of Uttar Pradesh, 1964 SCR (1) 332.

[16] R Rajagopal v. State of Tamil Nadu, 1994 SCC (6) 632.

[17] People Union of Civil Liberties v. UOI, (1997) 1 SCC 301.

[18] Roe v. Wade, 410 U.S. 113 (1973).

[19] UN AI Report, supra note 2, at 34.

[20] John Feast, AI and Bias, Harvard Business Review (last visited Nov. 3, 2023) [hereinafter John Feast].

[21] Ind. Const. Art. 14.

[22] John Feast, supra note 20, at 4.

[23] Id. at 6.

[24] Bachan Singh v. State of Punjab, (1982) 3 SCC 24.

[25] Harvard Business Review, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai (last visited Oct 29, 2023).

[26] Romesh Thappar v. State of Madras, AIR 1950 SC 124.

[27]Big Commerce, https://www.bigcommerce.com/ecommerce-answers/what-content-curation/ (last visited Oct 30, 2023).

[28] The Guardian, https://www.theguardian.com/books/2019/oct/04/shoshana-zuboff-surveillance-capitalism-assault-human-automomy-digital-privacy (last visited Oct 30, 2023).

[29] University of Michigan-Dearborn, https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained (last visited Oct 31, 2023).

[30]Global Conference for Free Media, https://www.osce.org/files/f/documents/4/5/472488.pdf (last visited Oct 31, 2023).

[31] Shamdasani v. Central Bank of India, AIR 1952 SC 59.

[32] Invest India, https://www.investindia.gov.in/team-india-blogs/artificial-intelligence-powering-indias-growth-story (last visited Nov. 5, 2023).

[33] Munn v. Illinois, 94 U.S. 113 (1876).

[34] Animal Welfare Board of India v. A. Nagaraja, Civil Appeal No. 5387 of 2014.

[35] J. Sinha, supra note 8, at 136.

[36] Id. at 136.

[37] Daryao v. State of UP, AIR 1961 SC 1457.

[38] IBM, https://www.ibm.com/watson-advertising/thought-leadership/how-ai-is-changing-advertising (last visited Nov 1, 2023).

[39]Business Insider, https://www.businessinsider.in/policy/economy/news/chatgpt-creator-says-ai-advocates-are-fooling-themselves-if-they-think-the-technology-is-only-going-to-be-good-for-workers-jobs-are-definitely-going-to-go-away/articleshow/102116739.cms (last visited Nov 1, 2023).

[40]MIT Technology Review, https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/#:~:text=Governments%20and%20political%20actors%20around,automatically%20censor%20critical%20online%20content. (last visited Nov 1, 2023).

[41] The Wire, https://thewire.in/communalism/riots-religious-processions-ram-navami (last visited Nov 2, 2023).

[42] John Charney, The Illusion of the Free Press (Bloomsbury 2018).

[43] Artificial Intelligence Video Interview Act, 2021, No. 41, Acts of State Legislature, 2021 (Illinois).

[44]The White House, https://www.whitehouse.gov/ostp/ai-bill-of-rights/ (last visited Oct 31, 2023).

[45] Id.

[46] The Guardian, https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses (last visited Oct 31, 2023).

[47]NPR, https://www.npr.org/2022/06/24/1107360710/biden-suprume-court-overturn-roe-v-wade-abortion (last visited Oct. 27, 2023).

[48] Id. at 3.

[49]European Parliament, https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence (last visited Nov. 2, 2023).  

[50] Id.

[51]Vidushi Marda, AI Policy in India: A Framework for Engaging the Limits of Data-Driven Decision-Making, 1 PTS 1, 14-23 (2018).
