“Artificial intelligence, a force of unprecedented magnitude, has captured the attention of humanity, surpassing the impact of fire or electricity,” remarked Sundar Pichai, CEO of Google[1], in an interview with Forbes. AI has lately emerged as a boon, igniting a global debate on whether it should become an indispensable facet of progress. The renowned physicist Stephen Hawking, however, painted a bleaker picture, expressing apprehension that AI could outpace human evolution and threaten our very existence. In an interview with the BBC[2], he cautioned that once AI attains full autonomy, it could rapidly self-improve, leaving human capabilities far behind. In this initial instalment, this article endeavours to illuminate the precise contours of AI, delving into its definition while elucidating the far-reaching consequences that accompany the integration of this remarkable technology.

Introduction:

What is Artificial Intelligence? 

According to Merriam-Webster’s dictionary[3], it is a branch of computer science dedicated to simulating intelligent behavior in machines. However, this definition fails to capture the essence of AI comprehensively.

In simpler terms, AI refers to the creation of computers that can think and learn like humans. These machines perform tasks that typically demand human intelligence, such as image recognition, speech comprehension, decision-making, and gaming prowess. AI systems learn from vast amounts of data and experience, gradually enhancing their competence over time. However, they still rely on algorithms to accomplish these tasks, lacking the spontaneous, general intelligence with which humans perform them[4].

AI has found widespread application in industries such as banking, finance, insurance, logistics, and drilling. The future projections for AI are staggering, with estimates suggesting that it will contribute $15.7 trillion to the global economy by 2030[5]. Moreover, AI has transcended conventional boundaries, delving into the realm of creativity with innovations like DALL·E 2 and Midjourney, platforms that can generate strikingly realistic art from just a few prompts or sentences. While this breakthrough has sent shivers down the spines of many artists and digital creators, who fear the loss of their livelihoods, others have embraced AI's ascent. In India, AI has gained remarkable popularity, with 72% of respondents favouring its integration, compared to just 63% in the USA. Notably, China emerges as the country with the firmest belief that the advantages of AI outweigh its disadvantages, with an overwhelming 78% support[6].

Implications associated with AI Integration:

Given the diverse range of opinions on AI, it becomes crucial to address the current challenges posed by this technology. A report by NITI Aayog[7] highlights the problems scientists face in advancing AI technology.

Firstly, bias has been identified as a significant concern, often resulting in unfair treatment on the basis of religion, race, caste, gender, or genetic diversity. Because AI systems are designed and trained by humans using real-world data, human bias risks creeping into automated decision-making. The widespread deployment of AI amplifies the impact of such unfair biases, eroding trust and potentially disrupting social order. Notable incidents[8], such as Microsoft's ill-fated chatbot Tay, which quickly turned racist and sexist, underscore the urgent need to address this issue. The development of the social humanoid robot Sophia by Hanson Robotics has raised similar concerns.

Secondly, the Black Box Problem[9] poses a significant challenge. During training, an AI system is fed input features along with annotated labels that specify the “correct” output, and it learns the relationship between those features and labels. As models grow increasingly complex, however, that learned relationship becomes harder to inspect. Consequently, comprehending the decision-making process of an AI system, and predicting its outputs, becomes a daunting task.
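The training relationship described above can be sketched in a few lines of Python. The perceptron below, with its entirely hypothetical "loan approval" data, is a deliberately tiny illustration: even here, what the system has "learned" is just a handful of numeric weights, and the difficulty of reading meaning out of such parameters is what scales into the black box problem.

```python
# Minimal sketch of supervised training: input features are paired with
# annotated labels (the "correct" outputs), and the model adjusts its
# weights until its predictions match the labels.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights mapping feature vectors to binary labels."""
    n = len(samples[0])
    w = [0.0] * n  # one weight per feature
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical data: features = [income, existing_debt], label = approve?
X = [[5, 1], [6, 0], [1, 4], [2, 5]]
y = [1, 1, 0, 0]  # the annotated "correct" outputs
w, b = train_perceptron(X, y)

# The learned "knowledge" is nothing but these opaque numbers; with
# millions of parameters, explaining a decision becomes intractable.
print("learned weights:", w, "bias:", b)
```

With two features the weights can still be eyeballed (income pushes towards approval, debt against it); in a modern model with billions of parameters, no such reading is possible.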

Thirdly, the fear of the unknown looms large, as the rapid proliferation of AI technology increases the risk of its abuse. The pace of technological development often outstrips the formulation of adequate legal provisions to govern its usage. This was exemplified by the Facebook-Cambridge Analytica scandal[10], in which data harvested from up to 87 million Facebook user profiles, including psychological data on American voters, was shared with Cambridge Analytica, a political consulting firm engaged by the Trump campaign during the 2016 elections[11]. The aftermath of the scandal sparked intense debate on privacy in applications like Facebook and Tinder, highlighting how slowly legal frameworks adapt to emerging technologies. Legislative bodies, often dominated by members aged 60 and above, struggle to comprehend and regulate this rapidly evolving field.

Fourthly, data privacy and protection have been thrust into the limelight following the Facebook-Cambridge Analytica scandal. Countries worldwide are scrutinising companies to enhance their privacy protection frameworks or, at the very least, provide transparent information to users about the usage of their data.

Fifthly, the reliability of AI systems, especially when deployed at a national level, remains a concern[12]. Various measures exist to evaluate their effectiveness and performance, including accuracy, precision, and recall (also known as sensitivity), but a high score on one measure does not guarantee excellence across all of them. This discrepancy can have severe consequences, potentially excluding citizens from essential services guaranteed by the state.
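A toy calculation, with entirely hypothetical numbers framed as a welfare-eligibility screen, shows how these measures can diverge: on imbalanced data a system can post high accuracy while still failing most of the very citizens it is meant to serve.

```python
# Standard evaluation measures, computed from the four outcome counts:
# true positives (tp), true negatives (tn), false positives (fp),
# false negatives (fn).

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):  # also called sensitivity
    return tp / (tp + fn)

# Hypothetical: 1000 applicants, 50 genuinely eligible. The system
# approves 20 of them (tp), misses 30 (fn), and wrongly approves 10 (fp).
tp, fn, fp = 20, 30, 10
tn = 1000 - tp - fn - fp  # 940 applicants correctly rejected

print(f"accuracy:  {accuracy(tp, tn, fp, fn):.2f}")  # 0.96 -- looks excellent
print(f"precision: {precision(tp, fp):.2f}")         # 0.67
print(f"recall:    {recall(tp, fn):.2f}")            # 0.40 -- 60% of eligible people excluded
```

Because 95% of applicants are ineligible, simply rejecting everyone would already score 95% accuracy; only recall reveals that most eligible citizens are being turned away.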

Sixthly, the scope of legal liability in accidents involving self-driving vehicles presents a complex challenge[13]. Traditionally, liability has been based on the level of responsibility imposed on individuals. However, attributing responsibility to machines, which lack personal autonomy and were historically considered mere tools operated by humans, poses a formidable legal quandary.

Seventhly, the emergence of generative AI platforms like DALL·E 2 and Midjourney has not only threatened artists' livelihoods but also raised intellectual property concerns[14]. Legal battles have already erupted as artists unite to sue generative AI platforms, alleging that their original works were used without authorisation to train AI models, which then generate similar, unlicensed creations[15]. If courts deem these AI-generated works unauthorised and derivative, substantial penalties for infringement may follow.

Conclusion:

The integration of Artificial Intelligence (AI) into various aspects of society presents a myriad of challenges. The rapid proliferation of AI technology outpaces the formulation of appropriate legal frameworks, raising fears of abuse and privacy breaches, as the Facebook-Cambridge Analytica scandal exemplified. Reliability issues in AI systems, liability questions in accidents involving self-driving vehicles, and intellectual property concerns arising from generative AI platforms further complicate the landscape. These issues underline the urgency of effective regulation and guidelines to address the risks of AI integration, and governments and legislative bodies face the challenge of comprehending and regulating a rapidly evolving field. As AI continues to advance and permeate industries, it is essential to strike a balance between innovation and responsible implementation. By addressing the identified challenges and formulating comprehensive regulations, society can harness the transformative potential of AI while mitigating its negative impacts.

 


[1]Toews, Rob. “12 Thought-Provoking Quotes About Artificial Intelligence.” Forbes, 28 Mar. 2020, (www.forbes.com/sites/robtoews/2020/03/28/12-thought-provoking-quotes-about-artificial-intelligence)

[2] Balaganur, Sameer. “Ten Famous Quotes About Artificial Intelligence.” Analytics India Magazine, 12 Apr. 2020, (analyticsindiamag.com/ten-famous-quotes-about-artificial-intelligence) 

[3] “Artificial Intelligence.” Merriam-Webster Dictionary, 7 May 2023, (www.merriam-webster.com/dictionary/artificial+intelligence)

[4] Gupta, Pooja. “Artificial Intelligence Law in India.” Laws Study, 7 July 2022, (lawsstudy.com/artificial-intelligence-law-in-india.)

[5] PwC. “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” PwC, Mar. 2017, (www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf.)

[6] “More Indians Are Positive About AI Products Than Americans: Report.” The Wire, (thewire.in/tech/more-indians-are-positive-about-ai-products-than-americans-report.)

[7] National Institution for Transforming India. “Responsible AI.” Niti Aayog, India, Government of India, Feb. 2021, (www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf.)

[8] Conrad, Luke. “When AI Goes Wrong: What Happens When Machines Go Rogue? | Tbtech.” Tbtech | The Latest on Tech News & Insights, 24 July 2019, (tbtech.co/innovativetech/artificial-intelligence/what-happens-when-ai-goes-wrong.)

[9] Bathaee, Yavar. “The Artificial Intelligence Black Box and the Failure of Intent and Causation.” Harvard Journal of Law & Technology, vol. 31, no. 2, 2018, (jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf.)

[10] “The Facebook and Cambridge Analytica Scandal, Explained With a Simple Diagram.” Vox, 23 Mar. 2018, (www.vox.com/policy-and-politics/2018/3/23/17151916/facebook-cambridge-analytica-trump-diagram.)

[11] “Cambridge Analytica Shutting Down: The Firm’s Many Scandals, Explained.” Vox, 21 Mar. 2018, (www.vox.com/policy-and-politics/2018/3/21/17141428/cambridge-analytica-trump-russia-mueller.)

[12] National Institution for Transforming India. “Responsible AI.” Niti Aayog, India, Government of India, Feb. 2021, (www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf.)

[13] National Institution for Transforming India. “Responsible AI.” Niti Aayog, India, Government of India, Feb. 2021, (www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf.)

[14] “Generative AI Has an Intellectual Property Problem.” Harvard Business Review, 7 Apr. 2023, (hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem.)

[15] Andersen et al. v. Stability AI Ltd. et al., US District Court for the Northern District of California, 13 Jan. 2023, (dockets.justia.com/docket/california/candce/3:2023cv00201/407208.)

Jayanti Pahwa