November 15, 2023
Good morning. In today’s either/view, we discuss whether global governments can succeed in regulating AI. We also look at the regularisation of contracted teachers in Assam, among other news.
📰 FEATURE STORY
Can global governments succeed in regulating AI?
Since the release of ChatGPT, the general public has become aware of the vast opportunities opened up by Artificial Intelligence (AI). From cutting manpower costs in some fields to assisting workers in others, AI has rightfully been hailed as a revolutionary technology. However, the AI boom has also brought with it several threats – from speculative sentient AIs to massive unemployment, and even harmful AI-generated pornography and deepfakes.
Naturally, governments worldwide are racing to rein in this new technology. As October ended, US President Joe Biden signed an Executive Order that invokes war-time rules to deal with AI, and the UK concluded its AI Safety Summit as November rolled in. With the EU finalising its AI Act and China and India making their own moves, the global concern is evident. But can something as static as government machinery control something as dynamic as Artificial Intelligence?
Context
History was made on November 1 and 2 as global powers led by the British government sat down to discuss what to do about a technology that may change the world. The AI Safety Summit was held at Bletchley Park, the wartime codebreaking centre where Alan Turing cracked Nazi Germany’s Enigma code, saving countless lives and probably changing the course of the Second World War.
Poetically, we may be staring at harm of a similar scale today, even if not equivalent to WWII, from a new technology. “If this technology goes wrong, it can go quite wrong… doing significant harm to the world”, said Sam Altman, CEO of OpenAI, the company behind ChatGPT and other AI software, while testifying before the US Congress.
Given the potential scale of this disruption, as well as issues such as privacy, bias, and even national security, it’s reasonable for lawmakers to take notice of this emergent technology. The Center for AI Safety insists that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. More radical advocates have called for a blanket pause on all AI development until a framework to regulate it is in place.
While the UK was concluding its AI Safety Summit, the EU was finalising its AI Act, aiming to become the global leader in reining in the dangers of generative AI. On October 30, President Biden signed an Executive Order that outdoes even the EU’s in stringency. Covering just about every potential risk one could imagine, from everyday fraud to the development of weapons of mass destruction, the order develops standards for AI safety and trustworthiness. It establishes a cybersecurity program to develop AI tools and requires companies developing AI systems to share their safety test results with the federal government – by invoking the Defense Production Act, a 1950 law passed in the context of the Korean War.
In devoting so much effort to AI, global governments are rightly determined to avoid repeating their disastrous failure to meaningfully regulate social media in the 2010s. With governments sitting on the sidelines, social media evolved from a seemingly innocent tool for sharing personal updates among friends into a large-scale instrument for revolutions like the Arab Spring and for mass-scale election psyops, complete with a privacy-invasive business model. But while the effort is certainly there, doubts persist about governments’ ability to keep up. AI is a fast-evolving field with new tools emerging every day, so laws regulating it cannot afford to be as static as laws generally are.
VIEW: Evolving Governments
Tech-policy experts have long maintained that the success of governments’ efforts to regulate AI will depend on their ability to stay focused on concrete problems (like deepfakes) instead of getting swept up in mass hysteria over hypothetical risks like the arrival of robot overlords. Fortunately, recent government steps have moved in this direction.
Biden’s executive order is not overly caught up in the hypothetical. Most of what it contains is a framework for future action. Some of its recommendations are urgent and important, such as creating standards for watermarking photos, videos, audio, and text created with AI.
Beyond these traditional methods of regulation, government agencies are also trying novel approaches to this novel technology. Regulators like the US Food and Drug Administration have completed a groundbreaking pilot program exploring new ways to certify AI systems. Departing from its approach so far, the new effort centres on certifying the processes surrounding software development rather than certifying each system itself.
There has also been new research into techniques like ‘Constitutional AI’, in which one AI system polices the content of another. This is proving to be a more reliable and scalable way to oversee generative AI systems than human-oriented approaches. A technology-focused approach to managing AI risks also avoids many of the pitfalls of other approaches, since the oversight occurs before the AI system delivers its content to the user.
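For the technically curious, here is a minimal sketch in Python of the gating pattern described above: a second ‘overseer’ model reviews a generator’s draft before anything reaches the user. Both model calls are stubbed and the function names and toy review rubric are our own assumptions for illustration – this is not an actual Constitutional AI implementation.

```python
# Purely illustrative sketch of "one AI system policing another".
# Both model calls are stubbed; the names and the keyword rubric are
# assumptions, not any real vendor's or regulator's API.

def generate_draft(prompt: str) -> str:
    # Stand-in for a generative model producing a candidate answer.
    return f"Draft answer to: {prompt}"

def overseer_approves(draft: str) -> bool:
    # Stand-in for a second model that checks the draft against a
    # written set of principles (a "constitution") before release.
    banned_phrases = ["weapon design", "personal data"]  # toy rubric
    return not any(phrase in draft.lower() for phrase in banned_phrases)

def answer(prompt: str) -> str:
    # The key property: oversight happens *before* the user sees anything.
    draft = generate_draft(prompt)
    if overseer_approves(draft):
        return draft
    return "[Response withheld by the automated overseer]"

print(answer("How do AI watermarks work?"))
```

In a real deployment, both functions would be calls to separate AI models, and the ‘constitution’ would be a detailed list of written principles rather than a keyword list.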
Apart from these promising technology-oriented approaches, there is always scope for creating new regulatory agencies modelled on the SEC, SEBI, or the FDA, but staffed with experts on artificial intelligence to regulate new technologies.
COUNTERVIEW: Premature Execution
In the public imagination, AI threats look like HAL 9000 from Kubrick’s ‘2001: A Space Odyssey’ or Skynet from the ‘Terminator’ films. This exaggerated fear is perhaps what is motivating premature government action that fails to address actual, concrete harms. Unlike climate science, which works with a relatively limited number of parameters, technological prediction is far less tractable, and overeager regulators can end up fixating shortsightedly on the wrong target. Today’s policymakers are preoccupied with large language models like ChatGPT, which could be the future of everything or, given their gross unreliability stemming from chronic falsification and fabrication, merely an overture.
Preemptive regulation can also erect barriers to entry for companies trying to break into the industry. Established players, with millions of dollars to spend on lawyers and experts, can find ways of abiding by a complex set of new regulations, but smaller start-ups typically don’t have the same resources. This risks the monopolisation of an industry about which little is yet known.
While it is recognised that generative AI systems pose collective risks, policymakers don’t yet have a consensus on how to define the harm they cause. Harms arising from a single specific output generally have minimal repercussions and are easy to overlook; the most worrisome harms from generative AI systems are those that accrete over time. One erroneous fact may be insignificant, but at a societal level, inaccurate information can proliferate and generate serious consequences. Who is harmed when an AI system answers with inaccurate information, and when does that harm occur – when the information is generated, when it is acted upon, or when the system was trained? These questions are difficult to answer. How such harms are to be penalised and compensated is likewise unexplored legal territory with no precedent.
Many of the dangers of generative AI also relate to speech, a notoriously difficult issue to regulate in the ‘liberal West’. Authoritarian countries like China, with their blanket bans, have had better luck in this respect.
Researchers have already found the current and proposed legislation wanting. For instance, a Stanford analysis found that not one of the 10 most popular generative AI models would comply with the EU AI Act’s requirements. Historically, efforts to make tech-savvy laws with ‘thinking outside the regulatory box’ solutions have been failing since the Obama administration.
Governments are being tempted into techno-solutionism: governing risky technologies with yet more technology and innovation. Such an approach is only likely to generate even more risks.
Reference Links:
- In Regulating A.I., We May Be Doing Too Much. And Too Little. – The New York Times
- 3 Obstacles to Regulating Generative AI – Harvard Business Review
- Regulating AI Will Be Essential. And Complicated. – Bloomberg
- The world wants to regulate AI, but does not quite know how – The Economist
What is your opinion on this?
(Only subscribers can participate in polls)
a) Global governments can succeed in regulating AI.
b) Global governments cannot succeed in regulating AI.
🕵️ BEYOND ECHO CHAMBERS
For the Right:
Jawaharlal Nehru and the Dissenting Congressman
For the Left:
In Chhattisgarh’s mining belt, Congress’s ‘doublespeak’ leaves Adivasi voters disgruntled
🇮🇳 STATE OF THE STATES
Pollution levels spike (Delhi) – With unfavourable meteorological conditions, air pollution levels worsened in the nation’s capital. The city’s air quality index (AQI) stood at 363, in the ‘very poor’ category. Despite some relief from the rain, pollution levels rose as most people flouted the ban on firecrackers on Deepavali night. According to Swiss company IQAir, Delhi was the most polluted city in the world on Monday.
Why it matters: The pollution control body said nearly all stations in the city recorded a spike. Stubble-burning incidents are increasing again, which has only made matters worse. The government has already introduced a ban on construction work and the entry of polluting trucks.
Firecracker sales decline (Tamil Nadu) – The state recorded total firecracker sales of ₹5,100 crore this year, ₹900 crore less than last year. The main reasons appear to be a hike in prices and the ongoing ban on barium nitrate, which is used in the manufacturing process. Even the alternative chemicals used in making firecrackers saw a 7-8% price hike. Following the Supreme Court’s orders, manufacturers made crackers that emit 25-30% less particulate matter.
Why it matters: For the 2023 festival, Sivakasi, the manufacturing hub for firecrackers, made ₹6,000 crore worth of crackers. Many sellers across the state also saw delays in getting their temporary firecracker licences. Interestingly, small Japanese manufacturers were happy with the sales and planned to align their production with the rainy season.
Truckers’ strike impact (Odisha) – With the truckers’ strike entering its fourth day, several power plants in the state are staring at a coal shortage. Among those affected are the Jindal India Thermal Power Plant, Jindal Steel and Power, and TATA Steel Meramandali. The Truck Owners Association of Talcher called the strike on November 9, and it has since hit the transportation of coal.
Why it matters: The approximately 8,000 truckers on strike have been demanding a fare hike, saying the 2019 fare revisions left them with financial losses amid rising input costs. If the strike continues, the power plants fear they could be forced to shut down. Some plants may be able to avert that thanks to rail connectivity for coal deliveries.
Congress’ warning to rebels (Rajasthan) – Sukhjinder Singh Randhawa, the All India Congress Committee’s state in-charge, sent a letter warning senior party leaders against undermining the party’s authorised candidates. The letter, also sent to party workers, asked them to support the official candidates and warned that action would be taken against those who don’t comply.
Why it matters: Party sources said 15-20 members have rebelled against the party and are a hindrance to the authorised candidates ahead of the upcoming Assembly elections. With less than two weeks until polls open, the state Congress suffered a further blow as former Congress members joined the BJP.
Regularising contracted teachers (Assam) – The state government will regularise the jobs of 40,000 teachers working in government schools on a contractual basis. Chief Minister Himanta Biswa Sarma announced the plan during his Independence Day speech, and the whole process is to be completed by March 2024. Those who join the regular posts will be treated as fresh appointments.
Why it matters: Though the 40,000 teachers work on a contract basis, they were already being paid regular salaries on the same scale as permanent teachers. Assam has over 9,500 contractual teachers in primary schools and over 4,500 teaching classes 9 and 10. Consistent with their treatment as fresh appointments, the teachers’ earlier length of service won’t be counted.
🔢 KEY NUMBER
39 – The Coal Ministry will launch its 8th round of commercial coal mine auctions, offering 39 mines across Bihar, Jharkhand, Maharashtra, Odisha, and West Bengal.