Wednesday, February 21, 2024
Editorial

Guardrails for the master

On Saturday, November 25, the Chief Justice of India (CJI) DY Chandrachud reflected on the ethical conundrum surrounding Artificial Intelligence (AI). The CJI served a reminder that when it comes to AI, we are indeed navigating uncharted territories that demand both philosophical reflection and practical consideration. “We are confronted with fundamental questions about the ethical treatment of these technologies”, he said. Though made in the context of the complex interplay between AI and personhood, his observations nonetheless resonated instantly in light of the mayhem at OpenAI ~ the tech company that produced ChatGPT. The world has watched, with astonishment and bemusement in equal measure, the sacking and subsequent rehiring of Sam Altman as its CEO. The court of public opinion, as ever, was wildly divided. Some pointed to the incompetence of the board; others, to a clash of monstrous egos. But somewhere in between, the farce felt like a reflection of the contradictions at the heart of the tech industry. Hence the reference, at the outset, to the CJI’s remarks.

The contradiction, it has been pointed out, is between the self-serving myth of tech entrepreneurs as rebel “disruptors” and their control of a multibillion-dollar monster of an industry through which they shape all our lives today. And then there is the tension between the view of AI as a mechanism for transforming human life and the fear that it may be an existential threat to humanity. Elon Musk, one of the Silicon Valley heavyweights who founded OpenAI in 2015, has warned that “with artificial intelligence, we are summoning the demon”. But then, Musk’s own AI venture, xAI, has now unveiled its chatbot Grok, a rival to ChatGPT. No point scrambling to shut the stable door after the horse has bolted, then. There are also those who argue that focusing on apocalyptic scenarios ~ AI refusing to shut down when instructed, or even posing an existential threat to humans ~ overlooks the more pressing ethical challenges.

Social values are always contested, even more so in a society as markedly divided as today’s. It is therefore no surprise that our relationship to technology is itself a matter of debate. For some, the need to curtail hatred or to protect people from online harm outweighs any rights to free speech or privacy. The contradiction (read turmoil) is simply inevitable. Then there is the question of disinformation. Few people would deny that disinformation is a problem and will become even more so, raising difficult questions about democracy and trust. The question of how we deal with it remains, though, highly contentious ~ especially as many attempts to regulate disinformation result in even greater powers being bestowed on tech companies to police the public.

Another area of concern is algorithmic bias. The reason algorithms are prone to bias, especially against minorities, is precisely that they mirror human values. AI programmes are trained on data from the human world, one suffused with discriminatory practices and ideas. These become embedded in AI software too, whether in the criminal justice system or healthcare, facial recognition or recruitment. This too is a concern the CJI had raised in July this year. Back then, he had cited the example of AI recruitment tools deployed by firms that favoured men over women because they had been trained on records of successful employees who happened to be predominantly male. So the problem really is not about machines exercising power over humans.
When stripped down, the issue appears to be about having to live in a society (or societies) in which power is exercised by a few to the detriment of the many, with technology providing a means of consolidating that power. That is the danger. Any tool, however useful, can also cause harm. But tools rarely do so by themselves; the harm is caused when human beings exploit them. And the discussion around AI, as with any other tool, should start there.
