In R. K. Narayan’s novel The Vendor of Sweets, the protagonist’s son, who has recently returned from America to his home town of Malgudi (a fictitious place), wants to set up a factory to manufacture storytelling machines. The novel was published in the 1960s, before the advent of personal computers, so the premise must have seemed preposterous at the time. But five decades later, you can ask a machine to write a novel and it will do so at the press of a button.
In fact, we have been hearing about AI for quite some time, but the arrival of GPT-4 in 2023 has taken many people by surprise. I first heard of it from my 10-year-old son, who had been using the tool for weeks. And it seems like only yesterday that someone at my workplace mentioned a new search engine called Google and suggested I try it: you typed a query into a field on an otherwise blank page, and it all looked quite elegant. A few years later, in a similar vein, I learnt that there was a video-sharing website called YouTube. Then along came the iPhone, which seduced us with a version of the internet on a mobile phone that, according to its promotional ad, wasn’t ‘watered down’.
I must confess that I listen to talks on YouTube now and again while I am shaving or taking a bath. However, I sometimes regret the day that YouTube came into being because adults and children alike now spend many hours each day watching videos.
As a child, I watched with wonder an imported programme shown on our local TV channel, Srinagar Doordarshan, about the tasks computers would be performing in our homes in the distant future. I had yet to see a computer with my own eyes. True, the University of Kashmir had obtained one, and rumour had it in our town that you had to take off your shoes to enter the room it was kept in, as if you were entering the inner sanctum – the holy of holies – of a shrine. I often wondered what computations such a machine performed in that place of higher learning.
The word ‘Google’, a noun when it was launched, soon changed into a verb. A few months later, an acquaintance sent me an article he’d written – or so he claimed. I copy-pasted a line from it into Google’s search field and thus discovered that this person was a plagiarist. I was working in a stationery shop in London when talk of ‘a paperless future’ was all the rage. Some soothsayers were even predicting that our world would come to an end at the turn of the millennium, as computers wouldn’t be able to cope with the change of date. Mercifully, that turned out to be a false alarm. Could it be, then, that predictions of disaster resulting from an AI takeover have likewise been greatly exaggerated?
Grim predictions were once made that, in the not-so-distant future, everyone would read the news online and newspapers would disappear. It reminded me of a childhood friend who told me that no one would go to tailors any more, as people would buy only readymade clothes. I thought of all the tailoring shops and cloth merchants in our town, my own father among them. Similarly, the arrival of ebooks sounded a death knell for printed books a decade ago, only for the tide to turn again in favour of the printed word. Printing technology has survived for over 500 years, ever since Gutenberg changed the course of history by printing a 42-line Bible with movable type on a mechanical press, and it has endured into the information age.
We have recently been warned that AI poses an existential threat to mankind bigger than climate change. Yet mankind has faced many existential threats before; the most recent, the Covid-19 pandemic, appeared only a couple of years ago and is already generally regarded as a thing of the past. The advent of AI has also been dubbed ‘The Oppenheimer Moment’ – the moment when the scientist who created the atomic bomb came to believe that he had blood on his hands.
Frankenstein’s monster, let’s remember, is the creation of a novelist, not an actual mad scientist. Mary Shelley’s immortal story is based on the Greek myth of Prometheus, who tricks the greatest of the gods – Zeus. And Jorge Luis Borges would have loved the idea of an infinite library in the form of AI – a library that would encompass all other libraries.
It gives me reason to be optimistic about the future of AI when I hear that some of those who work in the tech industry in Silicon Valley believe in Stoic philosophy. It is a basic tenet of Stoicism that ‘until we begin to go without them, we fail to realise how unnecessary many things are. We’ve been using them not because we needed them but because we had them’.
The biblical saying that there is nothing new under the sun rings true as a reminder of how little the human species has changed, despite so many planetary upheavals. Our desires and aversions aren’t really all that different from those of our long-gone ancestors and, like them, we continue to fear mortal combat with creatures mightier than ourselves.
In the present age of AI, machines are taught logic. The ancient Greeks called reason logos. Human beings are capable of reason but we don’t always act reasonably. Machines cannot differentiate between good and bad. In fact, as Shakespeare observed, there is no good or bad but thinking makes it so. According to Noam Chomsky, ‘think’ is the wrong word to use with respect to machines. His view should be respected as he is a linguist by training and understands that this word has much philosophical weight attached to it, as witness, for instance, the famous dictum ‘I think, therefore I am’.
The most innovative machine of my childhood was a Casio calculator. I found it intriguing that such a small hand-held device could carry out additions and multiplications in a fraction of a second. An electronic calculator was a much sought-after gadget because you couldn’t obtain one in Srinagar; your only chance was to receive it as a gift from a relative or family friend who had been to Delhi or Bombay. I was deeply impressed by the Japanese who made these nifty little things, thereby rendering the jobs of accountants and bookkeepers so much easier. A solar-powered model was a particular boon because you never needed to change its batteries.

AI is predicted to make a great leap soon and acquire what is known as general intelligence. I have only ever heard this phrase in the context of complaint, since my wife often claims that I haven’t got an iota of it. I mistakenly believed it to be an archaism until it gained currency this year because of the debate about AI.
I didn’t know much about literary theory either and learnt about pathetic fallacy from my son. He asked ChatGPT some time ago to write an essay about his dad and it replied that I wasn’t known in literary circles, which I thought was a spot-on response.
Language is a powerful tool. It is only through the use of language that arguments are lost and won before a judge in a court of justice. But languages aren’t that old compared with the hundreds of thousands of years that Homo sapiens has inhabited planet Earth. Writing came much later than the spoken word; everything before written records – which first appeared only a few thousand years ago – counts as prehistory. Language is arguably the most important human invention. It forms the basis of every civilisation and yet every child learns it anew. Human knowledge is passed from one generation to another through the medium of language.
Learning English as a second language has always been fraught for me. I have had endless trouble, for instance, with the correct usage of the definite article. My long-suffering editor suggested a couple of years ago that I use an app called Grammarly, which is now powered by AI. In the past, my conversations with this fastidious gent (my editor, that is, not Grammarly) revolved around topics like the difference between a hyphen and a dash or the function of the intractable apostrophe. Our respective typos keep both of us humble. I still feel embarrassed when I think of a typo in the first edition of my first book – the word ‘rare’ mistakenly typed instead of ‘rear’ – which changed my intended meaning drastically and caused me to imagine exacting readers grinding their teeth on spotting the mistake.
I have never been a fan of auto-correcting software, as I was quite comfortable with manually shifting back the carriage of my old Olivetti typewriter and using a correction pen to cover my typing errors, then typing the correct versions over them. But editing entails more than correcting typos and grammar. Genuine artistry is involved in repositioning a word or phrase here and there in a way that can significantly improve a piece of writing. After more than twenty years of collaboration, my editor is on the point of retirement, which makes me feel utterly bereft. I can’t imagine feeling the same when Mr Grammarly is replaced by a more efficient model.
As an adolescent, I was puzzled to learn how computers used simple binary numbers – zeros and ones – to perform highly complex tasks. This was around the time when half a dozen institutes opened in Srinagar teaching various computer languages. I thought it rather misleading to use the word ‘language’ with respect to computers, as I associated language strictly with poetics. Czeslaw Milosz says that language is ‘the only homeland’. Languages, in fact, are always evolving; a language that doesn’t alter is a dead one.
The 12th-century poet Farid-ud-din Attar composed a long narrative poem alluding to Solomon and David, who, in Sufi doctrine, are said to have understood the language of the birds. As a child, I heard of a man who had mastered the various calls made by deer. One day, while practising these sounds in a forest, he was shot dead by a hunter who mistook him for prey. Now that we can teach machines human language, we are certainly creating opportunities for mistaken identities of many kinds. Neither seeing nor hearing is believing any more, as AI can create hyper-realistic digital clones, thus facilitating the perpetration of hoaxes and scams. AI is understandably raising some ethical questions, but its champions remain cautiously optimistic. Identity theft could become commonplace when anyone can easily be impersonated using AI.
George Orwell, in his novel Nineteen Eighty-Four – the year in which Steve Jobs would introduce the first Macintosh, named after a variety of apple – warned us of a dystopian future in which machines control human beings. There have been mind-blowing breakthroughs in computer technology since then, and many key players in the research and development of information technology have recently had second thoughts, calling for a moratorium on AI. The rise of social media proved to be a false dawn, and our world certainly isn’t a better place today because of it.
Machines will be trained to spy on people whose ideas conflict with their government’s. Search engines are now using AI. Microsoft tried to upend Google by building AI into its search engine, Bing. Its advantage didn’t last long, however, because Google soon unveiled its own experimental AI service, known as Bard.
AI was inevitable, but it doesn’t have to be threatening. This new technology holds great promise in many fields – in medicine, for instance, it may hasten cures for diseases that are currently incurable. No progress has ever been made without change, and only Luddites would want to destroy machines in a vain effort to halt the development of the new technology. Perhaps writers of the future, equipped with apps like Grammarly, will supersede their erstwhile editors. However, self-editing is usually inexact. Doctors do not self-diagnose but consult a fellow doctor when they are poorly. There are myriad reasons for a headache, a doctor friend once told me, but only one of them is a brain tumour. A layperson who resorts to self-diagnosis may worry himself or herself to death over a slight headache. Better to leave such matters to accredited professionals.
What the future holds is, right now, anyone’s guess. But if the past is any guide, I believe we have nothing to fear from this new technology.