
Catherine Deveney: Be afraid of what humans will do with AI, not sentient robots

Controlled with properly policed ethical standards, AI technology could be – and, in many cases, IS – a positive force.

Actor Tom Hanks recently said he could feature in films long after his death thanks to AI technology (Image: Shutterstock)

Actor Tom Hanks’s observation this week that artificial intelligence (AI) creates the potential for him to appear in films long after he is dead is interesting, until applied to other fields – like politics – at which point it becomes deeply disturbing.

The House of Lords – or God’s Waiting Room as the less sensitive amongst us might call it – is already home to some political zombies, but imagine a Donald Trump whose diatribes actually go on forever, as opposed to merely feeling like they do. Or a tousle-haired Boris Johnson popping up in 30 years to appeal to the ingenues who missed his first appearance. A resurrected Adolf Hitler does not bear thinking about.

Then there’s the opportunity for conspiracy theorists to emerge with “proof” of their madness, shouting: “We told you Elvis wasn’t dead!” Even in a year when simple arithmetic would tell you he’d need to be 125 to still be alive.

Therein lies the REAL danger of AI: the inability to determine the reality of what you apparently “see”. The recent AI-generated images of Donald Trump supposedly being arrested, or the Pope looking like a white Michelin man, wearing a Balenciaga puffer coat, are evidence that, sometimes, your eyes really do deceive you.

When Dr Geoffrey Hinton, “the godfather of AI”, resigned from Google recently, saying he wanted to be free to talk about the “existential risk” he now realised he’d been partly responsible for creating, it was tempting to think of Dr Frankenstein and his monster, with AI as the out-of-control, robotic threat.

But, as with Frankenstein, the reality is that it is the creator, not the creation, that is out of control. Greed, malice, megalomania… The dangerous motivation is the man’s, not the machine’s. AI is not an overpowering force; it is the mirror held up to who and what we are. That’s what makes it frightening.

AI is biased because programmers are biased

At a recent XpoNorth conference for the creative industries, organised by Highlands and Islands Enterprise, I heard an interesting story about AI that puts all this into perspective. AI, it was said, is being used positively in the field of health, which is something of an irony when you think of chatbots being developed to support people in situations that so desperately need person-centred approaches, like mental health. Tell me how you’re feeling and I’ll give you some preprogrammed reassurance.

Nonetheless, a diagnostic tool, we were told, has been developed with a checkbox list of symptoms to help people diagnose heart problems. Males and females were asked the same questions about symptoms such as arm pain, chest tightness and feelings of anxiety.

There has been increasing concern in recent months over how AI like ChatGPT could change the way we work (Image: Ascannio/Shutterstock)

The only difference was the diagnosis. If you were male and ticked all the symptoms plus the box saying yes to feelings of anxiety, you were advised to seek help for a heart attack. If you were a woman, Dr Chatbot told you that, in all probability, you were suffering a panic attack. Mad old Mrs Rochester is still screaming loudly in that attic.

What should we say? Dr Chatbot, you are a misogynistic old pig? Well, of course he is, if his programmer is. AI is not a sentient being. It is simply a set of algorithms: specific rules fed into a computer to give particular answers.

ChatGPT is not a robotic “person”. It’s a computer programme. AI will, therefore, reflect everything its programmer feeds it – including their biases and prejudices. If we want AI to correctly diagnose heart attacks in women, we might need to inform programmers that women are not hysterical nutjobs.
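To make that point concrete, here is a deliberately crude, purely hypothetical sketch in Python (not the actual tool described above, and not any real product) of how a programmer’s assumption becomes the “diagnosis” the user sees: identical symptoms go in, different answers come out, simply because that is what the rules say.

```python
# A toy illustration of programmer bias baked into fixed rules.
# Hypothetical example only: not based on any real diagnostic tool.

def toy_symptom_checker(arm_pain: bool, chest_tightness: bool,
                        anxiety: bool, sex: str) -> str:
    """Return a 'diagnosis' from hard-coded rules written by the programmer."""
    if arm_pain and chest_tightness and anxiety:
        # The biased branch: identical symptoms, different answer by sex.
        if sex == "male":
            return "Possible heart attack - seek urgent help"
        return "Probable panic attack - try to relax"
    return "No clear match - consult a doctor"

# Identical inputs, different outputs, purely because of the rule above.
print(toy_symptom_checker(True, True, True, sex="male"))    # heart attack advice
print(toy_symptom_checker(True, True, True, sex="female"))  # panic attack advice
```

The sketch is trivial on purpose: the “intelligence” never decided anything. A human wrote the branch, and the machine simply repeats it.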

Risks are real, but AI can be a positive force

It would be a foolish person who underestimated the risks of AI, particularly when they are highlighted by those involved in its creation. Those risks are real. It is not just AI’s ability to gobble up jobs, thanks to its capacity to complete mundane administrative work with lightning speed and accuracy, that is concerning.

Creative industry workers, too, swallow hard when they see AI-generated images that have “won” art competitions, or read reviews of novels “written” by AI. But AI can never emulate humans completely, for one reason: it cannot “feel” real emotion.

If we are frightened of artificial intelligence, we should identify what we are frightened of

The true risks of AI are not robot-inspired rebellion, but human-inspired fraud, invasions of data privacy, and – perhaps most chilling of all – manipulation of the masses: the ability to control thinking and seize power with reality-altering “facts” and images. Donald Trump is already halfway there.

The recent – valid – outcries about the dangers of AI rely too heavily on science fantasy while misunderstanding its real threat. The idea of a “monster” being created, of sentient robots taking over our world in some dystopian nightmare, is misplaced.

Controlled with properly policed ethical standards, AI could be – and, in many cases, IS – a positive force. If we are frightened of artificial intelligence, we should identify what we are frightened of. The truth is that the biggest thing we have to fear with AI is ourselves.


Catherine Deveney is an award-winning investigative journalist, novelist and television presenter, and Scottish Newspaper Columnist of the Year 2022
