Opinion
If AI just cut out the middle moron, would that be so bad?
Parnell Palme McGuinness
Columnist and communications adviser

There was a lot of artificial intelligence about this past week. Some of it the subject of the roundtable; some of it sitting at the roundtable. All of it massively hyped. Depending on who you believe, AI will lead to widespread unemployment or a workers’ paradise of four-day weeks.
There was a lot of artificial intelligence about this week. Credit: Greg Straight
These wildly different visions suggest that assessments of the implications of AI are based on something less than a deep understanding of the technology, its potential and the history of humanity in interacting with new stuff. In the immediate term, the greatest threat posed by AI is the Dunning-Kruger effect.
This cognitive bias, described and named by psychologists David Dunning and Justin Kruger around the turn of the century, observes that people with limited competence in a particular domain are prone to overestimating their own understanding and abilities. It proposes that the reason for this is that they’re unable to appreciate the extent of their own ignorance – they’re not smart enough or skilled enough to recognise what good looks like. As Dunning put it, “not only does their incomplete and misguided knowledge lead them to make mistakes, but those exact same deficits also prevent them from recognising when they are making mistakes and other people are choosing more wisely”.
AI has layers and layers of Dunning-Kruger traps built in. The first is that the machine itself suffers from a mechanical type of cognitive bias. Large language models – the type of generative AI that is increasingly used by individuals at home and at work (we’re not talking about models designed for a specific scientific purpose) – are especially slick predictive text models. Trained on vast scrapes of the web, they predict the most likely next word in a sequence and string those words together in response to a query.
If there’s a lot of biased or incorrect information on a topic, this significantly colours the results. If there’s not enough information (and the machine has not been carefully instructed), then AI extrapolates – that is, it just makes shit up. If it detects that its user wants an answer that reflects their own views, it’ll filter its inputs to deliver just that. And then it presents what it has created with supreme confidence. It doesn’t know that it doesn’t know. If generative AI were a person, it would be psychology’s perfect case study of the Dunning-Kruger effect.
But we’re not here to beat up on machines. The robot is just a robot; the special dumb comes from its master. AI delivers a very convincing answer based on generalist information available; it’s the human Dunning-Kruger sufferer who slips into the trap of thinking the machine answer makes him look smart.
This is where the Dunning-Kruger effect will meet AI and become an economic force. The user who doesn’t know enough about a subject to recognise the deficits in the AI answers passes the low-grade information up the chain to a client or superior who also lacks the knowledge and expertise to question the product. A cretinous ripple expands undetected into every corner of an organisation and leaks out from there into everyday life. The AI is fed its own manure and becomes worse. Experts refer to the process as model collapse.
Illustration by Joe Benke
There will be job losses, because when incompetents rely on AI to do their work for them, eventually the clients or superiors they’re serving will cut out the middle moron and go straight to the machine. Companies are cutting roles that can be convincingly emulated by AI because humans have not been value-adding to them. The question is just whether managers are themselves competent enough to recognise which roles these are, and to restructure their processes and workforce to provide value-add before their output is compromised.
To date, it has been so-called low-skilled jobs that have been most at threat from automation. But AI is changing the very nature of the skills that businesses require. A decade ago, workers who lost their jobs to increasing automation were told to “learn to code”. Now, coding itself is being replaced by AI. “Learn to care” is the mantra of this wave of social change.
Care isn’t just a gentle touch in health or aged care. It comes from emotional insight. A call-centre worker with no emotional intelligence can be classed as unskilled. There’s no question that a machine can answer the phone, direct queries and perform simple information sharing functions such as reading out your bank balance. But when the query is more complex or emotionally loaded, AI struggles. EQ, the emotional version of IQ, is a skill that can make an enormous difference in customer satisfaction and retention.
A more highly skilled job that I’ve recently seen performed by a human and a machine is quantitative research. A good machine model can do more interviews more quickly than a human interviewer, and the depth is much of a muchness. But a skilled interviewer with a thorough understanding of the objectives and a higher emotional attunement to the way people skirt around big topics could achieve greater depth and uncover richer insights. That requires both human IQ and EQ, which the machine doesn’t have. A human with these qualities is still needed to tune the AI to deliver its best outputs.
Which is why the idea of a four-day week based on AI efficiency is as utopian as the fear of massive job losses is catastrophist. The Dunning-Kruger effect, turbocharged by generative tools, will ruthlessly expose enterprises that mistake algorithmic speed for depth. Jobs and companies built on AI’s cold efficiency and unfounded self-confidence will soon be exposed.
The roundtable revealed a discussion on AI still stuck on threats and oblivious to skills. In the end, the danger isn’t that AI will outsmart us; it’s that humans will be too dumb to use it well.
Parnell Palme McGuinness is managing director at campaigns firm Agenda C. She has done work for the Liberal Party and the German Greens.