We are letting AI relieve us of the burden of thought – and with it our capacity to love
Albert Masker • July 31, 2025
It had been almost a year since I transitioned to a career in tech – and I was nervous. The software company I was working for was putting out a new product, and they’d put me in charge of the launch narrative.
The assignment was simple enough: come up with three concepts for what the “story” behind the campaign could be. Then, after presenting the concepts, make a recommendation of which one to go with and why. The Chief Marketing Officer (CMO) would make the final call from there.
I knew which one would work best. Not only did it communicate the value of the product clearly, it did so in a way that was bold, maybe even shocking by the low-energy, cookie-cutter standards of software marketing.
Then came the big day. The presentation went smoothly. I laid out the finer points of the story I wanted us to tell – why it would resonate with our target audience, the ways it deftly conveyed our strengths versus our competitors, and how it would convince customers that we’d make their lives easier.
Confident that he’d go with my recommendation, I stopped talking and watched the CMO. He took a breath and proceeded to stay silent just long enough to summon the first bead of sweat on my forehead before weighing in.
“I agree that the idea you have here does everything you say it does. I told you to be bold, and you were,” he began. “That said, we can’t go with this option.”
He saw the confusion on my face, and before I could even ask why, he started his explanation by asking a question: “What is the purpose of good marketing content?”
Yes, he conceded, good content did everything I said it would do. But there was a higher goal that all these secondary elements rolled up into, he said, before explaining what it was:
“The purpose of effective marketing content,” he went on, “is to relieve the audience of the burden of thought.”
In short: whatever the merits of the pitch under discussion, the CMO judged that it placed too heavy a cognitive load on our target audience.
The path to buying needed to be simpler, smoother, maybe even more enervating. Both appalled and impressed by his aphorism, I came away enlightened. That one moment taught me more about how marketing works than the entirety of my MBA program.
Now, years later, it’s a memory I reflect on anew as AI and large language model (LLM) tools – a type of AI that excels at processing, understanding and generating human language – transform how tech teams work.
“Creatives” in the corporate world have taken my CMO’s wisdom and turned it inwards. With each prompt we feed into ChatGPT, my colleagues and I willingly relieve ourselves of the burden of thought – specifically of the burden of creativity – a little bit more with each passing day.
AI may be turning marketers into mindless drones, but increasing numbers of ordinary people are also willing partners in society’s own enfeeblement.
Long before anyone became the first to dim his prefrontal cortex by prompting an LLM to churn out an AI-authored essay, letter or job application, I noticed in the corporate sphere what George Orwell noticed in the bad writing of his day: “No one seems able to think of turns of speech that are not hackneyed.”
The marketing dialect of hackneyed speech has its own quirks. One of them is turning nouns into verbs and verbs into nouns. I’ll never forget the first of the ten-thousand-plus times I heard a colleague say, “We need to action on this plan ASAP.” Excuse me? Do you mean “take action on”? Well, yes – but saying “action on” sounds more expert and even more eloquent to the warped marketing mind.
Another common instance: “We need to cut spend on this initiative.” Shouldn’t that be “cut spending,” you ask? Yes, but we marketers live in a fallen world of fallen-off gerund endings.
The cumulative result of these junk-loads of jargon? Having to deal with the following type of word salad: “We need to action on optimising spend and empower cross-functional colleagues with a single source of truth” – a sentence bursting with the sort of non-thought that could easily be uttered in any of the several meetings I’ll attend next week.
Orwell saw it coming eighty years ago: “The writer either has a meaning and cannot express it, or he inadvertently says something else, or he is almost indifferent as to whether his words mean anything or not” – observations that apply to the vast majority of tech “content” available for “consumption” today.
He also said: “This mixture of vagueness and sheer incompetence is the most marked characteristic of modern English prose.”
And now, in the age of AI, the likes of OpenAI’s ChatGPT and all its competitors are automating this vagueness and sheer incompetence at scale.
Readers may be familiar with the work of psychiatrist and philosopher Iain McGilchrist, who’s done more than anyone to illuminate the ways in which left-brain dominance and reductionism are damaging our world and lives.
I think of McGilchrist when sitting in quarterly planning meetings, where the idea of human-created writing and images is increasingly associated with “falling behind” and not being “forward-looking” enough. We are told from on high that we need to be more “aggressive” in leveraging LLMs; why not prompt Gemini, Claude or another AI tool to create content instead of doing so manually?
After all, we are told, the resulting end product will “drive more engagement” and be “less static”. Thus the last islands of right-brain creative opportunities in the corporate world drown in a sea of cliché-coated, left-brain-prompted slop.
Actual writing – the human craft of giving form to thought through language – yields rewards both mental and spiritual that the latest “prompt engineering” using an LLM never will.
“What we stand to lose is not just a skill but a mode of being: the pleasure of invention, the felt life of the mind at work,” a Yale creative writing professor recently wrote in the New York Times about the ubiquity of this tech and its effect on our lives.
From the tech-bro trenches, it’s hard not to see that loss as a fait accompli in a corporate world rooted in reductionism. But when it comes to our own personal lives, there is hope.
I think of the great psychoanalyst Erich Fromm finding a glimmer of such hope in Orwell’s penning of 1984 – namely that in laying out the novel’s dystopian vision so clearly and disturbingly, Orwell could help the world avoid bringing it about.
That hope, however, “can be realised only by recognising … the danger of a society of automatons who have lost every trace of individuality, of love, of critical thought.”
There’s no bigger threat today to the life of the mind and the love that grounds it than the alternative world of non-thought that these AI tools are creating.
Photo: A visitor takes a picture with his mobile phone of an image designed with artificial intelligence by Berlin-based digital creator Julian van Dieken, inspired by Johannes Vermeer's painting 'Girl with a Pearl Earring' at the Mauritshuis Museum, The Hague, Netherlands, 9 March 2023. (Photo by SIMON WOHLFAHRT/AFP via Getty Images.)
Albert Masker is the pseudonym of a soul-crushed tech marketer