If years of reading draft opinion articles (op-eds) were to give rise to a belief system, one of its dogmas might be that all ideas and examples must come in threes. A stronger Lebanese economy requires better governance, reduced conflict and international partnerships. To fight climate change, we must involve governments, businesses and civil society. In the interest of a bit of iconoclasm, I’ll avoid giving a third example.
Communications consultants, whose fingerprints are often all over such drafts, call this the “rule of three”. They point to famous phrases that have shaped our world, like Julius Caesar’s “veni, vidi, vici” (“I came, I saw, I conquered”), or the US Declaration of Independence’s “life, liberty and the pursuit of happiness”.
So ingrained is this convention that there is a whole taxonomy for articulating ideas in threes. The three-part list itself is called a triad. If the words in all three elements are equal in length or belong to the same part of speech, the list is a tricolon. If they all express parts of a common idea, it is a hendiatris.
We often don’t even notice the rule at work. A couple of years ago, I gave a talk on op-ed writing to a global PR agency’s UAE office. In a lame attempt at humour, I gave three reasons they should try to avoid the rule of three. I got a couple of courtesy laughs, but most of the audience studiously copied the reasons down in their notebooks.
Some think the appeal of the rule of three is natural – that something within our psyche makes it attractive. If so, then maybe, had I been Caesar’s speechwriter or Thomas Jefferson’s copy editor, history today would be less inspiring.
Would today’s op-ed drafts, at least, be more interesting? There’s a theory that says no, probably not. The French philosopher René Girard, who died 10 years ago this week, argued that just about everything we want – and by extension, most of what we do – is the result of mimesis, or imitating the desires and behaviours of others. Usually, the model we imitate is someone smarter, more attractive, richer or more powerful than us. There are very few original thoughts or innate desires in the mimetic world. We don’t emulate Caesar’s pattern of speech because it was good style, but because it was Caesar’s. Had he come, seen, conquered and also done a fourth thing, today we’d have a rule of four.
This sort of thinking shrinks the gap between us and the large language models (LLMs) that drive AI chatbots. Like them, we spend our lives training on the outputs of others. And we spit out some reformulation of information from this data set that is intended to sound “original”. If all of this fills you with a sense of ennui, try to remember that we are talking about French philosophy.
I don’t subscribe – at least, not entirely – to a mimetic worldview. If I didn’t believe people were capable of having original thoughts, I doubt I could really do my job. Yet the rise of LLMs is undoubtedly making the world a more mimetic place.
My colleagues and I on The National’s Opinion Desk now see it almost every day. Draft submissions that repeat the same sentence formulation – “X isn’t just Y—it’s Z” – in successive paragraphs have become so common they make me look back on the rule of three with deep nostalgia, like a screentime-weary millennial turning on the record player.
Linguists have a term for this sentence formulation, too, for those who like to get really technical. It’s called a contrastive correlative structure with a negated restrictive. It has become an infamous staple of ChatGPT-generated writing. People generally show a bias towards this type of phrasing when they want to emphasise a point – and there’s a lot of cognitive psychology literature out there on negation bias that explores why. But because we find it so compelling, and because LLMs are trained on us, they pump it out in spades – at which point it starts to lose its appeal and even becomes annoying. It isn’t just a pastiche of human writing—it’s a caricature (sorry).
I recently scrolled upon a LinkedIn post in which a copywriter railed against editors for overcorrecting in their attempts to root out and reject AI-generated drafts. Using “it’s not X, it’s Y” and the rule of three, they suggested, is just normal human writing, and we’ve all “bullied writers into reverse evolution”.
Of course these rhetorical choices are human – after all, as I said, my efforts to bully people who overuse the rule of three long predate ChatGPT. And the LLMs’ overuse of them is, if anything, a testament to how excessively human they are. There’s no problem with writers relying on rhetorical devices to cultivate readers’ attention, as they always have. But when these devices proliferate, whether through human or machine imitation, they become less valuable. Call it style hyperinflation.
Philosophically, I’m increasingly uncertain it makes much sense to consider a given op-ed’s origin – AI or human – an important marker of originality or quality. The bigger problem, for editors at least, is what happens when the public sphere reaches a critical mass of content that carries human bylines but was written by AI – when most of the writing out there is AI-generated or AI-assisted, and new generations of AI start training on that stuff.
Some AI researchers worry about this potential inbreeding of AI data sets. A widely cited paper last year by a team from British and Canadian universities called it “model collapse”: each generation of models trained on the output of the last loses a little more of the original data’s variety, until AI’s outputs become useless – a pastiche of a pastiche of a pastiche, a hall of funhouse mirrors.
Of course, in such a scenario, incomprehensible op-ed submissions will be the least of our worries.
Demis Hassabis, the Nobel-winning chief executive of Google DeepMind, told an audience in California earlier this year that he doesn’t consider model collapse to be a real threat:
“We know there’s a lot of worries about this so-called model collapse. I mean, video is just one thing, but in any modality, text as well … I don’t actually see that as a big problem. Eventually, we may have video models that are so good you could put them back into the loop as a source of additional data … synthetic data, it’s called.”
Why do I not feel completely reassured?
Last weekend, at a summit convening global AI thought leaders (and some people like me) in Abu Dhabi, I listened to a few of the panellists talk about something called “agentic workflows”. “My company’s AI agents will execute transactions with your company’s AI agents,” a speaker explained, “without much human intervention.” An increasingly sophisticated and efficient corporate economy demands such things.
I suspect an increasingly sophisticated and efficient attention economy will demand it, too. Already, we know that many people use AI tools to read summarised versions of op-eds, and that some publishers are embedding such tools into their news websites. The agentic workflow in this dystopian attention economy would involve the full version of an op-ed being written primarily by AI and read primarily by AI.
We often think the goal of writing op-eds is the edification of others – “let me provide thought leadership” – and maybe raising our profile in the process. If that is the goal, then the incentives drive you to produce maximum content quickly for maximum distribution, and having ChatGPT helps.
Instead, it might be worth taking a writer-centric view, in which the greatest value of a human writing an op-ed accrues to that human themselves. It’s a conclusion I reached only after putting in the hours required to think and write my way there. Op-ed writing is often exhausting. It takes a lot of wrestling with competing thoughts – others’ and your own.
Even if you believe the end product is entirely mimetic, going through the process yourself instead of outsourcing it to AI does make you intellectually stronger. And this, by the way, is important to audiences, too; the evident struggle it took to create something is usually a part of why others appreciate it. That is often what defines authenticity, even in the absence of originality. Choosing the harder path when the easier one is readily available is probably the most human decision you can make.