In April 2024, I addressed the risks and benefits of utilising artificial intelligence tools to support your business’s efforts to manage corporate tax risks. This applies equally to value-added tax (VAT) and other legislation-driven regimes.
We now know more about the impact our growing dependence on these solutions is having on people. This should cause some concern to organisations with separate internal functions as well as those dependent on external consultants.
I want to look at the cognitive damage being done, and will begin with an everyday illustrative example.
I have been driving for years. Beyond that skill, I know how to refill a car with petrol and unlock the bonnet, though I cannot guarantee I could open it on every car. Changing wheels? Reasonably competent.
When there was an attempt in the UAE to have customers pump their own petrol, there were cases of people not knowing how to do it. Before you laugh at that memory, I confess to being a Luddite, just with different examples. How can this be?
As human beings we can get lazy and forget how things are done. We become dependent, losing the competence in that field of operation that allows a person to be objective. More importantly, we dull our investigative spirit.
A study released in June 2025 by academics at the Massachusetts Institute of Technology focused on the cognitive ability of three groups of people. One was not permitted to use the internet, another was, and the last was allowed to use any large language model (AI tool) they desired.
Brain connectivity was analysed, and those using AI had, on aggregate, the weakest results. Worse, some struggled to quote their own output when pressed. This makes sense, as it was not their work.
Curiosity, coupled with cautionary experience, has been a core element of what has driven humanity forward for centuries. Are we replacing the bureaucratic buffers born of the memory of errors with an idiocracy that leans on a faceless system?
Risk-averse employees will naturally gravitate towards these solutions. Why wouldn’t they? Management can hardly fire AI, to which all blame will be directed when things go awry.
Ask yourself this: how intelligent is your regulatory regime function? Asked a question, do they respond with a deconstruction of the issue raised? Do they look around the query, teasing out aspects that were not initially considered? By way of a positive feedback loop in their response, is the questioner subconsciously taught to better consider and frame future questions?
Or is their answer delivered with a definitive affirmation, in a form that leaves little doubt that its foundation is rock solid? It might have some references to the law – and watch for this – which are particularly detailed in laying out the article, clause and sub-clauses. How a solicitor responds formally in writing compared with a tax-advising accountant is akin to two different languages.
If you are looking for a cautionary test, the one above would be the first red flag I would notice. It does not mean the advice is wrong, but it suggests the source sits outside the function and gives no guarantee that the provider truly understood it. Finally, it strongly hints that the issue was not deconstructed and properly considered.
Maybe it is the right time for a second set of eyes to gauge internal competence? An external provider who has had these concerns highlighted at the time of engagement will likely want to give comfort by spending more time than they otherwise might. Time pressure and the available budget may not always allow for this.
Let us look over the fence at where the UAE's Federal Tax Authority has been developing and deploying AI. Last month, the authority announced what it called “five key AI-driven tax initiatives”. The most interesting, in relation to this article, is its creation of an internal FTAgpt. Aimed at the authority's own employees, this software is to support their ability to respond to external queries.
As anyone who has ever used one of these engines will tell you, learning how to ask questions in the correct manner is imperative to getting a useful result.
The easiest way to understand this is a simple maths example: 2 * 3 + 4 = 10, but 2 * (3 + 4) = 14. Adding round brackets changes which part of the expression is resolved first. Likewise with AI, the order and phrasing of a question are imperative.
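The same precedence behaviour can be checked in any programming language; a minimal Python sketch of the arithmetic above:

```python
# Multiplication binds tighter than addition, so without brackets
# the product is evaluated first.
no_brackets = 2 * 3 + 4      # read as (2 * 3) + 4
with_brackets = 2 * (3 + 4)  # brackets force the addition first

print(no_brackets)    # 10
print(with_brackets)  # 14
```

One pair of brackets reorders the same four symbols into a different answer; a question put to an AI engine is just as sensitive to how it is framed.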
Were the FTA to offer some guidance, or better still some detailed query-building information, to entities with questions for it, this would help speed up the process, delivering more effective results and welcome outcomes for all.