Doomscrolling through social media for AI news today is like a Rorschach inkblot test: whatever you are looking for, you will find it.
If you think AI is a massive waste of money, you will find that angle well covered. If you are invested in the industry and concerned about whether AI is a bubble, there are many breathless takes on the topic. If you are looking for evidence that AI will end the world, ‘breaking news’ that affirms that point of view abounds.
Amid the recursive, agentic AI-generated summaries of hallucination-ridden summarized slop, I had the good fortune to chat with some nice folk from Praxis who were doing great work with students about the urgent need for critical thinking skills.
That chat inspired this post.
The following is a synthesis of the top 10 things I would share with someone wanting to critically think through how AI is impacting our world.
1. Generative AI with discriminative humans is the new state of the world.
Outside of data science and AI groups, it may surprise some that until recently, most machine learning models were “discriminative” in nature, doing things such as anomaly detection, data analysis, and classification (with famous examples of AI models in the early 2010s focused on differentiating between cats and dogs).
Data analysts and data scientists then used those outputs to generate compelling narratives (a skill known as ‘data storytelling’), delivered through attractive reports and presentations.
Today, this dynamic has reversed — generative AI can produce those polished reports and presentations, but humans need to bring critical thinking — shaping the direction of content generation, discerning quality and providing true context (beyond the clever misnomer of the ‘context window’ in AI apps).
Put simply:
The requirements of a good piece of work have not changed — but the roles are reversing.
If you take nothing else away from this post, take away this: we are moving away from work characterised by discriminative AI and generative humans, towards a world of generative AI that needs discriminative humans.
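For readers who like to see the distinction rather than read about it, here is a minimal sketch in plain Python. The numbers and the cat-vs-dog rule are entirely made up for illustration; the point is only the shape of the two roles: a discriminative model maps an input to a label, while a generative model produces new samples.

```python
import random

random.seed(42)  # make the toy example reproducible

# Discriminative: given features, decide a label (a decision boundary).
def classify(weight_kg: float) -> str:
    # Toy rule in the spirit of the early-2010s cat-vs-dog classifiers:
    # call pets under 8 kg cats, heavier ones dogs.
    return "cat" if weight_kg < 8.0 else "dog"

# Generative: produce a plausible new sample from a learned distribution.
def generate_cat_weight() -> float:
    # Toy Gaussian "fitted" to imaginary observed cat weights.
    return random.gauss(4.5, 1.0)

print(classify(3.9))    # a discriminative judgement -> "cat"
print(classify(30.0))   # -> "dog"
print(f"{generate_cat_weight():.1f} kg")  # a freshly generated sample
```

The asymmetry in the code mirrors the asymmetry in the essay: the classifier judges what already exists, the generator invents something new, and someone still has to decide whether what it invented is any good.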
2. Think critically about what types of AI to use, and whether to use AI at all.
Before we go too far, it is worth acknowledging that ‘AI’ is often an unhelpful term. While it is a well-established field of academic study, at present it is being used so loosely that it is becoming unmoored from its fundamentals.
Practically, AI encompasses a vast array of methods and technologies, and using AI as an umbrella term muddies the waters and provides fertile ground for misunderstanding, trading nuanced, grounded discussion of the strengths and limits of different AI approaches for hype, name-dropping and unnecessary obfuscation.
For instance, classical machine learning techniques are highly efficient on small datasets, statistical methods are the right tool when you are interested in relationships between features, and symbolic methods, which explicitly represent problems and knowledge, offer explainability. Each of these sub-branches of AI represents a robust and well-developed toolkit that solves problems that plague current large language models.
In that sense, AI is less like a hammer to throw at every problem and more like a toolbox with a variety of tools, and applying the right type of AI to the right problem goes a long way in removing its mystique and risks. Pushing for specific language the next time you hear ‘AI’ will bring you clarity.
I will outwardly smile but die a little inside if you ever use a large language model as a calculator.
3. Think critically about designing AI systems to assist you, or you may find them controlling you.
A well-cited paper characterizes two ways different people effectively interact with AI by likening them to ‘cyborgs’ and ‘centaurs’. Centaurs create a clear division of labour and treat AI as tools, while cyborgs integrate AI deeply into their thought and work processes in more flexible and dynamic ways.
Both are valid patterns of human-AI teaming, but what is most dangerous and insidious is the ‘reverse centaur’, coined by Cory Doctorow, where AI systems lead and direct, and AI treats humans as tools. An example is his description of delivery workers at the mercy of AI systems that optimize outcomes for the company by monitoring them to the nth degree, down to video cameras in vehicles tracking the movement of their eyeballs.
A related point on the ‘dark patterns’ of AI that continue to spread at pace is the realization that the goals of an AI system are often the goals of AI companies, not AI consumers. Recommendation engines that power social media feeds to maximize engagement are a prime example, essentially arraying a force of engineers, psychologists, and designers to focus their talent against you to fuel advertising revenue machines. Addiction, misinformation, and other second-order ills remain an inconvenient but largely ignored fact.
This is particularly insidious as companies can also hide behind the narrative that ‘we are only giving customers what they want’. But in this case, companies are preying on our baser ‘system 1’ lizard brains (often effectively hijacking our minds by design), as opposed to serving the better intentions of deliberative ‘system 2’ brains.
Actively design AI systems such that they serve your best self.
4. Think critically about how Generative AI blurs out uniqueness and how to preserve your unique self.
A recent study showed that one of the unintended consequences of large numbers of people using generative AI to produce content is that online content increasingly looks the same. And this persists despite variations in systems, prompts and usage.
The same study also suggests people prefer content without generative AI — while the study found that not using AI leads to fewer posts online, content posted without generative AI has more positive engagement. This is unsurprising, and encapsulated well with the quote:
“Why would I bother to read something someone couldn’t be bothered to write?”
— BBC Feature
This suggests that both for maximising your external impact and for developing your internal identity, there has never been a more important time to find and stay true to your own voice.
5. Think critically about how using Large Language Models affects our brains and mental fitness.
A study by the MIT Media Lab compared brain activity on a task between people using 1) just their brains, 2) search engines, and 3) large language models, and their results present robust evidence that our brains work differently when assisted by technology.
The Brain-only group exhibited the strongest, widest-ranging brain activity; the Search Engine group showed intermediate engagement; and the LLM-assisted group showed the weakest overall brain response.
Furthermore, LLM users had less ownership and had trouble quoting their own work. And over time, the LLM users “consistently underperformed at neural, linguistic, and behavioural levels”.
As we choose to use AI to help us with cognitive tasks, we lose our connection to the task and the benefits of completing the task ourselves, with long-term implications.
Just as moving away from manual work towards sedentary lifestyles introduces risks to our physical health, necessitating recommendations for deliberate physical activity to compensate, LLMs are already quietly endangering our mental fitness.
6. Think critically about how AI is impacting our worldview.
The previous point brings us to how we think about the impacts of AI. Much discussion centres on how AI affects our work and threatens to automate away our jobs, but that is only part of the story.
Firstly, just because a task is ‘exposed to AI’ does not mean it should be automated, and jobs are more than a collection of tasks. There are relationships, accountability and ethical judgement, not to mention human presence.
One irony of agentic AI is how little we talk about the agency we as humans have to design where and how we implement AI, and to point it in the right direction.
A more useful way to think through the effect of AI on any given area is through the ‘4 Ws’ — Workbench, Work, Workers, Worldview. Workbench is the tool or technology that is being used for work. Work is about the tasks and activities being performed and the structures that support them. Workers refer to the people doing the work and other stakeholders, and Worldview is about the unspoken assumptions and the way things work in a domain.
Take education, where there are ongoing discussions about students using ChatGPT and similar AI systems for their homework and exams. There is a lot of hand-wringing about how new generative AI tools like ChatGPT (workbench) are used to do assignments (work). But rather than fixating on detecting generative AI use in isolation, a better approach would be to think about how students (workers) are changing, learning less of the subject matter while picking up AI literacy, and how the education system needs to adapt (worldview) to the new reality.
7. Think critically about the AI stories being told and look for the missing stories.
There is a massive amount of money at stake to the tune of over a trillion dollars for many of the world’s largest AI companies. This creates immense pressure for these companies to accelerate their flavour of AI adoption, and this drives AI ‘hype’ through marketing spend, high-profile media interviews, and PR machines that can spin facts in self-serving ways. Most recently, news broke of AI companies paying influencers $400,000-$600,000 to post about AI.
It is important to realise that many of the stories we are being told about AI overwhelmingly represent the views of people selling AI, rather than people genuinely experiencing it.
This has been called the AI story crisis: the dominant narratives that shape public discourse on AI come from a skewed sample of storytellers, which can distort and mislead public understanding and conceptions of AI.
I would go further and point out that these narratives shape more than ‘the public’; their influence extends into governments and companies, which raises the stakes.
AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can’t do your job.
— Cory Doctorow
In this environment, be discerning and look beyond content to contributors. Think through who is behind each AI story, and what drives them: is what you are reading coming from someone’s authentic opinion, or from someone being incentivised to frame the story a certain way? Question the framing of the story, and think about the stakeholders whose voices are not being heard.
And as far as authentic opinions go – one of the best ways to check the stories… is to experience AI for yourself first hand.
8. Think critically about the supply chain behind the AI industry.
As a data scientist, I see three important inputs to an AI model: its training data, the labour used to annotate and process that data, and the compute used in model training and usage (the latter known as ‘inference’). Unfortunately, a large part of generative AI is built on a supply chain where each of these three components is far from ideal.
Karen Hao’s well-written book Empire of AI does a better job than I can in spelling out the dysfunctions. But in brief:
- Data used for the training of large language models is currently the subject of multiple lawsuits where AI companies are accused of illegally copying millions of articles to train AI models.
- Environmental issues abound with the current generation of AI models. Training is highly energy-intensive, and so is running user queries. Disclosure is often sketchy, but what is known points to a hefty climate footprint, with costs potentially being passed on to consumers.
- Labour in the AI industry may bring to mind well-paid data scientists and software engineers in slick city offices with free lunches, but in reality, large language models are also powered by large offshore workforces whose work involves flagging, annotating, and processing disturbing content, including toxic and harmful material, graphic violence, and worse. Much of this activity occurs in low-cost countries under exploitative conditions, at great cost to workers’ mental health.
There are better ways to create AI systems, and we should resist letting this become the norm.
9. Think critically about adoption time horizons to parse the real impact of AI.
Coming full circle to our doom-scrolling, one lens suggests that the world is changing overnight, with the major providers announcing an average of two model releases a month in 2025.
However, the release of a new model is a far cry from changing the world. I find it useful to distinguish between invention (a new model breakthrough and its release), adoption (that model being implemented in a usable product), and, most importantly, diffusion (the model slowly spreading through organisations and households over time).
Taking the narrative of AI replacing jobs as an example, jobs are far more than the sum of their tasks, with deep context, accountability, and relationships. In addition, while new foundation models are performing well in difficult exams such as in finance and medicine, there are significant lags between the invention of these models and their being broadly diffused into organisations and society.
In general, my experience in large companies suggests that while invention may be measured in days as the news sweeps through the organisation, the adoption of models into AI systems and products tends to take weeks, and diffusion is a much slower process that can stretch into years as habits form, work processes slowly reconfigure, and technology grinds through a host of individual, cultural and organisational barriers.
AI has been compared to tractors in its ability to displace workers in a similar way that tractors eventually displaced the use of horses for agriculture. With the benefit of hindsight, it is instructive that tractors took a full generation to overtake horses. And while there are arguments that in the digital world things move more quickly, it is likely that true diffusion will take years.
10. You can make a difference in the way we experience AI.
And in the meantime, despite popular narratives framing AI as something that inevitably happens to us, the way we experience ‘AI’ is not like a train on rails with humanity tied to the track awaiting the proverbial train wreck.
It is more useful to think of AI like the early days of modern transportation itself. On one hand, we have a sense that it is a fundamental system that will shape our lives far into the future. But on the other hand, it is sobering to note that while the first modern car was invented around 1885, car door keys only came in 1908, the 3-point seat belt was only invented in 1958, and international road signs only became standardised in 1968.
This time gap between the initial adoption of modern cars and having the effective and widespread rules of the road is where we are at today for AI.
We have work to do — cars and their engines (AI applications and their models) need to be tested, car locks (AI security features) need to be installed, drivers need seatbelts and driving licenses (users need AI safety and accreditation), and road signs (AI regulations) need to be harmonized.
The future is one that you can steer today.
All images displayed above are solely for non-commercial illustrative purposes. This article is written in a personal capacity and does not represent the views of the organizations I work for or am affiliated with. No generative AI was used in the drafting of this article. However, Grammarly was used as a spelling and grammar checker.
References:
Randazzo, Steven, Hila Lifshitz, Katherine C. Kellogg, Fabrizio Dell’Acqua, Ethan Mollick, François Candelon, and Karim R. Lakhani. “Cyborgs, Centaurs and Self-Automators: The Three Modes of Human-GenAI Knowledge Work and Their Implications for Skilling and the Future of Expertise.” Harvard Business School Working Paper No. 26-036, December 2025.
Liu, Chaoran, Tong Wang, and S. Alex Yang. “Generative AI and Content Homogenization: The Case of Digital Marketing.” July 26, 2025. Available at SSRN: https://ssrn.com/abstract=5367123 or http://dx.doi.org/10.2139/ssrn.5367123.
Patel, Jaisal, Yunzhe Chen, Kaiwen He, Keyi Wang, David Li, Kairong Xiao, and Xiao-Yang Liu. “Reasoning Models Ace the CFA Exams.” 2025. doi:10.48550/arXiv.2512.08270.
Kasagga, A., A. Sapkota, G. Changaramkumarath, J. M. Abucha, M. M. Wollel, N. Somannagari, M. Y. Husami, K. T. Hailu, and E. Kasagga. “Performance of ChatGPT and Large Language Models on Medical Licensing Exams Worldwide: A Systematic Review and Network Meta-Analysis With Meta-Regression.” Cureus 17, no. 10 (October 2025): e94300. doi:10.7759/cureus.94300.