The stages of AI
My thoughts on the stages of AI adoption, from disconnected to fully plugged into the Matrix.
Tags: ai, opinion
Posted on: 2026-05-03
I'm a technologist. I like to explore new technologies and try things out. But I'm also someone who looks past the hype, and if something doesn't have real benefits for the things that I do, I don't adopt it. AI for me has always been a mixed bag: I cannot deny its uses as a tool. I use AI on an almost daily basis, and have been using it since ChatGPT became public in late 2022. But I also see the dangers of AI, from copyright issues and ethical concerns to the risk to everyone's jobs and the toll it takes on the planet.
Since late last year, however, I've been stepping up how I use AI, and I have to admit I'm seeing a big difference in how it improves my day-to-day tasks. What I realized is that a lot of people make up their minds about AI based on how they use it, and I feel like there's a lot of utility that people just haven't seen yet. So in this post I'll go over what I consider to be the 4 stages of AI adoption. To be clear, the goal is not to sell AI to anybody, but to see whether you're where you thought you were on the ladder.
Stage 0 - Unplugged
This is the baseline. Work is done entirely by humans, supported by traditional software (spreadsheets, search engines, email, etc.) and we make all the decisions. Automation exists at this stage, and I would argue a lot of what is today being done by AI might be better off staying with regular old school n8n workflows, Jenkins pipelines or even Python scripts, but that's a subject for another post.
While there is nothing wrong with stage 0, and everyone should be free to decide whether this is where they want to be for their hobbies and casual interests, I feel like for most knowledge workers, stage 0 is increasingly a competitive disadvantage. The gap between someone doing research manually and someone using AI to do the same research is significant and widening. Coders are now expected to use Claude Code. Project managers are expected to produce reports quickly. Designers cannot compete with instant graphics produced at a quality that many businesses consider good enough.
But this stage does have some advantages. There's no token cost, no privacy implications from sharing your data with LLMs, and no constantly evolving skill set to keep up with.
Stage 1 - The Chatbot
The first point of entry for most people is the chatbot. You open a browser tab, type a question, and get an answer. It's reactive, stateless, and transactional. Each conversation starts fresh and the AI has no memory of who you are, what you worked on yesterday, or what your goals are.
This is how I think most people view AI. This is how most people are introduced to ChatGPT, and until late 2025, that's how most of my daily use of AI was as well. You can get great results this way, like asking random questions about everyday things, or uploading a document to your chatbot and getting some useful insights. But very quickly you'll hit a brick wall. This is the point where a lot of people start criticizing LLMs for not being that good: not remembering what they were told, being too condescending, or not producing good results. It's certainly a cheap option, but the results match the cost.
Stage 2 - The Assistant
This is where I feel like a lot of people in tech and those who follow the field closely have been pivoting into. It's also where I land. The shift is subtle but profound: instead of a stateless tool you query occasionally, AI becomes a persistent collaborator that knows your context, has access to your documents, and is integrated into your actual workflow. This is possible thanks to projects, memory and selective context.
The way I work is by using Claude Projects. The idea is simple: I create a project for each type of topic I want the AI to assist me with, and provide written instructions and files relevant to that context. For example, a project about finances might have my current savings, my financial goals, and my yearly budget. Another project about health and nutrition could have my current body measurements, latest lab results and charts from my Apple Watch showing how much I exercise and my weight over time.
Any time I want to chat with the bot about a specific topic, I open a new chat in the relevant project. That means I don't have to constantly repeat myself, or provide information I've already provided. It starts the conversation with all the data I deemed relevant. I stay in control of what I decide to share, and what I keep private. Similarly, because of this split context, the model doesn't get confused with financial data if I want to ask about a Python coding problem. Add to that the skills, connectors and plugins that modern LLMs provide, and you can add dynamic context from things like your calendar, emails, shared drives, and so on.
I feel like using LLMs as an assistant is a massive step up over the simple chatbot, and the requirements are barely any higher. All I needed to do was spend a few hours preparing these projects, and I find that the AI is far more productive in this mode. At a monthly fee of around $25, this gives me plenty of use for everything I would want my assistant to do in the month.
Stage 3 - The Agent
This year is the year of the agent, at least according to all the news reports that follow this field. Agents are indeed pretty cool, but what exactly makes an AI a proper agent? The idea is simple: You set a goal, the agent figures out how to achieve it, takes a sequence of actions, and reports back. The human's role changes from doing and reviewing to instead setting a goal and waiting for the result. This is how people run fleets of agents doing tasks for them overnight, or lead entire teams or whole companies with the help of AI.
Autonomous agents are being deployed today for tasks like competitive research, customer outreach, data pipeline management, and software testing. But they require careful design, and a lot of companies are stumbling, though I don't really think it's because of a lack of capabilities. I've tested agents and they're truly a sight to behold, even if they certainly aren't perfect. But what junior employee, or even seasoned professional, is?
Here's a simple use case example of what agents can do: a growth team at a startup wants to monitor competitor pricing every week and update their internal pricing strategy document accordingly. At stage 2, someone would routinely ask an AI to help draft the analysis once they had gathered the data. At stage 3, an agent does the whole thing: on a weekly schedule it searches competitor websites, spawns sub-agents to extract the relevant pricing data, compare it to historical trends, and draft a summary, and the main agent reviews the report before posting it to the team's shared workspace. All without a human in the loop, ready for when the team members log in for the day.
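To make the shape of that pipeline concrete, here's a minimal sketch in Python. Every function name here is hypothetical, and the fetch and extraction steps are stubs; a real agent would call an LLM API for those and run under a scheduler, but the orchestration logic (fetch, fan out to sub-agents, compare, summarize) would look roughly like this:

```python
# Hypothetical sketch of the weekly pricing-agent pipeline.
# The fetch/extract steps are stubs standing in for real LLM calls.

def fetch_competitor_pages(competitors):
    # Stub: a real agent would browse each competitor's pricing page.
    return {name: f"<html>pricing page for {name}</html>" for name in competitors}

def extract_pricing(name, page_html):
    # Stub for a sub-agent that parses a page into structured data.
    return {"competitor": name, "plan": "pro", "monthly_usd": 49}

def compare_to_history(current, history):
    # Flag any competitor whose price moved since the last run.
    return [e["competitor"] for e in current
            if history.get(e["competitor"]) not in (None, e["monthly_usd"])]

def draft_summary(current, changes):
    lines = [f"{e['competitor']}: ${e['monthly_usd']}/mo" for e in current]
    if changes:
        lines.append("Changed since last week: " + ", ".join(changes))
    return "\n".join(lines)

def run_weekly_pricing_agent(competitors, history):
    pages = fetch_competitor_pages(competitors)
    # Each extraction could run as an independent sub-agent.
    current = [extract_pricing(n, html) for n, html in pages.items()]
    changes = compare_to_history(current, history)
    # The main agent would review this report before posting it.
    return draft_summary(current, changes)

report = run_weekly_pricing_agent(["AcmeCo", "Globex"], {"AcmeCo": 39})
print(report)
```

The key structural point is the fan-out: the main agent only orchestrates and reviews, while the per-competitor work is delegated, which is also where the per-token API costs discussed below come from.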
One thing worth noting, however, is that this stage may increase efficiency, but it also raises privacy concerns and costs. All of these agents and sub-agents cannot do their work through a chat interface, which means paying per token through API calls. This is how you rack up massive bills. Also, if you want your agents to work while you're away, you often need to grant them more permissions: they can access your documents, send emails for you, connect to the network, and so on.
Like I said at the start, I'm not trying to sell AI to anyone. I don't see these stages as a ladder everyone should aspire to climb to the top of. I don't know if I'll ever embrace stage 3 myself. But I do think a lot of people pass judgement on LLMs without really thinking about the way they use them, and hopefully now you have some more context on what's possible.