How to Make Sense of AI

This is Part 1 of a short series on sensemaking.

It is 2026 and AI hype is everywhere.

If you’re like most people, you’re probably feeling some fear that you’re falling behind. Perhaps your fellow company operators are talking about successful AI use in their companies, and you’re questioning if you need to retool everything. Perhaps your friends are freaking out about losing their jobs. Perhaps you’re spending countless cycles trying to predict what’s coming next.

This is understandable. Widespread panics are common during technological revolutions. It feels scary when things are changing so quickly — and in ways that will impact your livelihood and therefore your life. If there’s anything that we’ve learnt during the Covid years, it’s that humans don’t like uncertainty. Who knows what the landscape of work will look like in a few short years? Nobody does.

And yet there is an effective way to make sense of these accelerating changes. With the right frame, you can maintain your equanimity and focus on the right things — that is, only the things that may affect your outcomes. This doesn’t mean sitting back and passively observing the changes around you. In some ways you’re going to take more action as you investigate the new capabilities of this technology. But you want to be able to investigate without flailing around.

The ideal response looks like this: you are able to make decisions without being emotionally affected, without feeling FOMO, and without the distraction and panic that has claimed so many in the business world. You will seem oddly quiet and determined, unfazed by any change that comes your way. You will ignore unfounded AI doomerism and unfounded hype equally; you are able to test assertions and measure outcomes without emotion. Done right, this frame will help you become effective at ‘fast adaptation under uncertainty’, so that you know how to direct your attention and therefore your actions.

This essay will be short. It will contain a method of sensemaking that you may adapt for your own circumstances, assuming that you are a business operator. (Note: if you are an investor, this essay is not for you. The sensemaking needs of investors are significantly more advanced.) The sensemaking approach outlined here is generally useful; it applies to any new technology. This means that once you master this approach, you will be able to apply it to all other paradigm-changing technologies that will emerge over the course of your life.

In later instalments, we will cover what sensemaking actually is, and how experts sensemake more effectively than novices. This is informed by research funded by the US military decades ago, which was in turn motivated by a need to help warfighters and intelligence analysts make sense of uncertain, fast-changing scenarios. Then, with some theory at hand, we will examine why the method laid out in this essay is a good start, but is actually not enough.

The Method

First, let’s lay down a few ground rules.

The first ground rule is that your attention is a limited resource. You are inundated with news, opinions, Substack takes, and (god forbid) tweets, which are designed to elicit responses from you. Some of these responses will be helpful. Others will not.

At its core, sensemaking is the art of regulating attention. This is a fancier way of saying that you must know what to ignore. And then you must have the discipline of mind to ignore those things, so that you may focus only on things that will help you.

Both steps are difficult in their own ways, but this piece will focus only on the first step. The second step — actually sticking with this approach — is an exercise for the alert reader.

A second ground rule is the concept of ‘Outcome Orientation’. We’ve talked about putting Outcome Orientation to practice on Commoncog before — the idea is simple but remarkably effective:

At all times, whenever you are doing something or reading something, you should ask yourself the question:

“What is the outcome I am trying to achieve here?”

You may then continue with the action or consumption if you wish, but you must answer the question honestly first.

Applying Outcome Orientation to all your information consumption practices will cause something magical to happen: you will no longer feel information overload. At this point enough folks in Commoncog’s members community have applied it to their lives that I can say this with some confidence: simply noticing where your attention is going will change the way you allocate that attention.

That’s it, just two ground rules. Now let’s talk about the method directly.

The basic method, simply stated, is as follows:

  1. You must ignore all opinions, analysis, predictions, fanciful essays ‘from the future’, ‘situational awareness updates’ and scenario forecasts about AI. It doesn’t matter how eloquent they are, how smart these authors seem, what seat they have, or whether their assessment of AI is compelling — you should ignore all of it.
  2. You will pay attention only to detailed field reports of use. This may take on any form: tweets, YouTube videos, screencasts, blog posts. A field report is acceptable only if it is adequately detailed, but you should also take into account who the author is, what their context of use might be, and what they are trying to accomplish. If the piece contains opinions or forecasts alongside the field report, you will pay attention only to the field report, and ignore the more speculative / opinionated bits. You will not take those subjective bits seriously — the right model here is that you should treat it like the mutterings of a friend who is high on LSD.
  3. Whilst paying attention to detailed field reports of use, you are looking to answer the four questions of uncertainty. The four questions are: (a) What new outcomes are suggested by this field report? (b) What are some actions I may take in response to it? (c) What are the relative values of these possible outcomes to me (given who I am, what my company does, what I value, and what my goals are)? (d) What are the causal relationships here?

That’s it.

In truth, the important things to focus on are the four questions in Step 3. When you are operating in your career or in your life, you are not actually interested in “the impact of new technology X on society” or “the impact of AI on job loss.” You are mostly interested in the impact of AI on your career and your life outcomes (and of course the life outcomes of those you love).

You do not need to hold opinions on those broader questions in order to be effective.

Seeking the answers to these four questions will force you to sensemake for your specific situation. The failure mode that you want to prevent is speculating about useless things that feel like productive forecasting (but actually aren’t, because they aren’t directly relevant to your context).

Here are some example answers to the four questions:

  1. What are the possible outcomes here? On September 29 2025, Microsoft Deputy CTO Sam Schillace published I Have Seen The Compounding Teams, which describes observing ‘two or three teams’ that produce working, usable software at a high rate of output, without a single line of human-written code and without human code review. He linked to an open source repository by Microsoft Research called ‘Amplifier’ that aimed to accomplish this outcome. On February 11 2026, OpenAI published Harness engineering: leveraging Codex in an agent-first world. This report is a little more suspect given that the author’s organisation — OpenAI — has a vested interest in keeping the AI hype cycle going. Nevertheless there was enough detail in the field report to be useful for our purposes (again: around possible outcomes). On March 16 2026, Schillace published The Rise of Taste — reporting that he had successfully used multiple dev machines running autonomously for a few days each to make a) ‘a high fidelity clone of Microsoft Word using web technologies’, b) ‘OpenClaw but for Microsoft 365’ and c) a ‘security filter product for agents that is also meant for enterprise.’ Each of these applications was produced without a single line of human-written code and with minimal code review. So, a possible outcome here is that ‘it is possible to produce complex, usable software at high velocity without any manual human intervention, but it requires around six months of building scaffolding for the AI agents’.
  2. What are the further actions I may take? In response to Vaughn Tan’s field report about his ‘boring, tiny tools’ (or BTTs) and Craig Mod’s field report about building accounting software for his idiosyncratic tax situation: I conclude that I may use agentic coding tools to build BTTs in order to help me solve repetitious, brain-dead tasks in my business and in my life. Furthermore, I know that experimenting with these tools in service of building BTTs will take me no more than a few days of work for each tool.
  3. What are the relative values of outcomes to me? As a business owner, the value of being able to produce BTTs in my spare time is that I can reduce friction in certain parts of the business. (To be clear: I’ve already done this, so there’s no need to verify). However, the value of ‘compounding teams’ shipping large chunks of complex software with only a handful of engineers is a much higher value outcome. The question is whether it’s worth it (for my specific business) to invest six months into custom infrastructure, tooling, and process experimentation to achieve this outcome, or whether it’s better to wait for commercial versions of Amplifier to be released. I know, from my network, that various startups in Silicon Valley are all aiming for that outcome. A more important action might be to find folks in these teams and develop relationships with them, so that I may keep tabs on their discoveries.
  4. What are the causal relationships here? All the best practices for software engineering — test-driven development, blue-green deployments, continuous delivery with high cardinality, high dimensionality observability — turn out to be useful (even necessary) for higher velocity, AI-first software production.

Naturally, seeking answers to these four questions should cause you to change your behaviour:

  1. You may set up a Slack channel or a WhatsApp group to sensemake collectively. Brief your friends or colleagues on the logic for field reports, and then get them to share links to reports with the implicit understanding that everyone is trying to answer the four questions of uncertainty for their specific context. Commoncog’s private members forum has a thread set up specifically for this purpose; I make it a point to call out predictions as — basically — science fiction.
  2. You may want to take action to verify that various bits of information in the field reports are true. For instance, building an autonomous coding harness might take too much time, but a cheaper experiment might be to clone a major open source project, from scratch, with minimal human oversight, simply by using the existing test suite.
  3. You may also want to take action to verify that the various outcomes are a good fit for your situation. For instance, finding gaps in your existing life and then vibe-coding boring tiny tools in response to those gaps is a fairly cheap experiment to do, and will give you lots of context-specific information about how this new technology works in your situation.

In your downtime, you may speculate about the potential impact of all of this technological change, but you will not allow it to affect your sensemaking actions.

Why Does This Work?

A couple of years ago I observed that effective businesspeople do not ‘predict the future’ more successfully than less effective businesspeople. It is simply too difficult and too cognitively expensive to accurately (or reliably) predict the future. Instead, the best businesspeople do something else: they do fast adaptation under uncertainty.

This requires a different stance, and a different set of skills. For starters, forecasters tend to want to be right; businesspeople don’t care about being right, they just want to win. This stance means that the business should be set up for experimentation and information sharing. It implies that the business must be agile enough: it should be able to change directions in response to new information. It implies that advantage accrues to those who are able to sensemake more effectively than their competitors.

Another way to put this — a pithier way — is that you don’t have to predict the future if you can see the present moment clearly (and of course, that you can act on it).

Why ignore analysis, opinions, or forecasts, though? The answer is straightforward: when a new technology emerges, impacts are uneven and ultimately unpredictable. When the internet first arrived, say at the peak of the dotcom bubble in 1999, no one could’ve predicted that taxi drivers would be threatened. You are better served keeping a paranoid stance and observing how the technology changes in your specific context than you are by reading the analysis of those who sit in different parts of the economy from you.

(This is, by the way, the meta-lesson of Only The Paranoid Survive — when the Internet arrived, legendary Intel CEO Andy Grove wasn’t sitting around reading pundits. He was periodically redoing Porter’s Five Forces Analysis on the Internet’s impact on Intel — and you can bet he was basing it on actual capability reports, not opinion columns.)

This leads us to our second point: during times of upheaval, the attention economy creates large incentives for people to prey on fears in order to grow their followings. You should expect to see plenty of content that is perfectly tuned to go viral. All such content is for the author’s benefit, not yours. Over the past few weeks, various people in my circles have been overtaken by fear in response to viral essays, only to recover their senses after a few days (and with the subsequent revelation that the author was not believable). You can skip all this if you treat all AI takes equally: like trash.

Third, many of the opinions produced in response to a ‘terrifying’ new technology are produced for self-soothing reasons — they serve no purpose other than to comfort the author (and the people who share them on social channels). Witness the sheer number of “woe, programming has forever changed” articles that have emerged in the first quarter of 2026. Some of these reactions are embedded in otherwise informative field reports. I’ve learnt to tune those out, so that I can form my own opinions on the outcomes that I care about.

Finally, reading opinion pieces is just too time consuming. There are simply too many takes out there; prognosticators tend to like to hear themselves talk. There are always more people who have an opinion about a thing than those who are willing to actually do things. (And, yes, creating a nonsense simulation of AI impact and then writing it up is equivalent to having an opinion about a thing. It is not equivalent to reporting from doing — that is, testing against reality.)

The truth is that you must dedicate some time to experimentation to verify the field reports that you find. This implies that reading other people’s opinions has a real opportunity cost. The information you generate from experimentation is some of the most valuable you will get. If nothing else, it lets you know the capabilities you must build in order to adapt.

In the end, all of this may be reduced back down to the four questions of uncertainty. When reading a piece, you should always ask yourself: “which of the four questions does this piece of content answer for me?” If the answer is “none of them”, then you should ask yourself: “why am I still reading it?”

You are allowed to say “because I want to soothe myself” or “because I want entertainment” and continue with the piece. But you must be honest with yourself.

How to Start Putting This to Practice

Notice what I’m saying, by the way: consuming opinion pieces is NOT necessarily a waste of time in other spheres of life. For instance, I am not believable about geopolitics or war; like you, I read analysis in order to make sense of what’s going on in the Middle East. Of course, whether one should spend much time reading about geopolitics is an exercise for the alert reader. (What outcomes are you trying to accomplish there?)

The method presented in this essay recommends against reading takes because doing so is not as useful when dealing with an uncertain new technology — one that will likely impact your life directly. But because reading analysis is such a normal thing in other spheres of life, breaking that habit is the place to start if you want to put the method to practice.

The first step to using this method is to go cold turkey on reading analysis. This gives you the mental space (and the time) to do the other sensemaking activities described in this essay.

Start now. Spend a week just rolling your eyes at AI prognostication. Treat predictions and opinions with the same level of respect you would the babblings of a small child. (That is: respectfully, but not seriously). Overcorrect here to counteract your natural desire to consume opinions. The way I do this is to imagine myself discarding opinions in conversations about AI; I remind myself to ask detailed questions about use.

After you’ve fixed this tendency to consume opinions, the next step is relatively straightforward: ruthlessly ask the four questions of uncertainty every time you read something about AI.

At the end of the day, it is the four questions that determine what you will do.

Wrapping Up

While the approach presented in this essay is a useful foundation for sensemaking changes in AI, it is not enough. In the next instalment, we will talk about the most useful theory for sensemaking that is currently known, so that we may identify problems with this approach. Then, after we are equipped with more precise language, we will talk about how to adapt this approach to be more effective.

The four questions of uncertainty in this essay are from author, consultant and business academic Vaughn Tan’s Not Knowing series. He developed this framework over the course of 2023 and 2024 to help organisations deal better with uncertainty.

While I am not available for consulting, Vaughn is. He runs workshops and training programs for various institutions. Perhaps you’d like him to improve the sensemaking of your organisation? You may contact him here.