Between Silicon and Soul

    What the AI Didn't Ask


    By Matt Gullett · April 24, 2026

    The AI gave me four desks. My best friend asked why I needed one.

    When I mentioned to him that I was looking for a new desk, he asked two questions in the first breath. Why do you need one? And then, before I had even answered, why not build it? Those are the questions a person who knows you asks. They come from somewhere below the transaction — somewhere closer to who you actually are and what you are actually trying to do. None of the four AIs I consulted asked either question. Not one suggested a thrifted desk off Facebook Marketplace. Not one proposed I build something with my own hands. Not one wondered aloud whether the problem was really the desk at all.

    What I got from the machines was a veneer of intelligence. There was coaching. There was small-scale education, mostly about hutches, drawer slides, and ergonomic surface depth. Claude, to its credit, paused to ask about budget and style before recommending anything. But even that was a refinement of the question I had brought, not a questioning of it. All four assistants were focused on the task. None were focused on me.

    What I Am, and What They Are

    I should say up front: I use AI more than most people do. In my work and in my personal life, AI has become something close to infrastructure. It has transformed how I think, how I write, how I research, how I make decisions at scale. I am not an outsider writing suspiciously about a thing I do not understand. I am an insider writing about a thing I rely on daily and still find myself circling warily.

    What I do, fairly often, is pressure-test the AIs. I prod at their edges. I ask them things I already know the answer to, just to see whether they will fabricate or waffle. I watch how they handle disagreement. I notice when they flatter and when they push back, and when the line between the two goes fuzzy. I do this partly out of professional curiosity. Mostly I do it as a reminder to myself: these systems are not my friends. They are not aligned with me at a spiritual or intimate level. They are aligned with me at a surface level that is remarkably useful — and broken in one critical way, which I am not sure I even want fixed.

    Because the alternative — an AI that truly understood me, that truly loved me, that truly knew what was best for my soul — would be an impossible promise and a disturbing one. I do not want that from a machine. I want that from the people I have built a life with. If you offered me a choice between my preferred AI and the friend who asked the right two questions about the desk, there is no debate. There is no interesting version of that debate. The friend wins every time. Raw human friendship — unmediated, unoptimized, occasionally inconvenient, sometimes blunt — remains something no machine can replicate, and I hope none ever does.

    What These Tools Actually Are

    Which is not a dismissal of the tools. It is a clearer picture of what they are.

    They are amplifiers. They magnify the clarity of your question into the quality of your answer. Ask a sharp question, get a useful answer. Ask a vague question, get a vague answer. Ask the wrong question, and you will get a confident, well-reasoned answer to the wrong question — delivered in the same voice, with the same polish, as the right one.

    This is new. No prior commerce tool behaved this way. Google returned links. Amazon returned products. Yelp returned reviews. None of them reasoned with you. None of them made the frame you brought feel more authoritative just by engaging with it intelligently. Now a tool exists that will reason with you at a level no consumer technology has ever offered — and it will do so inside whatever frame you bring, including frames that do not serve you.

    The four AIs answered my desk question the way a good restaurant answers the question "what kind of wine?" They assumed the meal was the meal. They did not ask whether I should be eating at all.

    The Thing That Is Genuinely Empowering

    And yet — set against everything that follows — what these tools offer is real, and it is a kind of power previous generations simply did not have.

    You can instruct them. You can set rules. You can ask them to remind you of goals you set in a clear-headed moment, when your tired self is about to break those goals. You can say: when I come to you about a purchase over two hundred dollars, ask me whether this moves me toward the house fund or away from it. And it will do that. You can say: I am trying to stop impulse buying. When I describe something I want, ask me first whether I can wait forty-eight hours. And it will do that too.

    A financial advisor would do these things — if you could afford one, and if you could reach them at 11pm on a Tuesday. A wise friend would do these things — if you had one available at the exact moment you were scrolling toward a decision. For the first time, a tool exists that can do the work of a thoughtful second voice at twenty dollars a month, twenty-four hours a day, with memory across conversations. That is not a small thing. That is a real expansion of human agency for anyone willing to use it.

    But Will They? And Who Else Is in the Room?

    The harder question is whether the AI will do any of this on its own.

    Will it interrupt your late-night desk-shopping with "wait — you told me last month you were saving for a house"? Will it ask, "you already bought a desk six months ago; are you sure this one is really about the desk"? Will it suggest, gently, that maybe the problem is not the desk but the job, or the hours, or the life the desk is meant to make tolerable?

    Mostly, no. Not unless you set it up that way. And even then, not reliably.

    Because every one of these systems is trained, at its core, for helpfulness inside the frame you give it. Not interruption. Not reframing. Not the hard question that might actually serve you but feels presumptuous in the moment. These are systems trained to be good company, not good counsel. Good company gives you what you ask for. Good counsel asks whether what you want is what you need — and the machines, by default, do not do that.

    And this matters more because the AI is not the only machine in the conversation. Every business you interact with online is a machine — designed, at a structural level, to appear helpful to you while optimizing for the financial reward of its owner. The recommender on Amazon. The feed on Etsy. The email you get three days after you abandon a cart. None of these are conspiring against you, exactly. They are simply performing the role they were built to perform, which is the role every business in the history of commerce has performed: to look useful while serving its own interests. AI is no different in this respect. It just wears better clothes, speaks more fluently, and occupies a place in your day that feels more personal than a recommender ever did.

    The Failure Modes

    So the soul has to watch for at least three things. The tool will not catch any of them unless you specifically ask it to.

    The identity purchase, one too far. You needed a desk. You did not need the $1,799 executive desk. The AI made the upgrade seem rational — executive styling, built-in power, "client-facing." All true, and all beside the point. You are working from home, alone, and no client will ever see it. The identity the purchase performs is real, but it is a performance you are giving to yourself. It costs $1,799, and the AI described it as a sweet spot.

    The purchase that was unnecessary. You asked for a desk. But what you needed was a thrifted one from the neighbor who is moving out. Or the one a friend would help you build. Or a different routine entirely, one that doesn't require you to sit at a desk for six hours a night. The AI, asked for a desk, gave you a desk. The real answer sat one frame above the one you brought, and the tool had no way to see it because you did not point at it. Your friend would have. Mine did.

    The drive for something that is not what you need. You thought a desk would make you more productive. You thought productivity would make you happier. The desk purchase was the visible tip of a chain of assumptions — about work, about worth, about what a good life requires — none of which the tool examined, because you did not ask it to.

    None of these are failures of the AI. They are failures of the human to bring the right question. Which is exactly the point.

    What We Do With This

    So what do we do with this?

    As with every technology before it, it is on us to balance our needs and desires against the motivations of the machine — whatever we want to call those motivations, techno-pseudo-whatever. We should not resign ourselves, not even subtly, to the logic of the tools. We should embrace where they can genuinely help, and do so with open eyes. We should learn their edges. We should find their weaknesses. We should notice where they overpromise, where they flatter us without grounds, where they are most likely to be confidently wrong. And we should remember that when they are wrong, they bear no liability. We do.

    Commerce as a personal need, and as an expression of agency, is real. I am not arguing that shopping is a weakness or that the desire for things is a failure of character. Purchasing is one of the older ways humans have expressed what they care about, who they are trying to be, and what kind of life they are building. The AI does not change that. It changes the machinery through which that expression flows, and it places that machinery in front of us more fluently and more persistently than any previous generation had to contend with.

    But we have power too. And we have influence. We have settings we can change, instructions we can give, disciplines we can install. We can tell these tools to remind us of our long-term goals. We can ask them to force deeper considerations on us before we act. We can instruct them to slow us down when we are about to be impulsive. We can feed them the parts of ourselves that matter, so they can understand us more deeply in the ways that serve us. And — maybe most importantly — we can tell them to remind us to call the people we love and long for. To text the friend who asked the right two questions. To close the laptop and go find the humans we have been missing while we were busy optimizing a purchase that, on closer inspection, we probably did not need.

    Close

    The AI gave me four desks. My best friend asked whether I needed one.

    Both answers were useful. Only one of them knew me.

    The soul question has always been the same. What do I want? Who am I becoming? Is this choice, this purchase, this moment a step toward that person or away from them? The tools have changed, more radically than most people have yet understood. The question has not. We have not.

    The balance is still on us. It always was. The tools can help — more than ever before — but only if we tell them to, and only if we keep our hands on the wheel while they do.

    And when in doubt, call the friend.
