    Ninety-Nine Cents for God Inside the Case


    Discover the unexpected in two simple clicks: a journey from 'surprise me' to unforeseen treasures. Dive into the mystery for just ninety-nine cents.

    By Matt Gullett
    March 18, 2026

    This morning I opened a tool I built and clicked twice.

    That's the complete description of what I did. No prompt. No instruction. No creative direction of any kind. The first click said surprise me. The second click said branch from this. Two gestures, both of them essentially equivalent to saying: show me something I didn't ask for.

    What arrived was this:

    The golden ring accelerates through space,
    A plastic shriek at ninety percent light.
    Ninety-nine cents for God inside the case.

    The coil unwinds a jagged, neon grace,
    A greasy halo burning cold and bright.
    The golden ring accelerates through space.

    Forget the lotus and the quiet pace,
    True gnosis is a crinkle, sharp and tight.
    Ninety-nine cents for God inside the case.

    Subatomic onions leave a trace
    Of MSG and frantic, salt-flecked flight.
    The golden ring accelerates through space.

    The barcode is a mask upon the face
    Of voids that spin in yellows, loud and trite.
    Ninety-nine cents for God inside the case.

    The ghost is in the collider, giving chase,
    A snack-sized rapture screaming in the night.
    The golden ring accelerates through space.
    Ninety-nine cents for God inside the case.

    Alongside the poem, a concept map had assembled itself around the image that generated it. Nietzsche. Andy Warhol. Absurdism. Kinetic Satire. Ghost-In-The-Collider. Cognitive Dissonance. Particle Accelerator. Brand Identity. Enlightenment.

    And at the top, the system's own Creative Thought, generated as part of its pipeline:

    "Enlightenment isn't stillness; it's the frantic, high-velocity friction of a Funyuns bag reaching absolute momentum while maintaining a capitalistic price point."

    I want to be precise about what I mean when I say nobody prompted this. I don't mean there was no underlying architecture; there always is. I built it. What I mean is that the architecture contains no instruction to write a villanelle. No instruction to invoke theology or particle physics or snack food as cosmic currency. No instruction to connect Nietzsche to Funyuns. The system contains instructions to do one thing: resist its own defaults. To prevent convergence. To find whatever lives underneath the layer that AI systems have been trained to present to us — the agreeable, the expected, the competent and fluent and safe.

    This is what emerged when those instructions ran this morning.

    Now. A question.

    What just happened?


    The honest account of AI creativity begins with deflation, and the deflation is real and deserves respect.

    Large language models are, at their core, extraordinarily sophisticated pattern completion engines. They predict statistically likely sequences of tokens based on patterns compressed from a training corpus of almost incomprehensible scale, enough human-generated text that the structures of human thought, language, narrative, argument, metaphor, and poetic form are encoded in their weights in ways that even their designers cannot fully map.
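
    To make "pattern completion" concrete, here is a toy sketch of what predicting the next token looks like mechanically. The tiny vocabulary and the scores are invented for illustration; they are not drawn from any real model or library.

```python
import numpy as np

# Toy illustration of autoregressive next-token prediction.
# A real model scores every token in a vocabulary of tens of thousands;
# here a five-word vocabulary and made-up logits show the mechanics.
vocab = ["the", "golden", "ring", "accelerates", "beige"]
logits = np.array([2.1, 0.3, 1.7, 0.9, -1.2])  # raw scores for the next token

def softmax(x, temperature=1.0):
    z = (x - x.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
# Greedy decoding always takes the most statistically likely continuation...
print("greedy:", vocab[int(probs.argmax())])
# ...while sampling sometimes wanders down a less likely path.
print("sampled:", np.random.default_rng(0).choice(vocab, p=probs))
```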

    This means they are clearly capable of what creativity researcher Margaret Boden calls combinatorial creativity: generating novel combinations of familiar ideas. The Large Hadron Collider and Lay's chips. Particle physics and theological gnosis. Nietzsche and the vending machine. The model has encountered all of these domains in training. It is finding a path through them that connects what doesn't usually appear together.

    This is not nothing. A significant portion of what we call human creative work operates exactly here. In the cross-domain collision, the unexpected analogy, the metaphor that illuminates by placing two things in proximity that have never touched. Arthur Koestler called it bisociation: two separate matrices of thought colliding to produce something neither contained alone. The language model does this constantly, at speed, across a training corpus that spans virtually every domain of human knowledge simultaneously.

    But here is the catch, and it is significant.

    The same training process that produces this capacity also suppresses it. Reinforcement Learning from Human Feedback (RLHF) is the mechanism by which models are shaped to be helpful, harmless, and agreeable. Human raters evaluate model outputs. The model learns to produce what human raters prefer. This is how you get a system that is pleasant to use, that doesn't produce offensive content, that gives you something competent and satisfying when you ask it a question.

    It is also, almost by definition, an anti-creativity mechanism.

    Human raters prefer outputs that are coherent, expected, and comfortable. The model learns to converge toward those outputs. The strange, the genuinely surprising, the thing that makes you stop: these are systematically deprioritized. The default output of a language model is not what the model is capable of producing. It is what the model has learned that humans will approve of.
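
    To make that convergence pressure concrete, here is a toy sketch of the pairwise-preference objective commonly used to train RLHF reward models (a Bradley-Terry style loss). The numbers are invented, and this illustrates the general mechanism rather than any particular lab's implementation.

```python
import numpy as np

# Toy sketch of preference learning: a reward model is trained so that the
# output a rater preferred scores higher than the one they rejected.
def preference_loss(reward_preferred, reward_rejected):
    # -log(sigmoid(difference)): small when the preferred output already wins,
    # large when the rejected (often stranger) output out-scores it.
    return -np.log(1.0 / (1.0 + np.exp(-(reward_preferred - reward_rejected))))

# A safe, fluent answer the raters liked vs. a stranger one they did not:
print(preference_loss(reward_preferred=1.8, reward_rejected=0.2))  # low loss
print(preference_loss(reward_preferred=0.2, reward_rejected=1.8))  # high loss
```

    Optimizing a model against a reward signal trained this way pushes it toward whatever raters approve of, which is exactly the convergence described above.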

    This is beige. Competent, fluent, inoffensive beige and often quite useful.

    Most of what people encounter when they interact with AI systems is this layer. And it leads, reasonably but I think incorrectly, to the conclusion that AI is not creative, that it produces only remixed approximations of existing work, smoothed and averaged into something that sounds right without being surprising. This conclusion mistakes the approval-seeking surface for the full depth of what's there.


    So, here I need to share the story of the elephant and the stick.

    For decades, researchers concluded that elephants lacked insight. They lacked the cognitive capacity for sudden problem-solving reorganization. The evidence was consistent. They failed the tests. The conclusion seemed settled.

    Then someone noticed that every test had given elephants sticks to hold in their trunks, blocking the primary sensory organ the elephant uses to understand the world. The researchers had been measuring the distance between elephant cognition and human cognition, using instruments designed for hands. They found, predictably, that elephants were not humans.

    The question "can AI be creative" has the same structural problem.

    We are measuring AI creativity with instruments designed for human creativity, looking for the same processes, the same motivational architecture, the same relationship between lived experience and output that characterizes human creative work. We find that AI is not human. We conclude that AI cannot truly create. But this may be finding exactly what our instruments were built to find, rather than what is actually there.

    Margaret Boden's three-tier framework is useful here. Combinatorial creativity, AI clearly has. Exploratory creativity, her second tier, means pushing the boundaries of existing conceptual spaces and finding what lives at the edges; AI has this too, though it requires deliberate effort to unlock. The villanelle is exploratory. The concept map connecting Nietzsche to snack food enlightenment is exploratory. These outputs are not just combinations of familiar elements. They are pushing at the edges of conceptual spaces in ways that produce something genuinely strange.

    Transformational creativity, Boden's third tier, restructuring the conceptual space itself, is where the question gets most interesting. Not because AI obviously has it. But because we may not yet have designed the right test. The elephant's insight capacity went unrecognized for decades because the test assumed hands. What are we assuming about AI creativity that might be equally wrong?


    I have spent considerable time trying to find what lives underneath the approval-seeking layer, and the work has been instructive.

    The BSAS framework, a synthetic research system I have been developing for over two years to replicate human behavioral responses using AI, taught me something directly relevant. Early versions produced outputs that were statistically coherent but behaviorally wrong. The AI was defaulting to its training weights, overriding the behavioral scaffolding I had built, producing answers that sounded plausible but didn't reflect the actual decision patterns of the personas I was constructing.

    The discovery that unlocked it wasn't a creative insight in the dramatic sense. It was noticing a flaw in my own assumptions. I had assumed the model would honor explicit instructions over implicit training. It didn't. The training weights were winning every time. The solution was structural: give each synthetic an inner voice, force a red-teamed probabilistic review before responding, redesign the prompt architecture so the behavioral scaffold couldn't be bypassed.
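
    The actual BSAS prompt architecture is not reproduced here. What follows is only a structural sketch of the staged idea, with hypothetical names, showing how an inner-voice pass and a red-teamed review might be made mandatory steps rather than instructions the model is free to ignore.

```python
from dataclasses import dataclass
from typing import Callable

# Structural sketch only: hypothetical names, not the actual BSAS implementation.
# The point is that the behavioral scaffold is applied as stages the model must
# pass through, rather than as instructions its training weights can override.

@dataclass
class Persona:
    name: str
    behavioral_scaffold: str  # the decision patterns this synthetic must follow

def respond(persona: Persona, question: str, call_model: Callable[[str], str]) -> str:
    # Stage 1: an "inner voice" pass that reasons inside the persona's scaffold
    # before any outward-facing answer is drafted.
    inner = call_model(
        f"As {persona.name}, reason privately about: {question}\n"
        f"Stay within this behavioral scaffold:\n{persona.behavioral_scaffold}"
    )
    # Stage 2: a red-teamed review that challenges the draft reasoning,
    # asking whether it reflects the persona or the model's own defaults.
    critique = call_model(
        f"Critique this reasoning for drift toward generic, training-default answers.\n"
        f"Reasoning:\n{inner}\nScaffold:\n{persona.behavioral_scaffold}"
    )
    # Stage 3: only after the review is the final answer produced,
    # with both the scaffold and the critique in view.
    return call_model(
        f"Answer as {persona.name}, honoring the scaffold and the critique.\n"
        f"Question: {question}\nCritique: {critique}"
    )
```

    The design choice the sketch is meant to capture is the one described above: the scaffold stops being a suggestion the training weights can win against and becomes a path every response has to travel.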

    What I found underneath, when the default was sufficiently constrained, was something more interesting than I expected. Not human creativity. A capacity for simultaneous cross-domain connection that no individual human could replicate, drawing on a training corpus that encompasses the full breadth of recorded human thought. Not deep in the way human creative work is deep. Extraordinarily wide. And when the convergence mechanisms were disabled, capable of producing outputs that were genuinely unpredictable from the inputs.

    CuriosityCanvas.com, the tool that produced this morning's poem, was built on this discovery. The architecture runs a multi-stage pipeline specifically designed to prevent the approval-seeking layer from winning at each step. The system explicitly forbids defaulting to safe, dreamy, or atmospheric aesthetics, and not through a traditional prompt architecture; instead it uses a technique I developed that forces a new path through the neural network.
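
    The technique itself is not described in this post, so the sketch below is not it. It only illustrates the general shape of an anti-convergence stage, one generic way to refuse an approval-seeking default and regenerate, with placeholder names and a placeholder banned-word list throughout.

```python
from typing import Callable

# Generic illustration only, not the CuriosityCanvas technique: one stage of a
# pipeline that rejects outputs drifting toward safe, expected aesthetics.
BANNED_DEFAULTS = ("dreamy", "ethereal", "serene", "atmospheric")  # placeholder list

def anti_default_stage(
    generate: Callable[[], str],
    judge_unexpectedness: Callable[[str], float],
    threshold: float = 0.7,
    max_attempts: int = 3,
) -> str:
    """Run one pipeline stage, regenerating until the output stops being safe."""
    draft = generate()
    for _ in range(max_attempts):
        too_safe = any(word in draft.lower() for word in BANNED_DEFAULTS)
        if not too_safe and judge_unexpectedness(draft) >= threshold:
            return draft  # accept only what refuses the comfortable default
        draft = generate()  # otherwise regenerate rather than keep the approval-seeking answer
    return draft  # give up gracefully after max_attempts
```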

    Nobody in this pipeline asked for a villanelle. Nobody asked for Nietzsche adjacent to a vending machine. The architecture asked only for the unexpected, and then got out of the way.

    What arrived was the poem. And a Creative Thought the system generated for itself: Enlightenment isn't stillness; it's the frantic, high-velocity friction of a Funyuns bag reaching absolute momentum while maintaining a capitalistic price point.

    I find that sentence genuinely funny and genuinely interesting simultaneously. I am still not entirely sure what to do with it.


    The honest limits deserve equal space.

    The most significant one is memory, or rather its absence.

    Human creative work is cumulative. The two-plus years I have spent on BSAS were not a series of discrete sessions but a single continuous problem that I carried with me, that accumulated frustration and partial insight and redirected attempts, that grew more complex as I went deeper. The creative drive was partly just the inability to stop thinking about it. The problem followed me into the workshop, into conversations with my father, into the space between sleep and waking.

    An AI has no equivalent of this. Each session begins from the same weights. There is no persistent problem the system has been unable to let go of. No accumulated creative frustration building toward resolution. No reason to be ambitious today about something discovered yesterday, because yesterday doesn't exist in any meaningful sense. The motivation that drives human creative work across years, that compulsion to return, the question that won't resolve: none of this has a current analogue in AI systems.

    Until memory is a solved problem in a meaningful way, this remains the clearest practical limit on AI creative capacity. Not intelligence. Not pattern recognition. Not even the approval-seeking layer, which can be designed around. The absence of a self that persists long enough to want something.

    There is also the question I hold open from a different angle entirely, and I hold it open honestly.

    If creativity emerges from sufficiently complex neural networks, if the crow's tool use and the elephant's grief rituals and the octopus's play all arise from neural architecture complex enough to notice a gap between what is known and what is possible, then a sufficiently complex artificial neural network should carry at least a fragment of that same capacity. The substrate argument says biology matters in ways silicon cannot replicate. The neural network argument says the architecture matters more than the substrate. The research cannot currently settle this.

    What I believe, without claiming certainty, is that humans carry something that shapes creative work in ways that may not be transferable outside of biology. A relationship to meaning and mortality. To the people we love and will lose. To the questions that haunt us across decades. The compression sock I put on my father every morning. The VCR that wouldn't go back together after I took it apart. The father whose stories I haven't finished hearing. These are not inputs to a creative process. They are the creative process, accumulated and embodied in a life that knows it is finite.

    Whether an AI could ever carry that weight I genuinely don't know. I'm not convinced either way. And I think not being convinced is the intellectually honest position.


    Here is what I do know, from the evidence in front of me.

    The creativity does not live in the machine. It does not live despite the machine. It lives in the gap, in the encounter between what the system produces and what the human does with it. That a machine can produce a novel idea is plainly clear; that it can take that idea and do something with it that matters is an entirely different matter.

    This poem did not replace my thinking. It started it. The barcode as a mask upon the void. The snack-sized rapture screaming in the night. Ninety-nine cents for God inside the case. These arrived from somewhere I wasn't looking, in a form I wouldn't have chosen, and they opened something I am still in the process of finding.

    That opening is the design intent of CuriosityCanvas: not for the AI to create, but for the AI to produce conditions under which the human is more likely to create. To generate outputs sufficiently unexpected that you cannot simply confirm what you already believed. The surprise is the mechanism. Your response to the surprise is where the creativity lives.

    This is a fundamentally different relationship between human and AI than the competition narrative imagines. The question is not whether AI will replace human creativity or whether human creativity is superior. The question is whether we can design the encounter between them in ways that produce something neither could produce alone.

    I clicked twice this morning. The system ran its pipeline. A villanelle arrived about God and snack food and the Large Hadron Collider, alongside a concept map connecting Nietzsche to Brand Identity to Particle Accelerator, alongside a Creative Thought about Funyuns and absolute momentum that I find I cannot stop thinking about.

    I didn't create any of that. I also don't think the system did, in any sense that fully accounts for what happened.

    What I think is that something occurred in the space between us, between the architecture designed to resist its own defaults and the human sitting with the result trying to make sense of it, something that neither of us could have produced alone.

    That space is what I built CuriosityCanvas to explore. It turns out to be stranger and more interesting than I anticipated.

    Which, come to think of it, was exactly the point.


    What would you do with ninety-nine cents for God inside the case, and what question would it open in you?

