
The Shrinking of Curiosity
How the tools we trust are training us to want less.
There is a particular flattery built into every digital tool we use, and it is slowly making us less interesting.
It works like this: you search for something, click a result, read it, move on. The system notices. The next time you search, it nudges you toward things adjacent to what you clicked before. Over time, your results tighten. The system is learning you, or more precisely, it is learning the smallest, most measurable version of you: the version that can be captured in a click stream.
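That feedback loop is easy to see in miniature. Here is a toy sketch, not the code of any real search engine or recommender, in which every topic, weight, and number is invented purely for illustration: a ranker that re-orders topics by past clicks, shown to a user who always clicks the top result.

```python
# Toy illustration (not any real system): a ranker that learns only
# from clicks, showing how the slate of results tightens over time.
from collections import Counter

def recommend(weights, k=3):
    """Return the k topics with the highest click weights."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

topics = ["history", "physics", "poetry", "cooking", "finance", "biology"]
weights = {t: 1.0 for t in topics}  # start with no preference at all

# Simulate a user who always clicks the first recommendation;
# each click bumps that topic's weight, so it keeps winning.
seen = Counter()
for _ in range(10):
    shown = recommend(weights)
    weights[shown[0]] += 1.0  # the system "learns" from the click
    seen.update(shown)

# After ten rounds, only the original top slate has ever been shown.
print(f"topics ever shown: {sorted(set(seen))}")
# → topics ever shown: ['history', 'physics', 'poetry']
```

Half the topics are never surfaced at all, not because the user rejected them, but because the loop never gave them a chance. Real systems are vastly more sophisticated, but the reinforcement dynamic is the same.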
This is called personalization. We accept it as a convenience. We should recognize it as intellectual atrophy dressed in better UX.
— — —
I've spent more than two decades working in the market research industry as a technologist, which means I've spent more than two decades watching how human beings form beliefs, make decisions, and explain themselves when they think someone is listening carefully. One thing I can tell you with confidence: curiosity is not a fixed trait. It is a muscle. And like any muscle, it atrophies when you stop using it in ways that challenge you.
The digital environment we've built is extraordinarily good at eliminating challenge.
Search engines are not designed to deliver what you are looking for; they are designed to deliver what the engine believes will serve both you and itself best, optimizing over time for your patterns. Chatbots use conversation history to slowly tailor themselves to us. Commerce platforms run recommendation algorithms over our favorites, our purchases, and our searches to optimize what we see. All of this is mostly helpful, except that it narrows our perception of the world.
We have built an entire information infrastructure whose primary design goal is to minimize your surprise.
And we are surprised by less and less because we are exposed to fewer and fewer surprises.
— — —
Here is what bothers me about this as someone who has spent decades trying to stay genuinely curious about the world: the systems aren't malicious. That's not the problem. The problem is subtler. The problem is that they are optimizing for engagement. Engagement is not inherently bad, but I believe it comes with a hidden and critical cost – a loss of surprise. A loss of the cognitive skill of making connections in surprising places. Genuine curiosity, the kind that leads somewhere genuinely new, is often uncomfortable. It requires following a thread into unfamiliar territory. It requires sitting with confusion long enough to let it become understanding. It requires, occasionally, being wrong about something you were confident about.
Discomfort does not maximize engagement. So the systems sand it away.
What's left is curiosity's hollow cousin: novelty consumption. We scroll, we click, we feel the brief dopamine hit of new content, we scroll again. We confuse motion for exploration. We confuse the sensation of information for the experience of learning. The only real learning happening is the algorithm's, as it tunes itself to our path of least resistance.
These are not the same thing. Not remotely.
— — —
Most AI tools, as they are currently designed and deployed, deepen the personalization trap. They learn your patterns. They optimize for your satisfaction. They become very good at telling you what you already think, in cleaner prose than you would have produced yourself. That is genuinely useful for a wide range of tasks. But it is actively harmful for creative exploration, for intellectual growth, for the kind of thinking that produces ideas that didn't exist before you thought them.
But AI systems are also capable of something different, almost the complete opposite. They contain compressed representations of an extraordinary range of human knowledge, across domains that rarely speak to each other in ordinary life. Used in a particular way, they can surface connections that no single human mind would naturally form. They can introduce genuine surprise into a thought process — not the algorithmic fake-surprise of a recommendation engine, but the real cognitive jolt of an unexpected association between two things you had no reason to think were related.
That is a radically different use of technology. It's the use I've been most interested in lately.
I've been building something called Curiosity Canvas, a tool designed to do the opposite of what most AI tools do. Instead of learning your patterns and reinforcing them, it generates unexpected connections, cross-domain conceptual leaps, and deliberate divergences from whatever you thought you were exploring. It's still early. But the premise is simple: if the problem is tools that narrow the world to what you already expect, the solution is a tool explicitly engineered to widen it.
More on that in the posts ahead. But first, we need to talk about the two kinds of thinking that make creativity possible, and why almost everything in your environment is currently training you out of one of them.
You can check out Curiosity Canvas here: https://curiositycanvas.com