    Why Great Researchers Struggle with New Tools (And Why That's About to Change Everything)

    Discover why top researchers struggle with new tech tools and how this shift could revolutionize their craft.

    By Matt Gullett
    September 2, 2025

    From Matt Gullett at Between Silicon and Soul

    I work with researchers who can turn a knot of messy data into something clients can actually use. That craft still amazes me.

    But I've also been in the room when these same great researchers hit today's rails—social platforms, creator culture, and now AI. Someone throws one prompt at ChatGPT ("analyze everything"), gets a thin answer, and walks away convinced AI isn't worth their time.

    Why does this happen to smart, experienced professionals?

    It's not a research problem; it's a human problem. For decades, the Gen X and Boomer approach worked perfectly: find tools that deliver quality, master them deeply, stick with what works. This mindset built careers and entire industries. It's rooted in wisdom.

    But something fundamental has shifted. Tool half-lives have collapsed, stakeholders can switch suppliers with a few clicks, and flashy new methods (some good, some snake oil) compete for attention daily. Deep tool mastery over long periods suddenly creates vulnerability: not because the underlying research knowledge is wrong, but because the delivery mechanisms are shifting under our feet while we're still perfecting last decade's workflows.

    The real challenge: staying grounded in research truth and wisdom while becoming more flexible with tooling. We need to think less like tool specialists and more like digital natives, with governance rather than abandon, but with urgency and adaptability.

    Why "Half-Life" Matters in Research

    The Gen X/Boomer approach to professional development made perfect sense for decades: master your tools deeply, build expertise over years, stick with what delivers quality results. This created entire careers and built our industry's reputation for rigor.

    But the game changed. In the AI and digital age, competitive pressures have accelerated in three critical ways:

    1. Tool half-life shortened dramatically. What used to evolve every 5-7 years now changes every 2-3 years or faster
    2. Switching costs dropped. Stakeholders can research new suppliers, methods, and platforms in minutes, not months
    3. Shiny object syndrome increased. Flashy new approaches (some legitimate, some not) compete for attention daily

    The result: The old model of deep, long-term tool mastery now creates vulnerability. Not because the underlying research knowledge is wrong—it's more valuable than ever—but because the delivery mechanisms keep shifting while we're still perfecting last decade's workflows.

    Think of it like radioactive decay:

    Your core research knowledge has a long half-life. Sampling theory, understanding bias, question design principles—these fundamentals don't expire. The human elements of working tactfully with diverse groups? Also timeless.

    But the specific skills that express that knowledge have much shorter half-lives. The exact UI clicks, file formats, platform rules, prompt structures—these decay fast in today's environment.

    When we don't consciously tend those skills, two things happen:

    1. We still "know it" but can no longer execute it at competitive speed
    2. Old muscle memory steers us down slower (and sometimes biased) paths because the rails changed under our feet

    Meanwhile, competitors who adapt faster start winning business, and stakeholders begin questioning whether traditional research approaches can keep pace.

    The solution isn't to abandon rigor—it's to separate knowledge from delivery methods. Guard the knowledge fiercely, but hold the specific tools lightly.

    The Excel and PowerPoint Reckoning

    Here's where this gets real for most researchers: Excel and PowerPoint—the absolute bedrock of our industry—are facing their biggest disruption in decades.

    Think about your current workflow. You probably export data to Excel for analysis, build charts and tables, then copy-paste everything into PowerPoint for client delivery. It works. You've refined this process for years, maybe decades.

    But digital deliverables are already replacing static slides. AI-powered datasets can surface insights directly from raw data. Chat-based discovery tools let stakeholders ask questions and get answers without waiting for the next deck revision.

    This creates a bifurcated world: Some clients will always want the traditional Excel analysis and PowerPoint presentation—the comfort of familiar formats, the ability to review every number, the ritual of the deck review meeting. But others are already asking for interactive dashboards, real-time data access, and conversational interfaces with their research.

    The challenge: You need to serve both worlds simultaneously. The forward-looking clients expect digital fluency and rapid iteration. The traditional clients still value the thoroughness and polish of established approaches.

    What this means practically:

    • You'll need to maintain Excel mastery while learning Python/R or no-code analysis tools
    • PowerPoint skills remain essential, but you'll also need dashboard and visualization platforms
    • Traditional reporting stays relevant, but you'll add API integrations and automated updates
    • Static deliverables continue, but interactive and self-service options become table stakes

    This isn't about choosing sides—it's about expanding your toolkit to meet clients where they are, while staying ready for where they're going.
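    Expanding the toolkit can start small. As a minimal sketch (toy data, standard-library Python only, not a prescription for any specific stack), here is the kind of segment-by-preference crosstab you would normally build as an Excel pivot table:

    ```python
    from collections import Counter, defaultdict

    # Toy survey records: (segment, brand_preference).
    # In Excel this would be a pivot table; here it is a few lines of Python.
    responses = [
        ("Gen Z", "Brand A"), ("Gen Z", "Brand B"), ("Gen Z", "Brand A"),
        ("Boomer", "Brand B"), ("Boomer", "Brand B"), ("Boomer", "Brand A"),
    ]

    def crosstab(rows):
        """Return per-segment counts and row percentages, pivot-table style."""
        counts = defaultdict(Counter)
        for segment, brand in rows:
            counts[segment][brand] += 1
        pct = {
            seg: {brand: n / sum(c.values()) for brand, n in c.items()}
            for seg, c in counts.items()
        }
        return counts, pct

    counts, pct = crosstab(responses)
    print(pct["Gen Z"]["Brand A"])  # 2 of 3 Gen Z respondents prefer Brand A
    ```

    The point isn't that this replaces Excel; it's that once the logic lives in code, re-running it on next quarter's export is one command instead of an afternoon of clicking.
    
    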

    What Needs to Change (And What Doesn't)

    Knowledge (the "why" that lasts):

    • Sampling theory and bias sources
    • Validity and reliability principles
    • How stories land in organizations
    • Ethics and human behavior insights
    • Your industry expertise

    Skills (the "how" that needs upkeep):

    • Building quotas in today's panel interfaces
    • Stitching together panel + social + community samples
    • Setting up platform ad tests (TikTok, Meta, YouTube)
    • Running MaxDiff/TURF in current toolchains
    • Prompting LLMs to analyze verbatims
    • Creating feed-native content from research decks

    Quick test: If it explains why something works regardless of the tool, it's knowledge. If it depends on where you click or what a platform allows, it's a skill that needs regular updates.
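    MaxDiff/TURF from the skills list illustrates the split well: the toolchain UI changes every few years, but the underlying math doesn't. A minimal sketch of the standard greedy TURF heuristic (illustrative respondent data, not any vendor's implementation):

    ```python
    def greedy_turf(accepts, k):
        """Greedy TURF: pick k items maximizing unduplicated reach.

        accepts maps each item to the set of respondent ids who accept it.
        Greedy selection is the common heuristic; exact TURF enumerates
        every k-item combination instead.
        """
        chosen, reached = [], set()
        candidates = dict(accepts)
        for _ in range(k):
            # Pick the item adding the most not-yet-reached respondents.
            best = max(candidates, key=lambda item: len(candidates[item] - reached))
            chosen.append(best)
            reached |= candidates.pop(best)
        return chosen, len(reached)

    # Toy acceptance data: which respondents (ids 1-6) accept each flavor.
    accepts = {
        "vanilla":    {1, 2, 3, 4},
        "chocolate":  {1, 2, 3},
        "strawberry": {5, 6},
    }
    combo, reach = greedy_turf(accepts, 2)
    print(combo, reach)  # ['vanilla', 'strawberry'] reaches all 6
    ```

    Notice that chocolate, the second-most-popular flavor on its own, never makes the lineup: everyone who accepts it already accepts vanilla. That "why" survives every platform migration; only the buttons change.
    
    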

    The Learning Challenge Ahead

    Here's the part that stings a bit for those of us with gray hair: we need to embrace what younger generations naturally understand about rapid tool adoption, while they need what we know about rigor and stakeholder dynamics. It's bi-directional learning, and it's not optional anymore.

    This isn't about age—it's about different mental models. The magic happens when we learn from each other instead of defending our approaches.

    The opportunity: Combine the wisdom of "slow and right" with the necessity of "fast and adaptive." Guard research principles fiercely while staying flexible with the tools that express them.

    The bottom line: Great researchers struggle with new tools not because they lack intelligence or capability, but because they're applying an old mental model (deep tool mastery over long periods) to a new reality (rapid tool evolution and competitive pressure).

    The solution isn't to abandon what made us great—our rigor, standards, and deep craft knowledge. It's to evolve how we learn and adapt.

    Coming next in this series: How to actually make this work with AI tools, and a practical 90-day plan for keeping your skills fresh without losing your sanity.
