Can LLMs Out-Think the Best Trend Researchers?
What I’ve Learned Building an Automated Trends System
Can ChatGPT close the nuance gap? Can it do what (I think) I do? These are questions I keep circling back to - especially now that I’ve been tasked by AI startup WALDO to build an end-to-end, automated trends research process. For years, my job as a trends researcher has been about chasing early signals, interviewing the people at the edge, and turning scattered data points into actionable insight. Now the challenge is: can large language models (LLMs) like ChatGPT really replicate - or even outpace - the nuanced, context-driven work of an experienced human trend researcher?
Since the beginning of 2025, I’ve been running a series of experiments, researching and publishing trend reports that compare my own process (let’s call it the human/PSFK approach) with what you get when you put AI to work on the same data. The result? A definite “nuance gap” between my work and the machine’s (more on nuance gaps in this article here). More interestingly, it’s got me thinking deeply about two things:
How do human researchers really create value in an AI-driven world?
How do we design better, more systematic, and genuinely creative AI trends systems—to get to new ideas and insights faster, and at scale?
Below, I’ll share a peek behind the scenes of the system I’m building for tech firms like WALDO - one that blends the best of both worlds - along with my latest thinking on what it means for anyone working in trends, foresight, or innovation.
Comment on the topics in this newsletter on the LinkedIn version
Beyond Headlines: How Modern Trend Research Actually Works
Let’s clear up a misconception right away: Automated trend research isn’t just “AI search + spreadsheet.” The reality - at least in my workflow - is much more layered, with a clear split between data gathering and analysis, and multiple rounds of prompt engineering, tagging, and editorial QA.
Step 1: Building a Living Database of Signals
The backbone of my system isn’t a Google search. It’s a living, constantly updated database of early signals: launches, pilots, partnerships, patents, product drops, and company moves. I use Airtable for this, but other trends leaders like Dan Gould use Notion.
The signals come from hundreds of sources—RSS feeds, curated newsletters, web scrapers, social APIs, market databases, and direct submissions.
Both automated agents and human researchers are involved: tagging, categorizing, and filtering each new entry to make sure we’re capturing what’s emerging, not just what’s loudest or most recent.
Every signal is tagged by industry, sector, tech, company type, and often by customer journey, audience, or innovation theme. The aim is to make it easy to take a slice of the data by category later, so an LLM’s work can be focused on a narrow, relevant subset rather than the whole database.
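To make that concrete, here’s a minimal sketch of what a tagged signal record and a category slice might look like. The field names and the slice_by_tag helper are illustrative assumptions, not the actual Airtable schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical signal record - field names are illustrative, not the real Airtable schema.
@dataclass
class Signal:
    title: str                                          # the launch, pilot, patent, or partnership
    source: str                                         # RSS feed, newsletter, scraper, API, or submission
    published: date
    industry: list[str] = field(default_factory=list)   # e.g. ["food tech"]
    tech: list[str] = field(default_factory=list)
    company_type: str = ""
    journey_stage: str = ""                              # customer journey, audience, or innovation theme
    status: str = "new"                                  # new -> tagged -> clustered -> published / watchlist

# Slicing by category keeps a later LLM pass focused on a narrow, relevant subset.
def slice_by_tag(signals: list[Signal], industry: str) -> list[Signal]:
    return [s for s in signals if industry in s.industry]
```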
Step 2: Prompts & Process - Data Gathering & Trends Analysis
This is where the system gets really powerful. There are really two sets of prompt workflows:
A. Data Gathering Prompts: These are designed to scan, filter, tag, and structure raw inputs as they come in. They surface the “weak signals” that might later become trends: Which launches, pilots, or pivots are truly new? Are we missing signals in a fast-moving sector? Is this an actual case study or just another PR puff piece?
B. Trends Analysis Prompts: This is where the magic (and the risk) happens.
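To show how distinct the two workflows are, here’s a minimal sketch of the split. The prompt wording, the field names, and the call_llm() helper are my own stand-ins for the shape of the pipeline, not the production prompts:

```python
# Two separate prompt workflows - the wording and call_llm() are hypothetical stand-ins.

GATHERING_PROMPT = """You are screening a raw item for a trends signal database.
Return JSON with:
- is_new: is this a genuinely new launch, pilot, or pivot, or recycled PR?
- tags: industry, sector, technology, company type
- summary: one plain sentence describing what actually happened, no hype language.
Item: {item}"""

ANALYSIS_PROMPT = """You are clustering tagged signals into candidate trends.
Using ONLY the signals provided, propose clusters where each cluster:
- is supported by at least {min_signals} signals from different companies
- states what is new and why it is happening now
- has a specific, literal name (no metaphors, no buzzwords).
Signals: {signals}"""

def call_llm(prompt: str) -> dict:
    """Placeholder for whichever LLM client the system uses."""
    raise NotImplementedError("wire this up to your provider")

def gather(item: str) -> dict:
    # Stage A: filter, tag, and structure one raw input as it arrives.
    return call_llm(GATHERING_PROMPT.format(item=item))

def analyse(signals: list[dict], min_signals: int = 3) -> dict:
    # Stage B: cluster and narrate - runs only on already-tagged, already-filtered signals.
    return call_llm(ANALYSIS_PROMPT.format(signals=signals, min_signals=min_signals))
```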
Between the Two: Tagging, Rewriting, & Feedback Loops
The flow isn’t just linear. Tagging and recategorization happen as data moves through the system. If a signal starts as “food tech” and gets reclassified as “AI in food service” or “hyper-local retail,” that’s intentional. Feedback from the analysis side can inform what to look for in the next round of data gathering.
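Here’s a small sketch of what that reclassification step could look like. The record shape is a simplified stand-in for an Airtable row, and keeping the old tag alongside the new one is my own assumption about how to preserve traceability:

```python
# Illustrative feedback-driven recategorization - the record shape is a simplified
# stand-in for a database row, and the tag values are the example from the text.
def retag(record: dict, new_tag: str, reason: str) -> dict:
    record.setdefault("industry", []).append(new_tag)                 # keep the old tag too
    record.setdefault("history", []).append({"to": new_tag, "why": reason})
    return record

row = {"title": "Robot-made ramen pilot", "industry": ["food tech"]}
retag(row, "AI in food service", "analysis stage clustered it with kitchen automation signals")
```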
What AI Still Gets Wrong (Unless You Fix It)
Here’s what I’ve seen repeatedly:
AI over-filters or under-filters. If you don’t instruct it, it either floods your database with hype (“AI revolutionizes everything!”) or misses the weak signals entirely.
AI loves “trend inflation.” Give it enough data, and it will cluster by the shallowest pattern (shared keyword, surface similarity). Result: reports full of “Sustainability,” “AI,” “Experience Economy”—with little substance.
Context and “why now” go missing. Without clear prompt logic, LLMs skip the hard work of asking, What’s new about this? Why here, why now, why this way? That’s where human researchers shine—and where prompt design needs to evolve.
Naming is lazy unless forced. “Digital Renaissance,” “Frontiers of Wellness,” “Luxury Reimagined.” These sound big but mean little. If you want titles that mean something to an executive, you need to ban metaphor and demand specificity.
Typically, I just use a human (me) to edit out these mistakes, but I appreciate the challenge Justin Wohlstadter has set me for my work with WALDO: build prompts and processes that really do fix them.
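One way to start meeting that challenge is to make the failure modes above checkable rather than just discouraged. A minimal sketch - the banned terms and required fields here are purely illustrative, not a definitive list:

```python
# Illustrative quality gate for AI-drafted trend write-ups.
BANNED_NAME_TERMS = ("renaissance", "reimagined", "frontier", "revolution", "unleashed")

def passes_naming_check(title: str) -> bool:
    # Ban metaphor-heavy titles; demand names an executive could act on.
    lowered = title.lower()
    return not any(term in lowered for term in BANNED_NAME_TERMS)

def passes_why_now_check(draft: dict) -> bool:
    # Force the analysis stage to state what changed and why it is happening now.
    return bool(draft.get("what_is_new")) and bool(draft.get("why_now"))

draft = {"title": "Luxury Reimagined", "what_is_new": "", "why_now": ""}
if not (passes_naming_check(draft["title"]) and passes_why_now_check(draft)):
    print("Send the draft back to the analysis prompt with stricter instructions.")
```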
What Human Researchers Must Do Differently—And How to Build Better Trend Systems
1. Obsess about your sources, and tag everything. If you want to build an AI system that’s more than a headline chaser, start with a database that’s broad, deep, and carefully tagged for future pattern recognition. Treat every new signal as a possible 'trend brick'—not a finished idea. (Ooh, I like trend brick - the LLM editing this piece came up with that term, not me).
2. Separate the data gathering and analysis pipelines. Build one set of prompts, agents, and QA for surfacing, filtering, and tagging weak signals. Build another for clustering, contextualizing, and narrating trends. Don’t conflate the two.
3. Ban lazy clustering and buzzword naming. Set rules that require trends to be supported by multiple, diverse, recent signals - and to be named in a way that would make sense as a board report chapter, not a marketing conference keynote. (A sketch of what these rules could look like follows this list.)
4. Build in editorial and agentic feedback. Allow your system to re-tag, re-classify, and surface outliers. Create feedback loops so new signals can reshape trend clusters over time.
5. Always allow for the “standalone weak signal.” If your system tries to force every signal into a trend, you’ll miss the real edge. Allow for “not yet a trend, but worth watching.”
6. QA for value at every step. Use a human editor or a final AI pass to strip out metaphor, generic hype, and vague “trend language.” Demand clarity, evidence, and a real story.
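For points 3 and 5 above, here’s a minimal sketch of how such rules could be made mechanical. The thresholds - three recent signals from three different companies within six months - are illustrative assumptions, not a recommendation:

```python
from datetime import date, timedelta

# Illustrative cluster rules for points 3 and 5 above. The thresholds are assumptions.
def classify_cluster(signals: list[dict], today: date) -> str:
    recent = [s for s in signals if today - s["published"] <= timedelta(days=180)]
    companies = {s["company"] for s in recent}
    if len(recent) >= 3 and len(companies) >= 3:
        return "trend"                                    # diverse, recent, multiple signals
    if recent:
        return "not yet a trend, but worth watching"      # the standalone weak signal
    return "archive"                                      # stale: keep in the database, not the report

cluster = [
    {"company": "Acme Foods", "published": date(2025, 4, 2)},
    {"company": "Bistro AI", "published": date(2025, 5, 18)},
]
print(classify_cluster(cluster, today=date(2025, 6, 30)))  # -> "not yet a trend, but worth watching"
```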
Why It Matters
I’m convinced that the future of trends work isn’t human vs. machine, but human with machine—provided we architect our systems for nuance, context, and defensibility. When clients or stakeholders ask “Why is this a trend?” you can point to the signals, the examples, the context, and the narrative—not just the volume of headlines.
If you’re only using LLMs to summarize a Google folder of trend reports, for example, you’ll get what everyone else gets. If you invest in a real system—one that values tagging, QA, multi-stage prompts, and editorial feedback—you’ll actually see (and act on) what’s coming next.
If you’d like to know more about the AI trends systems I’m building - or an introduction to WALDO - drop me a line: piers@psfk.com
Comment on the topics in this newsletter on the LinkedIn version