What is your data strategy?
Your first thought might be around collecting first-party data, performance metrics, or metadata for targeted content. But that’s not what I’m getting at.
Instead, I ask: What is your strategy for shaping thought leadership, crafting stories, building your brand message, and delivering sales materials that captivate and persuade?
“Wait a minute,” you might say. “Isn’t this a content strategy?”
Well, yes. But also no.
Is it content or data?
Generative AI is blurring the boundaries between content and data.
When you think about your articles, podcasts, and videos, you probably don’t think of them as “data.” But AI providers do.
AI vendors don’t talk about their models learning from “interesting content” or “well-crafted stories.” Instead, they talk about accessing and processing “data” (text, images, audio, and video), and they use the term “training data” to refer to the datasets their models rely on for development and learning.
This perspective is not wrong – it has its roots in the history of search engines, where patterns and frequency determined relevance and search engine “indices” were just big buckets of unstructured files and text (i.e. data).
Nobody ever pretended that search engines understood the meaning in their giant bucket of every content type imaginable. It seemed appropriate to reduce it all to “data.”
But AI companies now attribute understanding and intuition to this data. They claim to have all this information and the ability to rearrange it and guess the best answer.
But let’s be clear: AI doesn’t understand. It predicts.
It generates the most likely next word or image – structured information without intent or meaning. Meaning is and remains a human construct that results from the intentionality of communication.
Fight for meaning
This difference underpins the growing tension between content creators and AI providers.
AI providers argue that the Internet is a vast pool of publicly accessible data — available to both machines and humans — and that their tools help create deeper meaning.
Content creators argue that people learn from content that is imbued with intent, but AI simply steals their work and rearranges it without regard for the original meaning.
Interestingly, the conflict arises over something they both agree on: that the machine determines meaning.
But it doesn’t.
The Internet provides data (content) to AI, but only humans can make sense of it.
And that makes the distinction between content and data more important than ever.
What’s the difference?
A recent study found that consumers show less positive word of mouth and loyalty when they believe emotional content was created by AI rather than a human.
Interestingly, this study did not focus on whether participants could tell whether the content was AI-generated. Instead, the same content was presented to two groups: one was told it was created by a human (the control group), while the other was told it was created by AI.
The conclusion of the study: “Companies must carefully consider whether and how they disclose AI-authored communications.”
Spoiler alert: no one will.
In another study, however, researchers tested whether people could distinguish between AI-generated and human-generated content. Participants correctly identified AI-generated text only 53% of the time – barely better than random guessing, which would achieve 50% accuracy.
Spoiler alert: No, we can’t.
We are programmed to misunderstand this
In 2008, science historian Michael Shermer coined the word “patternicity.” In his book “The Believing Brain,” he defines the term as “the tendency to find meaningful patterns in both meaningful and meaningless noise.”
He said that people tend to “infuse these patterns with meaning, intention and agency” and called this phenomenon “agenticity.”
So as humans we are predisposed to make two types of mistakes:
- Type 1 error, the false positive: we see a pattern that doesn’t exist.
- Type 2 error, the false negative: we miss a pattern that does exist.
When it comes to generative AI, humans are at risk of making both types of mistakes.
AI vendors, along with people’s tendency to humanize technology, set people up for Type 1 errors. That’s why these solutions are marketed as “copilots,” “assistants,” “researchers,” or “creative partners.”
A data-driven content mindset leads marketers to chase patterns of success that may not exist. They risk confusing quick first drafts with agile content without questioning whether the drafts provide real value or differentiation.
AI-generated “strategies” and “research” appear credible simply because they are clearly written (and the providers claim that the technology unlocks deeper knowledge than humans possess).
Many people equate these quick answers with accuracy and overlook the fact that the system only outputs what it has ingested – truthful or not.
And here lies the irony: our awareness of these risks could lead to Type 2 errors and prevent us from realizing the benefits of generative AI tools. We might miss patterns that actually exist. For example, if we accept that AI always produces average or “not entirely accurate” content, we will not see the pattern that shows how good AI is at solving complex processing challenges.
As the technology improves, we risk settling for “good enough” – both in ourselves and in the tools we use.
CMI’s recent research highlights this trend. In the study “Career Outlook 2025 for Content and Marketing,” the most commonly cited use for AI among marketers is “brainstorming new topics.” However, the next five most common answers – each cited by over 30% of respondents – focused on tasks such as summarizing content, writing drafts, optimizing posts, writing email copy, and creating social media content.
However, the CMI study “Benchmarks, Budgets and Trends for B2B Content Marketing” shows a growing reluctance toward AI: 35% of marketers cite accuracy as their biggest concern with generative AI.
While most respondents report only a “medium” level of trust in the technology, 61% still rate the quality of AI-generated content as excellent (3%), very good (14%), or good (44%). Another 35% rated it mediocre and 4% rated it poor.
So we use these tools to create content we find satisfactory, but are unsure of their accuracy and have moderate confidence in the results.
This approach to generative AI shows that marketers are inclined to use it to produce transactional content at scale. Instead of delivering on the promise that AI will “unleash our creativity,” marketers risk letting it crowd creativity out.
Look for better questions rather than quicker answers
The essence of modern marketing is part data, part content – and a lot of understanding and meaning for our customers. It’s about uncovering their dreams, fears, longings and desires – the invisible threads that lead them forward.
To paraphrase my marketing hero Philip Kotler: Modern marketing is not just about share of mind or share of heart. It’s about share of spirit – something that goes beyond narrow self-interest.
How can we modern marketers balance all of these things and deepen the meaning of our communications?
First, recognize that the content people create today becomes the dataset that defines us tomorrow. Regardless of how it is created, our content carries inherent biases and values.
For AI-generated content to provide value beyond the data that already exists, move beyond the idea of simply using the technology to increase the speed or volume of creation of words, images, audio, and video.
Instead, use it as a tool to improve the ongoing process of generating meaningful insights and cultivating deeper relationships with our customers.
If generative AI is to become more effective over time, it will require more than technological sophistication – it requires that people grow. People need to become more creative, empathetic, and wise so that neither the technology nor the people who use it drift into meaninglessness.
Our teams will need more, not fewer, roles that can extract valuable insights from AI-generated content and convert them into meaningful ideas.
The people who fill these roles don’t necessarily have to be journalists or designers. But they must be able to ask thoughtful questions, engage with customers and influencers, and transform raw information into meaningful insights through listening, conversation, and synthesis.
The qualities required are similar to those of artists, journalists, talented researchers or subject matter experts. Perhaps this could even be the next evolution of the role of the influencer.
The path ahead is not yet fully charted.
One thing is clear: If generative AI is to be more than a distracting novelty, companies need a new role – a meaning manager – to guide the way AI-driven ideas are transformed into actual value.
It’s your story. Say it well.
Cover image by Joseph Kalinowski/Content Marketing Institute