What’s the future of content in the generative AI age?

  • The dichotomy between human-generated and AI-generated content is a fallacy.
  • A continuum of four types of content is emerging, and over time it will be difficult to distinguish among them.
  • While we don’t know exactly how the future will play out, there are key areas to focus on to help us get to a complementary relationship between humans and technology.

For leaders in marketing, communications and other fields who are thinking about what generative AI means to their brand and business, the role of generative AI in creating content is top of mind. But the debate may be missing the point: the supposed dichotomy between human-generated and AI-generated content may be a fallacy. It ignores several important types of content, as well as the most likely scenarios for how content could develop. It also ignores the most important question of all: how to use generative AI for content responsibly and ethically, so that it fosters trust between content creators and content consumers and benefits society.

From all-human to all-AI: Four types of content

Purely human-generated content

From prehistoric times when humans painted on cave walls to the modern day where we type on laptops or draw on a digital canvas, we produce content. We do so to communicate with others, express our feelings, leave a legacy for future generations and for many other reasons.

Purely AI-generated content

At the other end of the spectrum is purely AI-generated content. The excitement comes from the opportunity to scale, customize, personalize and produce content economically. The fear comes from the potential for this technology to generate large quantities of mediocre content or, worse, to undermine trust with content that’s biased or outright false. Underlying both the excitement and the fear is the question of believability, especially as purely AI-generated content becomes increasingly indistinguishable from human-generated content.

What’s arguably more interesting — and often ignored — are the intermediate points where humans and AI will collaborate.

Human-generated AI-augmented content

This is where a human uses AI generation tools to create unique content. The initial inspiration can come from either the human or the AI, but there’s genuine co-creation between the human and the technology. For example, I can ask an AI chatbot for provocative blog ideas on a specific topic like “generative AI.” Based on its suggestions, I might pick one and ask the AI for a few key points. I can then refine those points, ask it to expand on each one, and continually prompt it to change the style, details and tone of the content. This co-creation model extends to all types of creative arts, including writing, drawing, painting, composing and movie-making. This type of content will likely feature in use cases that require medium levels of creativity and low to medium volume.

AI-generated, human-validated content

Another type of human-AI collaboration relegates the human to verifying or modifying AI-generated content to increase its quality and instill trust. This type of content might be mass-produced and hyper-personalized for a number of use cases, such as news articles describing stock market movements, narratives around regulatory filings or personalized movie trailers. It is likely to be cost-effective for high-volume, low-creativity use cases, though it will need to be deployed within a responsible AI framework for creators and consumers to trust it.

Chart showing the four types of all-human to all-AI content

What to expect in the next 3 to 5 years

The natural question is, which type of content will likely dominate the internet in the next several years?

Here are three scenarios we can envision based upon what we know and see today.

AI-centric: One extreme scenario is the domination of AI-generated content (including some that is human-validated). This technocratic view of the world misses the intrinsic utility that humans derive from their work or creation. Just because the best chess-playing AI will always beat me doesn’t mean that I’ll lose interest in chess and stop playing. People will continue to write, compose and sing, even if the AI does a better job, because they derive intrinsic pleasure from these activities. More importantly, people will value human-generated content more than AI-generated content once the initial excitement around AI’s capability wanes. Consider, for instance, if soccer robots start performing better than human soccer players. Would we stop watching soccer matches?

Human-centric: The other extreme scenario is societal backlash — maybe even the eventual banning of generative AI tools, in response to a failure to use them responsibly and ethically. This technophobic view of the world ignores the history of human evolution, in which we have continuously accepted innovations to further our own biological and cognitive development. The accessibility and affordability of content on the internet have not devalued content. On the contrary, they have democratized it, though trust in content has sometimes suffered.

Thirty years ago, the availability of information depended on your proximity to a library and the depth and breadth of its collection. Today, you have a significant proportion of all information written since the dawn of humanity at your fingertips, and a significant subset of it at zero marginal cost. While search engines democratized the availability of information, generative AI “answer engines” are democratizing the availability of knowledge. As a result, we humans will have to strive to add value and insights to information, not just access it.

Co-creation: A more plausible scenario is somewhere in the middle. Human-AI co-created content will likely make up the largest share of the internet, alongside a small proportion of highly valued human-generated content and highly creative and/or highly repetitive AI-generated content. This scenario could push humans to add real value and be genuinely more creative, or to engage in these activities for self-actualization rather than commercial gain. It may also push the people working on AI-generated content to fix its current flaws and improve its trustworthiness.

Preparing for the future

So what do we need to do so that we don’t end up in either extreme scenario? How can we move toward a world where we can enjoy the entire spectrum of human and AI-generated content? Here are three ideas.

  • Embrace and envision it. Creators of all kinds should embrace generative AI tools and begin experimenting and envisioning how co-creation might work. This applies to those in creative industries like entertainment, media and the arts as well as traditional industries where these tools can be applied in creative-oriented use cases, such as in marketing.
  • Build it responsibly. The people and companies developing generative AI should work to embed trust by design: making these tools responsible, with attention to both risk minimization and ethical use. Besides the data scientists and AI engineers building these tools, the venture capitalists and businesses funding and marketing generative AI should also focus on responsibility and trust.
  • Use it broadly — to increase trust. The value of AI-generated content goes beyond the creative sector or creative use cases in traditional industries. It offers a real productivity benefit in more mundane generative activities, such as mass-producing or mass-customizing narratives. But confirming that this AI-generated content is trustworthy (accurate, relevant, compliant, bias-reduced and ethical) will be critical. That will require responsible AI practices, including AI-specific governance with an appropriately trained human in the loop to verify content and modify it as needed.

So what is the future of content in the generative AI age? We don’t think that either purely AI-generated or purely human-generated content will dominate. We believe that we are in the process of building an exciting and creative AI-augmented human society, with a broad spectrum of co-creation. But we need to act to ensure ethical and responsible AI practices for generative AI, so that the content it helps create will also generate trust.

Contact us

Anand Rao

Global AI Lead; US Innovation Lead, Emerging Technology Group, Boston, PwC US


Bret Greenstein

Data and Analytics Partner, PwC US


Matt Labovich

Analytics Insights Leader, PwC US


Derek Baker

Principal, Marketing Transformation, PwC US

