For leaders in marketing, communications and other fields who are thinking about what generative AI means for their brand and business, the role of generative AI in creating content is top of mind. But the debate may be missing the point: the supposed dichotomy between human-generated and AI-generated content may be a false one. It ignores several important types of content, as well as the most likely scenarios for how content could develop. It also ignores the most important question of all: how to use generative AI for content responsibly and ethically, so that it fosters trust among content creators and consumers and benefits society.
At one end of the spectrum is purely human-generated content. From prehistoric times, when humans painted on cave walls, to the modern day, when we type on laptops or draw on a digital canvas, we have produced content. We do so to communicate with others, express our feelings, leave a legacy for future generations and for many other reasons.
At the other end of the spectrum is purely AI-generated content. People are excited about the opportunity to scale, customize, personalize and produce content in an economical way. The fear comes from the potential for this technology to generate large quantities of marginally better content, or worse, to undermine trust with content that’s biased or outright false. Underlying both the excitement and fear is the question of believability, especially as purely AI-generated content is increasingly indistinguishable from human-generated content.
What’s arguably more interesting — and often ignored — are the intermediate points where humans and AI will collaborate.
This is where a human uses AI generation tools to create unique content. The initial inspiration can come from either the human or the AI, but there’s co-creation between the human and the technology. For example, I can ask an AI chatbot for provocative blog topics on a subject like “generative AI.” Based on the AI’s suggestions, I might pick one and ask the AI for a few key points. I can take those, refine them, and ask it to expand on each point while continually prompting it to adjust the style, detail and tone of the content. We can extend this co-creation model to all the creative arts, including writing, drawing, painting, composing and movie-making. This type of content will likely feature in use cases that require medium levels of creativity and lower-to-medium volume.
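The back-and-forth described above can be sketched as a growing message history between a human and an assistant. In the minimal sketch below, `chat` is a placeholder standing in for any chat-completion API, and `co_create` is a hypothetical helper name; both are illustrative assumptions, not real library calls.

```python
# Sketch of the human-AI co-creation loop: the human proposes, the model
# responds, and the human steers each subsequent turn (topic -> key points
# -> expansion with style/tone adjustments).

def chat(history: list[dict]) -> str:
    """Placeholder for a real model call; returns a canned reply for
    illustration only. Swap in your provider's client here."""
    return f"[model reply to: {history[-1]['content']}]"

def co_create(topic: str, refinements: list[str]) -> list[dict]:
    """Run one co-creation session, keeping the full message history
    so each refinement builds on everything said before."""
    history = [{"role": "user",
                "content": f"Suggest provocative blog topics about {topic}."}]
    history.append({"role": "assistant", "content": chat(history)})

    # The human picks a direction, then iteratively steers detail and tone.
    for ask in refinements:
        history.append({"role": "user", "content": ask})
        history.append({"role": "assistant", "content": chat(history)})
    return history

session = co_create("generative AI",
                    ["Give me a few key points for the first topic.",
                     "Expand each point; make the tone conversational."])
```

The key design point is that the whole history is passed on every call: the model's later answers stay grounded in the human's earlier choices, which is what makes this collaboration rather than one-shot generation.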
Another type of human-AI collaboration could relegate the human to verifying or modifying AI-generated content, to increase its quality and instill trust. This type of content might be mass-produced and hyper-personalized for a number of use cases, such as news articles describing stock market movements, narratives around regulatory filings or personalized movie trailers. It is likely to be cost-effective for high-volume, low-creativity use cases, though it will need to be deployed within a responsible AI framework for creators and consumers to trust it.
The natural question is, which type of content will likely dominate the internet in the next several years?
Here are three scenarios we can envision based upon what we know and see today.
AI-centric: One extreme scenario is the domination of AI-generated content (including some that is human-validated). This technocratic view of the world misses the intrinsic utility that humans derive from their work and creations. Just because the best chess-playing AI software will always beat me doesn’t mean I’ll lose interest in chess and stop playing. People will continue to write, compose and sing, even if the AI does a better job, because they derive intrinsic pleasure from these activities. More importantly, people will value human-generated content more than AI-generated content once the initial excitement around AI’s capability wanes. Consider, for instance, if soccer-playing robots started performing better than human players: would we stop watching soccer matches?
Human-centric: The other extreme scenario is societal backlash, perhaps even the eventual banning of generative AI tools, in response to a failure to use them responsibly and ethically. This technophobic view of the world runs counter to the history of human evolution, in which we have continuously accepted innovations that further our biological and cognitive development. The access and affordability of content on the internet have not devalued content. On the contrary, they have democratized it, though trust in content has sometimes suffered.
Thirty years ago, the availability of information depended on your proximity to a library and the depth and breadth of its collection. Today, you have a significant proportion of all the information written since the dawn of humanity at your fingertips, and a significant subset of it at zero marginal cost. While search engines democratized the availability of information, generative AI “answer engines” are democratizing the availability of knowledge. As a result, we humans will have to strive to add value and insight to information, not just access it.
Co-creation: A more plausible scenario is somewhere in the middle. Human-AI co-created content will likely make up the largest share of the internet, with a small proportion of highly valued human-generated content and highly creative and/or highly repetitive AI-generated content. This scenario could push humans to add real value and be genuinely more creative, or we might engage in these activities for self-actualization rather than commercial gain. It may also push the people working on AI-generated content to fix its current flaws and improve its trustworthiness.
So what do we need to do so that we don’t end up in either extreme scenario? How can we move toward a world where we can enjoy the entire spectrum of human and AI-generated content? Here are three ideas.
So what is the future of content in the generative AI age? We don’t think that either purely AI-generated or purely human-generated content will dominate. We believe we are in the process of building an exciting and creative AI-augmented human society, with a broad spectrum of co-creation. But we need to act to ensure ethical and responsible practices for generative AI, so that the content it helps create will also generate trust.