“The medium is the message.”
Marshall McLuhan

As McLuhan understood, technologies reshape not just what we can do, but how we think about doing it. When a new technology enters the mainstream, it brings with it not just new capabilities, but new patterns of interaction: ways of thinking, working, and making sense of what’s possible. Some of those patterns endure. Others are transitional: scaffolding that helps us get from one stage to the next.
Prompting has played that role in the early stages of language model adoption. It emerged as both an interface and a skillset, offering a way to bridge the gap between open-ended models and human intent. In the absence of structure, we learned to create our own: writing prompts that resembled mini-programs, layering clarity, tone, and goals into a few lines of carefully constructed text. It worked, but only for those who knew how to do it.
But prompting, as a primary interaction model, reflects a particular moment in the evolution of these systems. It belongs to an era when the model knew very little beyond what was placed directly in front of it, when context was thin, and memory (if it existed at all) was short-lived. In that environment, the burden was placed almost entirely on the user to be precise, complete, and imaginative.
That’s beginning to change. Not because prompting no longer works, but because the system is starting to carry more of the weight. Context now stretches across sessions. Models are increasingly grounded in external sources of truth. Tools and APIs provide structured access to knowledge and actions. And intent detection, once brittle, is growing more robust. All of this shifts the balance between what the user must supply and what the system can infer.
The result isn’t the disappearance of prompting, but its gradual reframing. And that transition, subtle as it may seem, opens the door to a much broader and more inclusive range of uses.
This progression is easiest to see in everyday use. Take a simple task: analysing last quarter's sales data. Even a few months ago, this required a paragraph of context: “You are a business analyst. Here is our sales data in CSV format. Please calculate year-over-year growth...” Today, you might simply ask: “How did we do last quarter?” The system already knows who you are, can access your data, and remembers what metrics you care about from previous conversations.
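To make the contrast concrete, here is a minimal sketch in Python. The names involved (`complete`, `load_sales_csv`, `session.ask`) are hypothetical stand-ins rather than any particular vendor's API; the point is simply how much context the caller has to assemble in each era.

```python
# A hypothetical sketch of the same request in both eras.
# `complete`, `load_sales_csv`, and `session` are illustrative stand-ins,
# not a particular vendor's API.

def ask_the_old_way(complete, load_sales_csv):
    """Early-era prompting: the user supplies role, data, and task inline."""
    sales_csv = load_sales_csv("q3_sales.csv")
    prompt = (
        "You are a business analyst.\n"
        "Here is our sales data in CSV format:\n"
        f"{sales_csv}\n"
        "Please calculate year-over-year growth and summarise the result."
    )
    return complete(prompt)


def ask_the_new_way(session):
    """Context-rich era: identity, data access, and preferred metrics are
    already attached to the session, so the prompt is just a pointer."""
    return session.ask("How did we do last quarter?")
```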
This isn’t a rejection of what came before. It’s a continuation, another point on a trajectory that has always moved toward interfaces that ask less, understand more, and quietly adapt to the people using them.
The move away from explicit prompting isn’t the result of a single breakthrough. It’s the outcome of several quiet but compounding changes in how these systems are architected and deployed. Together, they reduce the amount of work required from the user, not by asking less of the system, but by changing how the system understands what it’s being asked to do.
Perhaps the most significant change is the expansion of context. In the earliest language models, every interaction began from a blank slate. Whatever understanding the model had was confined to a single prompt and the static parameters of its training. Today, context stretches further. Models retain memory across sessions. They reference prior exchanges, retrieve relevant data from external sources, and incorporate tool outputs into their reasoning. This broader context window means that the user no longer needs to re-establish the conversation every time. You can pick up where you left off, just as you would with a colleague.
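One hedged illustration of what "picking up where you left off" can mean in practice: a small store that persists prior exchanges so they can be replayed into the model's context next session. `SessionStore` is a hypothetical component, not a description of how any specific product implements memory.

```python
import json
from pathlib import Path

# Hypothetical sketch: persist prior exchanges so the next session starts
# with context instead of a blank slate.

class SessionStore:
    def __init__(self, path="session.json"):
        self.path = Path(path)

    def load(self):
        """Return earlier exchanges, ready to be replayed as context."""
        if self.path.exists():
            return json.loads(self.path.read_text())
        return []

    def append(self, role, content):
        """Record one turn so a future session can pick up where this one left off."""
        history = self.load()
        history.append({"role": role, "content": content})
        self.path.write_text(json.dumps(history, indent=2))
```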
This development is also deeply tied to the emergence of structured grounding. Rather than relying purely on general training data, models now frequently draw from curated sources: search results, documents, APIs, and internal tools. These connections allow the model to respond not just based on patterns, but based on facts, anchoring responses in specific, verifiable inputs. That means the user’s prompt doesn’t need to carry all the necessary information. It can simply point the model in the right direction.
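As a rough sketch of "pointing the model in the right direction", the snippet below retrieves a few relevant passages and asks the model to answer from them. `search_index` and `complete` are assumed interfaces; real grounding pipelines differ in how they chunk, score, and cite sources.

```python
# Minimal grounding sketch (assumed interfaces, not a specific framework):
# fetch supporting passages, then anchor the answer to those sources.

def grounded_answer(question, search_index, complete, k=3):
    # The user's question only points the way; the facts come from retrieval.
    passages = search_index.search(question, top_k=k)

    sources = "\n\n".join(f"[{i + 1}] {p.text}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the numbered sources below, "
        "and cite them by number.\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}"
    )
    return complete(prompt)
```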
The third transformation is in intent recognition. Early models could be powerful but brittle, hyper-literal in some cases, distractible in others. Today’s models are better at inferring what you’re likely trying to do and filling in the blanks with appropriate defaults. It’s not perfect, but it’s improving rapidly. And as that capacity strengthens, the need for the user to spell everything out declines.
What we’re seeing isn’t just smarter models; it’s a quiet reallocation of effort. The system is being asked to do more. The user, by design, is asked to do less. The prompt still exists, but it no longer carries the full weight of the interaction. It’s more like a pointer than a payload: enough to set the direction, but not to define the path.
Of course, this transition isn't uniform. Complex creative tasks, specialised domains, and novel requests still benefit from careful prompting. But for everyday interactions (the queries and tasks that make up most usage), the burden is increasingly moving from user to system.
That realignment isn’t always visible, but it’s foundational. It changes what the system needs to be good at. And it changes what the user needs to do to be successful.
On the surface, these changes can seem subtle. Less prompting. More memory. A bit more helpfulness in how the model interprets intent. But taken together, they mark a deeper evolution, one with wide-ranging implications for accessibility, adoption, and system design.
First, there’s the matter of access. When success with AI depends on knowing how to phrase things “just right,” the circle of effective users stays narrow. As prompting recedes, the interface becomes more forgiving. You don’t have to be clever. You don’t have to be precise. You just have to show up and ask. That lowers the barrier to entry, not just technically, but psychologically.
It also opens up new patterns of use. Interactions that once felt like single transactions start to behave more like ongoing relationships. You don’t need to reestablish context each time. You can build on what’s already there. That continuity allows for more compound, cumulative work and makes it easier to embed AI into the rhythm of existing workflows.
For organisations, this prompts a rethinking of how systems are evaluated and built. If prompting is no longer the bottleneck, then the emphasis moves to what surrounds the model: memory, orchestration, tool routing, knowledge integration. These aren’t peripheral concerns; they’re the new core. Designing the environment around the model becomes just as important as tuning the model itself.
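To give a sense of what designing that environment might involve, here is a deliberately simplified orchestration sketch: memory, tool routing, and the model call sit in one loop. Every name here (`Orchestrator`, `memory.recall`, the keyword-based `route`) is an illustrative assumption, not a reference implementation.

```python
# Illustrative orchestration sketch: the model is one component among
# memory, routing, and tools. All interfaces here are assumptions.

class Orchestrator:
    def __init__(self, model, memory, tools):
        self.model = model    # callable: prompt -> text
        self.memory = memory  # object with recall(msg) and store(msg, reply)
        self.tools = tools    # dict mapping tool name -> callable

    def handle(self, user_message):
        history = self.memory.recall(user_message)   # context from prior turns
        tool_name = self.route(user_message)         # decide whether a tool helps
        tool_output = self.tools[tool_name](user_message) if tool_name else ""

        prompt = (
            f"Context:\n{history}\n\n"
            f"Tool output:\n{tool_output}\n\n"
            f"User: {user_message}"
        )
        reply = self.model(prompt)
        self.memory.store(user_message, reply)       # keep state for next time
        return reply

    def route(self, user_message):
        # Naive keyword routing, standing in for real intent detection.
        for name in self.tools:
            if name in user_message.lower():
                return name
        return None
```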
This transition also matters because it makes room for a different kind of scale. Systems that rely on carefully crafted input don’t scale well across a diverse user base. In fact, strong prompting requirements often work against broad adoption. Systems that learn from usage, that hold state, that retrieve and route intelligently, those can be used broadly without specialised instruction. They’re more robust to variation. More resilient to ambiguity. More aligned with how people actually work.
This movement points toward a future where AI isn't just more capable, but more equitable: when grandparents can get the same results as prompt engineers, when non-native speakers aren't penalised for their phrasing, and when expertise means knowing your domain rather than knowing how to talk to the model. That's when these tools achieve their real promise.
This is how AI becomes something more than a tool to be mastered. It becomes an environment to step into: one that adapts to you, not the other way around.