When machines know our needs

As tech anticipates rather than reacts, design and intent become acts of trust

Blog · 5-minute read · January 30, 2026

Gary Goldhammer

Marketing Communications Lead, AI & Emerging Tech, PwC United States

Technology has stopped waiting for us to ask

For a long time, Mo Almarhoun, Senior Engineer, PwC United States, didn’t trust the machines.

In his house, the Alexa stayed unplugged. He didn’t like the idea of a device listening in from the corner of the room, quietly collecting fragments of daily life. “I always thought someone, somewhere, could hear everything,” he says. His wife would ask why the speaker wasn’t connected, why she couldn’t just ask it to check the weather or see who was at the door, and he’d tell her it was better that way. Safer.

Almarhoun describes himself as a private person. He doesn’t post much online. He double-checks every privacy setting. For years, that instinct shaped how he thought about technology: the more connected things became, the more he wanted to disconnect.

Then came his first research project at our Emerging Tech R&D lab: a study on the future of human–machine interaction. He started out sceptical, expecting to confirm his worst assumptions about surveillance and overreach. Instead, the research surprised him. The more he learned, the more his perspective began to shift.

He saw that these systems weren’t built to intrude. They were designed to anticipate, to sense when to help and when to stay quiet, and to make interaction feel more natural. Somewhere between his new Apple Watch and the voice assistant in his Tesla, his attitude began to change. “I stopped seeing it as spying,” he says. “I started seeing how it could make life easier, if you let it.”

The rise of anticipatory technology

Almarhoun’s experience mirrors the shift underway in our own work on human–machine interaction. For decades, technology waited for us to act. We learned its language—typing commands, tapping icons, swiping screens—to make it respond. But that era is ending.

Today’s systems are learning to read our signals instead. They sense stress, attention, and intent. They dim the lights when we focus, silence notifications when we’re overwhelmed, and nudge us when our posture sags or our tone shifts. It’s technology that no longer asks for permission because it doesn’t need to.

Our EmTech research team calls this the rise of anticipatory technology. In this new paradigm, machines are not just reactive tools but adaptive partners that learn our patterns and respond accordingly. The goal is to remove friction and restore focus—helping humans work, drive, or live with fewer interruptions and more flow.

It’s an appealing idea, but also a complicated one. When systems begin to act on our behalf, they raise questions about boundaries and consent. What happens when convenience starts to feel like intrusion? How do we design for dignity and control in a world where technology already knows what we need before we ask?

For Almarhoun, the answer lies in how we use it. “If you interact with technology in the right way, it can save you time and make you more efficient,” he says. “But if you use it without intention, it can easily distract you. It’s really about how we adapt to it.”

The fine line between help and dependence

For Almarhoun, the beauty of human–machine interaction lies in its potential to make life simpler, not noisier. The technology itself isn’t the problem—it’s how we choose to engage with it. “We created these tools to help us manage information,” he says. “But instead, they’ve become constant sources of distraction.”

Our research shows how far that problem has spread. On average, Americans check their phones more than 200 times a day. Knowledge workers receive over a hundred emails before lunch. Notifications and alerts chip away at focus until attention itself becomes the scarcest resource.

That overload is what anticipatory technology hopes to fix. By quietly sensing context—where you are, what you’re doing, even how you’re feeling—machines can protect your attention instead of demanding it. They can pause interruptions when stress is high, summarise what matters most, or optimise the environment for deep work.
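To make that pattern concrete, here is a minimal sketch, in Python, of the sense-and-decide loop described above. Every name in it (ContextSignal, AttentionGuard, the thresholds) is a hypothetical illustration of the idea, not any product's actual implementation.

```python
# A minimal sketch of the sense-and-decide loop described above. The class
# names, fields, and thresholds are hypothetical illustrations, not any
# vendor's actual API.
from dataclasses import dataclass


@dataclass
class ContextSignal:
    stress_level: float          # 0.0 (calm) to 1.0 (overwhelmed), e.g. from a wearable
    in_deep_work: bool           # inferred from calendar, typing cadence, or app focus
    notification_urgency: float  # 0.0 (ignorable) to 1.0 (critical)


class AttentionGuard:
    """Defers low-urgency interruptions when the user is stressed or focused."""

    def __init__(self, stress_threshold: float = 0.7):
        self.stress_threshold = stress_threshold
        self.deferred = []  # messages held back for a later summary

    def handle(self, message: str, ctx: ContextSignal) -> str:
        # Critical messages always get through: the user stays in control.
        if ctx.notification_urgency >= 0.9:
            return f"deliver now: {message}"
        # Otherwise protect attention: hold the message for a summary later.
        if ctx.in_deep_work or ctx.stress_level >= self.stress_threshold:
            self.deferred.append(message)
            return "deferred"
        return f"deliver: {message}"

    def summarise_deferred(self) -> str:
        # Surface what was held once the user comes up for air.
        return f"{len(self.deferred)} messages held: " + "; ".join(self.deferred)


guard = AttentionGuard()
busy = ContextSignal(stress_level=0.8, in_deep_work=True, notification_urgency=0.3)
print(guard.handle("Weekly newsletter digest", busy))  # -> deferred
print(guard.summarise_deferred())
```

The key design choice in this toy version is that urgent messages always get through and deferred ones are surfaced later, so the system protects attention without quietly taking decisions away from the user.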

The promise is compelling: a world where our surroundings adapt to us instead of the other way around. But the danger is subtle. The same systems that free us from noise can also condition us to expect constant help. Every small act of automation replaces a small act of awareness. Over time, convenience can turn into dependence, and a helpful machine can start making choices that were once ours.

That tension runs through Almarhoun’s daily life—from Alexa to his Tesla to the Apple Watch that now sits on his wrist. He’s learned to appreciate the time saved and the ease they bring, but he still draws the line at surrendering too much control. “I used to see all this as invasion,” he says. “Now I see it as collaboration—but only when I stay involved.”

Designing for every kind of human

As human–machine interactions become more ambient and embedded in daily life, not everyone will experience them the same way. For some, they’ll feel intuitive and freeing. For others, especially those new to technology or uncomfortable with it, they could feel overwhelming—or even unsafe.

Almarhoun worries about both ends of that spectrum. “There’s no guidance, no restrictions,” he says. “There’s nothing that really says how old you should be to use these devices, or how ready you have to be.” His infant son will grow up knowing how to talk to Alexa. His parents, on the other hand, find even simple digital interactions confusing and intimidating.

The challenge isn’t just about design. It’s about responsibility. Ambient systems are built to anticipate needs, but they don’t yet understand who’s on the other end of the interaction. They can’t tell whether a child or a grandparent is responding, or whether the consent they’re relying on was truly informed.

Our research calls this a gap in contextual awareness—the system’s ability to adapt not just to physical cues like movement or heart rate, but to human differences like age, ability, and digital fluency. Accessibility, equity, and safety aren’t afterthoughts; they’re the foundation of trust. For anticipatory technology to deliver on its promise, it must learn not only what we need, but who we are when we ask.
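Continuing the purely illustrative sketch above, closing that gap might mean letting who the user is, not just physical signals, shape how the system behaves. Again, the profile fields and thresholds here are assumptions for illustration, not an actual standard or API.

```python
# Hypothetical illustration of "contextual awareness": the same anticipatory
# rule behaves differently depending on who is interacting with it.
from dataclasses import dataclass


@dataclass
class UserProfile:
    age: int
    digital_fluency: str        # "novice", "comfortable", or "expert" (assumed scale)
    consented_to_sensing: bool  # was consent informed and explicit?


def interaction_mode(user: UserProfile) -> str:
    # No informed consent, no ambient sensing: the system stays fully reactive.
    if not user.consented_to_sensing:
        return "reactive-only"
    # Children get a restricted mode; as noted above, there is little guidance
    # today on how old a user should be, so this threshold is an assumption.
    if user.age < 13:
        return "supervised"
    # Novice users get explicit confirmations instead of silent automation.
    if user.digital_fluency == "novice":
        return "confirm-before-acting"
    return "anticipatory"


print(interaction_mode(UserProfile(age=70, digital_fluency="novice", consented_to_sensing=True)))
# -> confirm-before-acting
```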

Learning to live wisely with what comes next

Even after months of studying human–machine interaction, Almarhoun hasn’t completely quieted his privacy instincts. He still questions how much these systems should know, or how easily a convenience can cross into exposure. But now, his scepticism is paired with curiosity. He’s no longer trying to keep technology out of his life; he’s learning how to live with it more wisely.

His outlook mirrors a growing truth in our research: the future of human–machine interaction isn’t about erasing fear. It’s about learning from it. Every new capability—from sensors that track stress to cars that remember conversations—forces us to rethink where we draw the line between help and harm. The goal isn’t blind adoption or total rejection. It’s balance.

“I started out thinking technology was invading our lives,” Almarhoun says. “Now I see that it’s up to us to decide how much to let it in.”

That may be the real measure of progress in this next phase of interaction. The smartest systems will be the ones that stay within the limits we set. The rest depends on whether we remember to stay present, even as our machines learn to anticipate our every move.

