As geopolitical risk rises, so too does the cost of misplaced trust. A more volatile external environment is amplifying cyber threats in the Middle East, increasing the sophistication of attacks and raising the stakes for organisations that rely on digital systems to operate at speed. What was once a technical concern is now a broader resilience issue, with the potential to disrupt operations, erode confidence and expose critical vulnerabilities at scale. In this context, trust can no longer be assumed; it must be actively built into how controls are designed and operated, making it a central question of resilience.
This challenge has intensified as generative artificial intelligence (GenAI) reshapes the threat landscape. It is changing the economics of cyber deception, allowing attackers to prepare campaigns faster, tailor them more precisely and present them with greater credibility. The issue is not just new tools, but the removal of traditional barriers to attack, such as time, effort and believability, while organisations still rely on outdated trust models.
Earlier attacks were constrained by manual effort and specialist capability. Reconnaissance took longer, phishing was often generic and malicious content was easier to spot. Automation widened the reach of these attacks. GenAI has now accelerated them, reducing the time and cost required to produce persuasive messages, credible impersonation attempts and adaptive attack preparation at scale.
This acceleration is part of a broader structural shift. Recent industry analysis shows that AI is not only increasing the volume of vulnerabilities discovered, but also collapsing the time between discovery and exploitation, from weeks to hours, giving attackers a persistent speed advantage.1
This shift is not theoretical. The UK’s National Cyber Security Centre highlights that advanced ‘frontier AI’ models are already accelerating key stages of cyber operations, reducing the time, cost and expertise required, while enabling attacks to scale more easily. Organisations should assume these capabilities are already in use by adversaries and prepare accordingly.2
In practice, this includes the use of large language models embedded in offensive toolchains to automate reconnaissance, generate tailored payloads and iterate attack scenarios with minimal human input.
In some cases, AI systems are now capable of autonomously identifying vulnerabilities and generating working exploits without human guidance, signalling a step-change in how cyber-attacks are developed and deployed.3
That raises the pressure on defenders. Activity that once required sustained effort from experienced operators can now be assembled quickly, refined rapidly and deployed across a wider set of targets. The warning signs many organisations still depend on are becoming less visible. Poor grammar, unusual phrasing and obvious inconsistencies are no longer reliable indicators of fraud. In their place are messages, calls and requests that look credible enough to move straight into routine decision-making.
The challenge is no longer only identifying content that appears malicious. It is recognising when something that appears legitimate still needs to be challenged.
Large language models can generate phishing messages that are grammatically sound, contextually relevant and tailored to specific roles, transactions and internal priorities. Voice cloning and deepfake tools add another layer of pressure, making impersonation attempts feel familiar, urgent and entirely plausible within the normal flow of business. When those techniques are combined with publicly available information from social media, corporate websites and professional networks, attackers can build narratives that mirror real reporting lines, commercial activity and executive intent. In effect, these models ingest open-source intelligence (OSINT), such as organisational structures, recent transactions and public communications, to generate context-aware lures that align with real business activity.
In the GCC, governments are expanding digital service delivery. Businesses are embedding AI more deeply into operations. Many organisations are also managing high-value transactions, compressed approval cycles and complex ecosystems of third parties, platforms and service providers. These shifts are creating real value. They are also increasing the cost of misplaced trust.
Cyber resilience, therefore, depends not only on defending systems, but on redesigning how organisations verify identity, challenge instruction and authorise action.
Many organisations have not yet evolved their control environments and continue to rely on assumptions shaped by an earlier threat period, when phishing was cruder, trusted channels were treated as inherently reliable, and periodic awareness exercises were seen as sufficient. As AI-enabled deceptions become more sophisticated, the focus must shift from detection alone to verification.
This shift reflects a deeper reality: traditional risk models, detection approaches and response processes were designed for slower, human-paced attacks, and are increasingly misaligned with machine-speed threats.4
High-risk actions, such as payment detail changes, financial transfers, privileged access requests and sensitive data disclosures, should be subject to formalised, out-of-band checks that do not rely on a single email, message or video interaction, however convincing it may appear.
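To make this concrete, the sketch below shows one way such an out-of-band gate could be expressed in code. It is a minimal, illustrative Python example rather than a reference implementation: the action types, channel names and the verify_out_of_band callback are assumptions chosen for illustration, and any real control would sit inside an organisation's existing approval workflow.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical set of high-risk action types that trigger out-of-band checks.
HIGH_RISK_ACTIONS = {
    "payment_detail_change",
    "financial_transfer",
    "privileged_access_request",
    "sensitive_data_disclosure",
}

@dataclass
class ActionRequest:
    action_type: str
    requested_by: str
    origin_channel: str  # channel the instruction arrived on, e.g. "email"

def authorise(request: ActionRequest,
              verify_out_of_band: Callable[[ActionRequest], bool],
              verification_channel: str) -> bool:
    """Approve a high-risk action only after confirmation on a channel
    different from the one that carried the original instruction."""
    if request.action_type not in HIGH_RISK_ACTIONS:
        return True  # routine actions follow normal approval flows
    if verification_channel == request.origin_channel:
        return False  # same-channel confirmation defeats the control
    return verify_out_of_band(request)

# Example: a payment detail change requested over email is held until a
# call-back to a number already on file confirms it.
request = ActionRequest("payment_detail_change", "supplier_x", "email")
approved = authorise(request,
                     verify_out_of_band=lambda r: False,  # stub: not yet confirmed
                     verification_channel="phone_callback")
print(approved)  # False - the change stays blocked until verified
```

The design point is the channel check: a confirmation that arrives on the same channel as the instruction adds little assurance, because that channel may itself be the one an attacker controls.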
Detection approaches must also evolve. This includes shifting towards behavioural analytics, identity-based detection and user and entity behaviour analytics (UEBA), which can identify subtle deviations in access patterns, transaction flows and decision-making sequences. When attackers are better able to mimic normal activity, defenders need to focus less on static indicators and more on behaviour, identity misuse and anomalies in decision-making. Unusual timing, unusual combinations of approvals, unexpected access patterns and deviations from established workflows may now reveal more than the message or attachment itself. This requires closer alignment between cyber teams, fraud teams, identity specialists and business control owners, because the signals of compromise increasingly sit across operational as well as technical domains.
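As a simplified illustration of the behavioural angle, the sketch below flags actions that fall far outside a user's historical activity pattern. The feature (hour of day), the z-score test and the threshold are assumptions chosen for brevity; production UEBA platforms model far richer signals across identities, devices and workflows.

```python
import statistics

def build_baseline(event_hours: list[int]) -> tuple[float, float]:
    """Summarise a user's historical activity timing (hour of day)."""
    return statistics.mean(event_hours), statistics.stdev(event_hours)

def is_anomalous(hour: int, baseline: tuple[float, float],
                 threshold: float = 3.0) -> bool:
    """Flag events far outside the user's usual pattern.
    A z-score threshold of 3 is an illustrative choice, not a standard."""
    mean, stdev = baseline
    if stdev == 0:
        return hour != mean
    return abs(hour - mean) / stdev > threshold

# Example: a finance approver who normally works daytime hours signs off
# a transfer at 03:00 - worth routing into the out-of-band checks above.
history = [9, 10, 11, 14, 15, 16, 10, 13, 9, 17]
baseline = build_baseline(history)
print(is_anomalous(3, baseline))   # True - unusual timing
print(is_anomalous(11, baseline))  # False - within the normal pattern
```

A signal like this is rarely conclusive on its own; its value comes from combining it with identity, transaction and approval-flow context shared across cyber, fraud and business control teams.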
Testing must also reflect the threat environment. Organisations should be testing the scenarios they are now more likely to face – persuasive impersonation, authority-led urgency, identity compromise and attacks that move quickly across functions rather than staying neatly within technical silos.
Leading security guidance increasingly emphasises that organisations should prepare for a higher frequency of simultaneous, high-impact incidents, driven by AI-enabled attack scale and automation.5
The objective is not only to test whether employees hesitate. It is to test whether controls, escalation paths and decision rights hold up when the signal is ambiguous and the pressure feels real.
For leaders today, this is no longer a question of whether cyber controls exist on paper. It is whether those controls reflect how deception now works in practice. GenAI is not creating a separate category of cyber risk. It is intensifying existing risks by making deception easier to scale, harder to challenge and quicker to operationalise.
The next phase of cyber resilience will depend on whether organisations can adapt trust itself. Those that continue to rely on familiar signals, informal assumptions and legacy approval logic will become easier to exploit.
In an environment where cyber deception increasingly looks legitimate, resilience will be defined by the discipline to question what appears genuine, before it leads to loss.
Waad Albayyali: Senior Manager, Cybersecurity, PwC Middle East
Esha Nag: Manager, Lead Editor and Writer - Thought leadership, insights and reports, PwC Middle East