Has AI reduced human risk in cybersecurity, or multiplied it?

Blog Post – Cyber Bytes

Global, Apr 20, 2026

By Phil Tobolski, Director, Cybersecurity, Logicalis US

Artificial intelligence (AI) is often positioned as a solution to human error in cybersecurity. The logic is straightforward: automate decisions, remove manual processes, and the risk created by people should naturally decrease. That is the promise.
In practice, the opposite outcome is increasingly visible.

While AI has improved detection, response, and operational efficiency, it has also reshaped the mechanics of cyber-attacks. Modern threats no longer rely on exploiting technology alone. They increasingly exploit trust, context, and human judgment at machine speed and global scale.

For today’s security leaders, the critical question is not whether AI strengthens cyber defence. It is whether that same technology has shifted risk onto a less controllable and more vulnerable surface: people.

From human error to human targets

Historically, people were considered a risk because of mistakes: clicking the wrong link, reusing passwords, or misconfiguring access. Training and controls focused on reducing human error.

AI-enabled threats operate differently. They are not dependent on crude phishing attempts or easily recognisable warning signs. Instead, they are designed to manipulate trust, context, and timing, often blending seamlessly into legitimate business activity. Generative AI enables attackers to produce interactions that are not only convincing but also adaptive, credible, and increasingly indistinguishable from authentic communication.

In these scenarios, people are not bypassing security controls or acting carelessly. They are being methodically targeted because human judgment, under the right conditions, offers a faster and more reliable path to access than technology alone.

AI adoption is outpacing security readiness

This shift is occurring at the same time organisations are accelerating AI adoption. According to the Logicalis Global CIO Report 2026, 94 percent of CIOs say their organisation’s appetite for AI is growing, yet more than half believe adoption is moving too fast.

That tension matters for cybersecurity.

When AI is deployed faster than governance, skills, and security frameworks can mature, gaps inevitably emerge. Sixty-two percent of CIOs admit they have already compromised on AI governance due to limited knowledge, and fewer than half say they fully understand the risks associated with AI adoption.

Those gaps create opportunities not just for innovation, but for attackers.

Why human risk has increased, not decreased

AI has not eliminated human involvement in cybersecurity decisions. It has increased the number of moments where human judgment is required.
Employees are now expected to:

  • Assess whether a message, voice, or video is real
  • Decide when AI output can be trusted
  • Know when to trade speed for verification
  • Navigate which AI tools are approved and which are not

At the same time, attackers are using AI to remove friction from deception. Deepfake impersonation, AI-generated phishing, and adaptive social engineering are designed to trigger action before doubt has time to set in.

The result is a widening mismatch between how quickly threats evolve and how prepared people feel to respond.

Why training alone no longer works

Many organisations still approach human risk primarily through cybersecurity awareness training. While education remains essential, it is no longer sufficient in an AI-driven threat environment.

AI-enabled attacks are engineered to bypass rational decision-making by exploiting authority, urgency, and emotional cues. In those moments, knowledge does not always translate into behaviour.

Reducing human risk now requires a broader approach:

  • Embedding verification into workflows
  • Redesigning approval and escalation processes
  • Making "pause and validate" culturally acceptable
  • Aligning identity, access, and AI governance

In short, organisations must design for deception, not assume it can be trained away.

The real risk for CIOs

The Logicalis report highlights another dimension of human risk. Fifty-seven percent of CIOs say employees are putting data security at risk through how they use AI tools, and 34 percent say AI has introduced new security blind spots.

If CIOs cannot validate the security of the AI tools employees are using, or see where confidential company information is being shared, trust breaks down and risk rises. In that environment, human behaviour becomes a direct blocker to scaling AI and realising business value.

So, has human risk reduced or multiplied?

AI has reduced some operational risk, but it has also made deception faster and more convincing. The opportunity lies in responding to that shift by redesigning security around how people make decisions and take action.

Make the secure path the easy path. Add verification where it matters, such as payments, password resets, and access changes, and clearly define which AI tools and data uses are business approved. Done well, critical thinking becomes a habit rooted in confidence, not a drag on speed.

In an AI first world, cybersecurity is not just about securing technology. It is about protecting entire organisations from bad decisions. Get it right, and AI does not just multiply risk. It multiplies resilience.
