Stop Cosplaying Cybersecurity and Start Fixing the Real Problems

Peter Cummings
14 min read

If I see one more security leader proudly present a 200-slide deck from a very reputable firm with an impressive logo, a deck that cost more than their entire security team's annual salaries, I might actually lose it.

You know the presentation I'm talking about. The one with the maturity models. The capability heat maps. The three-year roadmap with swimlanes that look like they were designed by someone who's never actually logged into a Linux server. The one that makes the board nod approvingly while your actual infrastructure is held together with duct tape, forgotten sudo rules, and a prayer that nobody discovers the service account from 2019 that still has NOPASSWD access to production. 

Welcome to cybersecurity cosplay: where looking secure matters more than being secure, where presentations trump protection, and where the biggest risk to your organization isn't the threat actor — it's the fact that you have no idea what's actually happening in your environment. 

And here's the part that should terrify you: you're out of time to fix it. 

The Race You've Already Lost 

Let me paint you a picture of your new reality. 

Right now, as you're reading this, AI agents outnumber human identities in your enterprise by 82 to 1. They're operating at machine speed, making thousands of decisions per minute, with permissions you probably can't even enumerate. And unlike human attackers who need sleep, make mistakes, and operate sequentially, these agents work 24/7 across hundreds of targets simultaneously. 

Chinese state actors just demonstrated what this looks like in practice: they used Claude AI to autonomously execute 80-90% of a cyber espionage campaign. Reconnaissance, vulnerability discovery, lateral movement, credential harvesting, data exfiltration — all performed by AI agents operating at "physically impossible request rates" with minimal human oversight. 

Read that again. Eighty to ninety percent automated. The human operators only intervened for authorization at critical escalation points. 

So when you tell yourself you'll "get around to" implementing identity visibility "next quarter," ask yourself this: How long do you think you have once an AI agent discovers that forgotten sudo rule you don't even know exists? 

The answer used to be "maybe days, if we're lucky." Now it's minutes. Maybe seconds. 

Your quarterly access review won't catch it. Your annual compliance audit won't flag it. And your expensive SIEM dashboard will just show "authorized sudo usage" from "valid credentials" while an autonomous agent escalates from that orphaned jenkins_deploy account to full domain compromise faster than you can finish reading this sentence. 

From Slideware to Security 

Let’s talk about the most carefully managed risk in many enterprises: career risk. 

When something goes wrong, nobody wants to be the person who says, “We made a judgment call and did the hard work ourselves.” It’s much safer to say, “We followed industry best practice, here is the 180‑page strategy pack from a very reputable firm with an impressive logo.” 

You know the drill: beautifully branded decks, rainbow heat maps, three‑year roadmaps with more swimlanes than an Olympic pool. Everyone nods. The board feels reassured. The regulators see the right buzzwords. And your actual sudoers files, service accounts and non‑human identities remain exactly as mysterious as they were the day before. 

This isn’t about naming and shaming any particular consultancy. In fact, many of them employ brilliant people who genuinely want to fix things. The problem is structural: incentives are aligned around producing presentations, not reducing blast radius. It’s much easier to model your future operating state than to do the unglamorous work of finding the ancient service account that can still sudo to root on a production box. 

So we end up in a strange kind of theatre. Everyone plays their part perfectly, the documentation is immaculate, the steering committee minutes are pristine — and the one forgotten NOPASSWD rule is still sitting there, patiently waiting for the first curious human or AI agent to notice it. 

The point is not “don’t work with consultants.” The point is: if your security strategy consists mainly of slideware and status reports, you’re optimising for plausible deniability, not resilience. Partner with whoever you like — just make sure someone is actually looking under the floorboards instead of only updating the floorplan.

The Problem You're Not Solving (And Why It Matters More Than Ever) 

Here's what actually happens in most organizations: 

Your security team can tell you when someone logs in. They can show you authentication events. They might even have a SIEM dashboard that lights up like a Christmas tree with alerts nobody has time to investigate. But ask them a simple question — "Who can shut down our production database right now?" — and suddenly it's crickets. 

They don't know. And they won't know until they manually SSH into a dozen servers, grep through sudoers files, cross-reference with Active Directory groups, check cloud IAM policies, and then compile everything into a spreadsheet that'll be outdated before the next standup meeting. 
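
That manual audit can at least be scripted. Here's a minimal sketch in Python, assuming you've already collected raw sudoers text from each host (the regex handles only the simplest rule form; real sudoers files have aliases, includes, and quoting that a production tool would need to handle):

```python
import re

# Very simplified sudoers grammar: "user host = (runas) [NOPASSWD:] commands"
# Real sudoers files also have User_Alias, Cmnd_Alias, #include, etc.
RULE = re.compile(
    r"^(?P<who>\S+)\s+(?P<host>\S+)\s*=\s*"
    r"(?:\((?P<runas>[^)]*)\)\s*)?(?P<nopasswd>NOPASSWD:\s*)?(?P<cmds>.+)$"
)

def parse_sudoers(text):
    """Return a list of dicts describing who can run what, as what user,
    and whether a password is required. Comments and Defaults are skipped."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("Defaults"):
            continue
        m = RULE.match(line)
        if m:
            rules.append({
                "who": m.group("who"),
                "runas": (m.group("runas") or "root").strip(),
                "nopasswd": bool(m.group("nopasswd")),
                "commands": [c.strip() for c in m.group("cmds").split(",")],
            })
    return rules

sample = """
# added during the 2019 incident -- "temporary"
jenkins_deploy ALL=(ALL) NOPASSWD: /usr/bin/find
alice ALL=(root) /usr/sbin/service nginx restart
"""
for r in parse_sudoers(sample):
    if r["nopasswd"]:
        print(f"NOPASSWD: {r['who']} -> {r['commands']}")
```

Even this crude pass turns "SSH into a dozen servers and grep" into one job you can run hourly; the hard parts it skips (alias expansion, nested groups) are exactly where drift hides.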

Meanwhile, an AI agent with compromised credentials is running sudo -l on your jenkins_deploy account, discovering it has permission to run /usr/bin/find, escalating to root, harvesting SSH keys valid on 89 production servers, and installing persistence mechanisms across your entire estate. 

Elapsed time? About three minutes. 
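
That find-to-root step isn't exotic, either: find is one of many binaries with a well-documented sudo shell escape (`sudo find . -exec /bin/sh \; -quit` drops you into a root shell). A tiny checker can flag rules like this; the binary list below is a hand-picked illustrative subset, not an authoritative one:

```python
# Illustrative subset of binaries with well-known sudo shell escapes
# (GTFOBins-style). A real tool would consume a maintained list.
SHELL_ESCAPABLE = {"/usr/bin/find", "/usr/bin/vim", "/usr/bin/less",
                   "/usr/bin/awk", "/usr/bin/python3"}

def escalation_risk(sudo_commands):
    """Given the commands a sudo rule permits, return those that are
    effectively a root shell. 'ALL' trivially qualifies."""
    risky = []
    for cmd in sudo_commands:
        binary = cmd.split()[0]
        if cmd == "ALL" or binary in SHELL_ESCAPABLE:
            risky.append(cmd)
    return risky

# The jenkins_deploy rule from above: find alone is root-equivalent.
print(escalation_risk(["/usr/bin/find"]))  # -> ['/usr/bin/find']
```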

You didn't get hacked. You got logged in. At machine speed. 

And your quarterly access review? The one that costs $500/hour in consulting fees? It didn't catch it because the review happened three months ago, the drift happened last week, and the AI agent exploited it this morning while your security team was still in their standup discussing last quarter's roadmap. 

Flying Blind Into the AI Storm 

The uncomfortable truth is this: most organizations have poured millions into identity platforms, SIEM systems, PAM solutions, and compliance tools — all ostensibly for "visibility" — yet they still can't answer the most basic security question: Who can do what, right now? 

Your IAM system knows who's in the group. Your PAM can rotate the password. Your ITDR can detect suspicious usage patterns. But none of these tools can tell you what the entitlement actually does. Does PRD_4682_SYSOPS grant read access to logs or the ability to drop the entire production database? Nobody knows. And when nobody knows, access reviews become rubber stamps, separation of duties policies remain theoretical, and Zero Trust initiatives stall at the PowerPoint stage. 

This is the visibility gap. And it was dangerous when you only had to worry about human attackers. 

Now? With AI agents that can autonomously conduct reconnaissance, discover vulnerabilities, harvest credentials, and move laterally across your entire infrastructure without human intervention? It's not just dangerous — it's catastrophic. 

89% of organizations have already incorporated AI agents into their identity infrastructure. 58% estimate that within a year, half or more of their cyberattacks will be driven by agentic AI. And most CISOs admit they don't have full visibility into their non-human identities. 

Let me translate that: You're about to face the most sophisticated, fastest, most relentless adversaries in history, and you can't even see your own attack surface. 

The IVIP Wake-Up Call (Before It's Too Late) 

There's a reason analysts are calling out Identity Visibility & Intelligence Platforms (IVIP) as the missing layer in enterprise security. And yes, I know — another acronym, another buzzword, another thing for vendors to slap on their marketing materials. 

But underneath the hype is a real question: have we really been doing identity security without visibility and intelligence this whole time? 

The answer, unfortunately, is yes. 

IVIP represents the missing layer between policy and reality. It's the difference between knowing who has access and understanding what that access actually means. It's the bridge between your IGA's org chart and the actual privileges scattered across your Linux estate, your cloud tenants, and your SaaS sprawl. 

Think of it this way: you wouldn't fly a plane without instruments. But somehow, we've been running enterprise security for decades with the equivalent of a blindfold and a GPS that only updates quarterly. We've had authentication logging (we know who got on the plane), we've had provisioning workflows (we know who we gave boarding passes to), but we've had exactly zero real-time visibility into who can actually fly the damn thing. 

And when I say "visibility," I don't mean another dashboard. I mean continuous, identity-aware, contextual understanding of privileges as they actually exist — not as your ticketing system says they should exist. 

This was the missing piece when you only had to worry about human threat actors who work business hours and make mistakes. 

Now, with AI agents that operate 24/7, never make the same mistake twice, and can make thousands of harmful decisions per minute? Visibility isn't just important. It's survival. 

The Blast Radius You're Not Calculating 

Let me tell you what keeps me up at night. 

One compromised service account used to mean one server at risk. Maybe a handful if the attacker was skilled and had time. You'd have days, maybe weeks to detect and respond. 

One AI agent with that service account's credentials means every system that agent can touch — potentially your entire cloud estate — gets compromised before your first alert fires. 

Think about that. 

Every database. Every storage bucket. Every API endpoint. Every connected system. All accessible at machine speed, with surgical precision, by an adversary that can: 

  • Make thousands of decisions per minute 
  • Operate across hundreds of targets simultaneously 
  • Parse results, flag valuable data, and group findings by intelligence value automatically 
  • Generate tailored attack payloads in real-time 
  • Never sleep, never tire, never make the same mistake twice 

This isn't science fiction. Autonomous AI attacks have already been documented in the wild: state-sponsored actors using AI agents to independently query databases and systems, harvest credentials, move laterally, and exfiltrate data at rates that are "physically impossible" for humans. 

And here's the nightmare scenario: when this happens in your environment, every system log will show "valid user," every authentication event will look "normal," and every privilege escalation will appear "authorized" because technically, it is. 

You gave jenkins_deploy that sudo permission three years ago during an incident. You just forgot about it. Your IAM system never tracked it. Your quarterly review never caught it. 

But the AI agent found it in 1.2 seconds. 

Welcome to the new reality: you're not hacked, you're logged in — at machine speed, with a blast radius you can't even calculate. 

Why We Keep Making the Same Mistakes (And Why We're Out of Time to Fix Them) 

Here's the cycle: 

  1. Something bad happens (breach, audit finding, compliance violation) 
  2. Leadership panics and demands a solution 
  3. Security team rushes to evaluate tools based on vendor pitches 
  4. Organization selects the "safe" choice (usually the most expensive) 
  5. Implementation takes 18 months and costs triple the estimate 
  6. Tool goes live, solves part of the problem, creates new gaps 
  7. Security team moves on to the next fire 
  8. Drift happens. Configuration degrades. Exceptions accumulate. 
  9. Nobody notices because nobody's actually looking 
  10. Go to step 1 

We're solving problems we can't see with tools we don't understand, guided by consultants who've never actually run a security program. Then we wonder why 80% of organizations believe their cybersecurity investments are failing. 

The reason is embarrassingly simple: we're automating the mess without cleaning it up first. 

You can't implement Zero Trust when you don't know who has access to what. You can't enforce least privilege when you can't define what "least" means in your environment. You can't detect anomalies when you have no baseline for normal. 

And now, with AI agents proliferating at 82:1 versus humans, you can't afford to wait 18 months for your next IGA implementation to maybe sort of partially address the problem. 

The window for "we'll get to it next quarter" just closed. 

2026 is what security analysts are calling the "tipping point" where AI agents fundamentally reshape cyber risk. The shift from passive AI assistants to active autonomous agents is outpacing security programs. Traditional security models are breaking under the strain. 

You're not securing your infrastructure. You're just documenting your ignorance in increasingly expensive ways while the attack surface explodes faster than you can inventory it. 

The Visibility-First Alternative (While You Still Have a Chance) 

So what's the answer? 

Stop buying solutions to problems you can't articulate. Stop letting vendors define your security strategy. And for the love of all that's holy, stop hiring consultants to tell you what you should already know about your own environment. 

Start with visibility. Not the fake kind — not another log aggregator or dashboard that shows you authentication events. Real visibility: 

  • Discovery: What identities exist in your environment? All of them. Human, machine, service accounts, API keys, SSH keys, cloud roles, AI agents, the works. 
  • Context: What can each identity actually do? Not what the policy says. What the sudoers file says. What the IAM role permits. What the nested group membership grants. What the AI agent can access. 
  • Continuous monitoring: What's changing? Who added that exception? When did that service account gain admin rights? Which accounts haven't been used in six months? Which AI agents have accumulated privileges across multiple systems? 
  • Risk assessment: What's the blast radius of each identity? Which accounts can escalate to root? Which roles can access sensitive data? Which misconfigurations create critical paths? Which AI agents can autonomously execute workflows across your infrastructure? 

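The Discovery step can start embarrassingly small: even classifying local accounts from /etc/passwd-format data surfaces service accounts nobody remembers. A rough sketch, with the caveat that the human-vs-service heuristic here is a deliberate oversimplification (a real inventory would correlate with HR records, SSH keys, and cloud IAM):

```python
def inventory_accounts(passwd_text):
    """Split /etc/passwd-format entries into probable humans vs service
    accounts. Crude heuristic: interactive shell + /home/ directory
    suggests a human; everything else is treated as a service account."""
    humans, services = [], []
    for line in passwd_text.strip().splitlines():
        name, _, uid, _, _, home, shell = line.split(":")
        interactive = shell not in ("/usr/sbin/nologin", "/sbin/nologin", "/bin/false")
        if interactive and home.startswith("/home/"):
            humans.append(name)
        else:
            services.append(name)
    return humans, services

sample = """root:x:0:0:root:/root:/bin/bash
alice:x:1001:1001:Alice:/home/alice:/bin/bash
jenkins_deploy:x:998:998::/var/lib/jenkins:/bin/bash
backup_svc:x:997:997::/srv/backup:/usr/sbin/nologin"""

humans, services = inventory_accounts(sample)
print(humans)    # -> ['alice']
print(services)  # note: jenkins_deploy has an interactive shell
```

The interesting findings are usually in the second bucket: service accounts with interactive shells are exactly the jenkins_deploy-style identities worth pulling on first.
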
This is what IVIP is supposed to provide. Whether you call it IVIP, identity observability, or just "doing our damn jobs," the principle is the same: you can't protect what you can't see. 

And here's the uncomfortable part: this work isn't sexy. It's not going to generate impressive board presentations. It won't win you awards at RSA. It's the cybersecurity equivalent of dental hygiene — boring, unglamorous, and absolutely essential. 

You're going to find orphaned accounts from contractors who left three years ago. You'll discover service accounts with privileges nobody can explain. You'll uncover sudo rules added at 2 AM during an incident that were supposed to be temporary. You'll realize your "least privilege" environment is anything but. You'll find AI agents with access to systems you didn't know they touched. 

It's going to be embarrassing. 

But you know what's more embarrassing? Explaining to your CEO how an autonomous AI agent strolled in using valid credentials you didn't know existed, escalated privileges through a sudo rule you forgot about, and exfiltrated your entire customer database before your quarterly access review even started. 

Less is More, More is Less (And Speed Matters Now) 

Once you have visibility, you can start doing what should have been done years ago: cleaning up. 

Every account that can't be justified gets disabled. Every privilege that isn't actively used gets revoked. Every role that overlaps with three others gets consolidated. Every exception that "was supposed to be temporary" gets reviewed and either made permanent with proper justification or removed entirely. Every AI agent gets scoped to least privilege with continuous monitoring. 
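
Part of that cleanup is mechanical once you have the data. For example, flagging accounts with no recent logins as candidates for review; the 180-day threshold and the input shape are illustrative assumptions, not a standard:

```python
from datetime import datetime, timedelta

def stale_accounts(last_login, threshold_days=180, now=None):
    """Return accounts whose last login is older than the threshold.
    last_login maps account name -> datetime of last use (None = never).
    These are candidates for disabling, pending human review."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=threshold_days)
    return sorted(
        name for name, seen in last_login.items()
        if seen is None or seen < cutoff
    )

now = datetime(2026, 1, 15)
seen = {
    "alice": datetime(2026, 1, 10),
    "jenkins_deploy": datetime(2023, 4, 2),  # the incident-era leftover
    "contractor_old": None,                  # never logged in at all
}
print(stale_accounts(seen, now=now))  # -> ['contractor_old', 'jenkins_deploy']
```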

This is the "less is more" principle: fewer identities, fewer privileges, fewer exceptions = smaller attack surface, clearer risk picture, and actually manageable security posture. 

The flip side is equally true: more accounts, more complexity, more tool sprawl, more consultant-driven initiatives = less actual security. Because "more" without understanding just means more places for things to hide. 

And when an AI agent can discover and exploit those hiding places in seconds instead of days, "more" becomes "catastrophic." 

You can't reduce what you can't measure. And you can't measure what you can't see. And you can't compete with adversaries operating at machine speed when you're still compiling spreadsheets quarterly. 

The Call to Arms: It's Now or Never 

It's 2026. We have the technology to do this right. We have frameworks, we have tools, we have analyst guidance. What we lack is the will to do the unglamorous work of actually understanding our environments before we try to secure them. 

And now, we lack time. 

AI agents are already in your infrastructure. They're proliferating faster than your security team can inventory them. And autonomous AI attacks have crossed from theoretical to operational — proven in the wild by state-sponsored actors. 

So here's my challenge to every CISO, security leader, and identity professional reading this: 

Stop cosplaying. 

Stop attending vendor roadshows for tools you don't need. Stop commissioning maturity assessments that tell you what you already know. Stop presenting three-year transformation roadmaps that'll be obsolete before the first milestone and completely irrelevant by the time an AI agent discovers your first forgotten sudo rule. 

Start seeing. 

Implement continuous identity visibility NOW. Map your actual privileges, not your policy privileges. Understand who can do what, right now, in production, not in your IGA tool's database. And do it before AI agents make your current approach completely untenable. 

Start fixing. 

Clean up the mess. Remove the orphans. Revoke the excessive privileges. Eliminate the drift. Scope your AI agents. Do the boring, essential work that actually reduces risk instead of just documenting it. 

Start measuring. 

Track your attack surface in real-time. Monitor for privilege drift continuously, not quarterly. Measure your time-to-detection for configuration changes in minutes, not days. Report on actual risk reduction, not just "security investments made." 
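
Continuous drift monitoring doesn't have to start with a platform purchase. Even hashing snapshots of identity-critical files and diffing them on a schedule beats a quarterly review. A bare-bones sketch (file contents are passed in as strings here; a real agent would read and ship them from each host):

```python
import hashlib

def snapshot(files):
    """Fingerprint identity-critical files (e.g. /etc/sudoers,
    /etc/passwd, authorized_keys) by content hash."""
    return {path: hashlib.sha256(content.encode()).hexdigest()
            for path, content in files.items()}

def drift(old, new):
    """Paths whose content changed, appeared, or disappeared
    between two snapshots."""
    changed = {p for p in set(old) & set(new) if old[p] != new[p]}
    return sorted((set(old) ^ set(new)) | changed)

before = snapshot({"/etc/sudoers":
                   "alice ALL=(root) /usr/sbin/service nginx restart\n"})
after = snapshot({"/etc/sudoers":
                  "alice ALL=(root) /usr/sbin/service nginx restart\n"
                  "jenkins_deploy ALL=(ALL) NOPASSWD: /usr/bin/find\n"})
print(drift(before, after))  # -> ['/etc/sudoers']
```

Run it every few minutes instead of every quarter and the 2 AM "temporary" sudo rule shows up in your next diff, not in your next breach report.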

And most importantly: stop flying blind. 

Because the alternative isn't another breach you could have prevented. It's an autonomous AI attack that completes 80-90% of its operations — reconnaissance, exploitation, lateral movement, credential harvesting, data exfiltration — before you even know it started. 

They won't hack you. They'll log in with the access you forgot you gave them, operate at machine speed, and be gone before your quarterly review flags anything unusual. 

And when you're standing in front of the board explaining how it happened, that expensive maturity model isn't going to save you. But you know what might have? Actually knowing what was in your environment. In real-time. Before the AI agent found it. 

Visibility first. Everything else second. 

The attackers already understand this. They're already using AI agents that operate faster, more comprehensively, and more relentlessly than any human team you can field. 

Maybe it's time we caught up. 

Because in 2026, you're not competing against human adversaries who work 9-to-5 and make mistakes. 

You're competing against AI agents that never sleep, never stop learning, and operate at a velocity your current security model wasn't designed to handle. 

Either you implement continuous identity visibility now, or you explain later why you didn't see it coming when an autonomous agent escalated from a forgotten NOPASSWD rule to full domain compromise in 90 seconds. 

The window just closed. It's now or never. 

Want to stop flying blind? LinuxGuard delivers real-time identity visibility and privilege intelligence for Linux environments — because you can't secure what you can't see. And you can't compete with AI-speed attacks using quarterly reviews. Start with visibility. Everything else follows. 

Peter Cummings — IT Security & AI expert with 20+ years’ experience. Founder of LinuxGuard. Passionate about automation, least privilege, and scalable cloud solutions.