AI agents are revolutionizing productivity, but they're also creating a massive, growing security crisis for businesses. In this episode of Today in Tech, we chat with Clarence Hinton, Chief Strategy Officer at CyberArk, exploring the alarming findings from a new identity security survey covering over 2,600 cybersecurity leaders across 20 countries.

We discuss:
- Why 94% of organizations are using AI for security, but only 32% have controls in place
- The rising threat of machine identities and identity silos
- The dangers of AI-powered phishing, deepfakes, and voice scams
- Why most companies are underestimating the privilege risk of AI agents
- The explosion of shadow AI and unregulated tools in the enterprise
- Human behavior: still the weakest security link

If you're not thinking about how to secure AI agents now, you're already behind. Don't miss this essential conversation for CISOs, IT leaders, and tech strategists. Watch, like, and subscribe for more tech security insights every week.
Keith Shaw: The rise of AI agents will likely make security even harder for companies. As machine identities with varying levels of access start requesting and grabbing data from company resources, this could potentially create even more problems.
And that's just the tip of the iceberg when it comes to other security threats companies are dealing with in 2025. We're going to take a look at the current security threat landscape on this episode of Today in Tech. Hi, everybody.
Welcome to Today in Tech. I'm Keith Shaw. Joining me in the studio today is Clarence Hinton. He is the Chief Strategy Officer for CyberArk. Welcome to the show, Clarence. Clarence Hinton: Hey Keith, it's a pleasure to be here. Thanks for having me.
Keith: All right, I'm a big fan of CyberArk. We've had them on the show before. I think last year we did a virtual session via Zoom with someone talking about the 2024 Identity Security Landscape Report. This is one of my favorite surveys.
So again, I'm happy to do this every year with you guys if you want. This year, the report surveyed 2,600 security leaders across 20 different countries and asked them all sorts of questions about AI, machine identities, and identity silos.
In addition, this year's report shows a rapid escalation in identity-centric cyber risks, driven by the explosion of machine identities, the rise of AI agents, and fragmented identity systems. Did I get that right? Clarence: You got it perfect.
Keith: One of the big results that blew my mind was this stat: 94% of organizations said they use AI to enhance security, but only 32% have security controls for AI tools. Did that surprise you in terms of the gap?
Or were there other big surprises in this year's survey? Clarence: That was definitely high on the list of things that caught my attention. First, just the near 100% usage rate of AI for security: that was really, really high.
The lack of deployment of security controls wasn't as surprising. But when you put those two together, the gap is astounding. Keith: Were there other parts of the survey that stood out to you before I get into specifics?
Clarence: Yeah, for me, and this is something we've seen before, the definition of "privilege" was even more human-focused this time around than it was last time, even though everyone's acknowledged the proliferation and power of machine identities. That really did catch me by surprise.
Keith: So were you expecting the amount of privilege assigned to human identities to go down? Clarence: I was expecting it to go up, but not as much as it did.
I thought more respondents would acknowledge that machine identities are, in fact, highly privileged users, and classify them that way. But that actually went down. Keith: One of the points from the survey I wanted to touch on was that AI-enabled phishing attacks are also on the rise.
What have you seen in the field? Because we always figured the bad guys would start using generative AI to craft better phishing emails. One of the first ways you could recognize spam was poor grammar and spelling. But now AI can fix all of that.
And it turns out, there's even more happening that makes phishing harder to detect, right? Clarence: Absolutely. It goes beyond email.
Even if you keep it to email: now grammar is cleaned up, and AI can mine publicly available data sources to get detailed information about a company, its people, and their roles. You can make very specific, fine-tuned, targeted messages. Laser phishing, not just spear phishing.
Keith: Laser phishing, that's a new one. Clarence: Right. And it's underestimated. We're also getting close on voice and video. Voice is already there. We've seen instances of that. Keith: The voice stuff blows my mind too. I worry not just about emails but phone calls.
If my kids get a call from what sounds like me, "Hey, this is Dad, I'm in jail. I need $100 to get out," and it's coming from a spoofed number, they might fall for it.
So I have to remind them about our secret code word, just like when they were little. Clarence: That's smart. Keith: But they keep forgetting the code word.
And now I have to say, "If it sounds like me and asks for money, it's not me." Imagine now I'm a CEO. This goes beyond business email compromise; this is business everything compromise.
You can send the email, follow it up with a text, and then a phone call. That's my version of multifactor authentication: three different media. Clarence: Right. And companies will need to develop the equivalent of that: layers of intelligent controls.
If something looks suspicious, trigger different layers of security automatically. Keith: Like just-in-time prompts or validation steps on both sides? Clarence: Exactly. Pop-up alerts, session monitoring, multi-factor. You can't rely on any one method.
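The layered, risk-triggered controls described here can be sketched roughly like this. This is a minimal illustration only: the signal names, weights, and thresholds are assumptions for the example, not any vendor's actual detection logic.

```python
# Hypothetical risk-scoring sketch: signals, weights, and thresholds are
# illustrative assumptions, not a real product's rules.

RISK_SIGNALS = {
    "new_device": 2,         # request comes from an unrecognized device
    "unusual_hour": 1,       # outside the sender's normal activity window
    "payment_request": 3,    # message asks for money or credentials
    "spoofed_caller_id": 4,  # caller ID does not match known records
}

STEP_UP_THRESHOLD = 3  # assumed cutoff for triggering extra controls

def assess(request_signals):
    """Score a request and decide which extra control layers to trigger."""
    score = sum(RISK_SIGNALS.get(s, 0) for s in request_signals)
    layers = []
    if score >= STEP_UP_THRESHOLD:
        layers.append("mfa_challenge")       # e.g. a push-notification prompt
    if score >= STEP_UP_THRESHOLD + 2:
        layers.append("session_monitoring")  # record and review the session
    return {"score": score, "required_layers": layers}

# A spoofed call asking for money trips both extra layers.
print(assess(["payment_request", "spoofed_caller_id"]))
```

The point of the sketch is the escalation pattern: no single check decides anything; suspicious combinations automatically stack additional validation steps.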
Keith: Another go-to defense of mine: "I'm only giving you money if I see you in person." Clarence: That's becoming more relevant. There was a story in Hong Kong where someone used an AI avatar on a Zoom call to impersonate a CEO.
The visuals, the voice: it was all fake. Keith: That's just too much. And it's just the beginning. Clarence: Yes. There are already ways to detect video deepfakes, but it's going to get harder. Keith: I think your report mentioned a scam in Italy. Can you talk about that?
Clarence: Sure. Scammers posed as the Italian Ministry of Defense and targeted high-net-worth individuals, including Giorgio Armani. They claimed to be raising funds to free a journalist. Eventually, they escalated the scam to a phone call supposedly from the Italian defense minister.
That was a voice deepfake, and it worked. Someone lost 4 million euros. Keith: That's amazing, and terrifying. Are companies going to have to ramp up training? Clarence: Definitely. Basic training for phishing and vishing needs to be enhanced.
But there's also a new chapter: attacks enabled by AI. That content needs to be added to training. Keith: Update the PowerPoints. Clarence: Exactly. And at the same time, ramp up the actual defenses, both code and solutions.
Keith: So going back to the machine identities story: this is a big deal. Companies are using more machine-to-machine interactions, including AI agents. But this concept of "privileged access" was a little confusing to me. The survey indicates that machine identities were given elevated access, correct? Clarence: Absolutely.
Keith: But not privileged access? That designation remains primarily human? So it looks like companies are still trusting humans more than machines? Clarence: It's really more of a definitional issue. On one hand, respondents said a higher portion of their machine identities have elevated access than their human counterparts.
But when asked to classify "privileged users," 88% still said "humans." Different types of humans. So there's a disconnect: people aren't quite ready to consider machines as highly privileged users, even though, by their own definition, they are. Keith: That leaves enterprises exposed.
You're not applying the same rigor to securing machine identities as you would for humans. If someone's a domain admin, you know their role and how to secure them. But with a machine identity accessing multiple databases, you don't know how to treat that. Clarence: Exactly.
And now we're building AI agents and assigning them tasks. Are companies giving them easier access because of trust or speed? If I work under you, I don't get the same access you do. But are agents just being given access across the board?
Keith: So are they somewhere in the middle, or are they getting different levels of access like employees? Clarence: Even before agents, machine identities in general are given high levels of privilege.
If you're an application, companies tend to give you broad access because they don't want to break something. That's why things like secrets management are critical: you broker access and treat machines like privileged users. But most companies aren't doing that. The coverage isn't robust.
Keith: So we already have a problem with machine identity management, and now we're layering AI agents on top of it. Clarence: Correct. These agents can behave like applications or databases. And you also layer in the "human-like" aspect. It's a bit of a Wild West scenario.
The platforms building these agents are trying to embed some security controls, but they're often not security experts. We're working with several of them to improve that, whether inside or outside the platforms. Keith: Sounds like the agents are being built with elevated privileges by default. Clarence: Exactly.
Which is why concepts like least privilege, zero standing access, and just-in-time access need to be implemented. Otherwise, the blast radius of a compromised agent is massive.
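A minimal sketch of how least privilege, zero standing access, and just-in-time grants can combine for an agent might look like this. The broker class, agent IDs, and scope names below are hypothetical, invented for illustration, not a specific product's API:

```python
# Illustrative only: a toy just-in-time access broker. Agents hold no standing
# credentials; every grant is narrowly scoped and expires quickly.
import time

class JitAccessBroker:
    def __init__(self, policy):
        self.policy = policy  # maps agent ID -> set of scopes it may ever request
        self.grants = {}      # token -> expiry timestamp

    def request_access(self, agent_id, scope, ttl_seconds=60):
        allowed = self.policy.get(agent_id, set())
        if scope not in allowed:  # least privilege: deny anything not pre-approved
            return None
        token = f"{agent_id}:{scope}:{time.monotonic()}"
        self.grants[token] = time.monotonic() + ttl_seconds
        return token

    def is_valid(self, token):
        expiry = self.grants.get(token)
        return expiry is not None and time.monotonic() < expiry

# This agent may read the sales database for one minute, and nothing else.
broker = JitAccessBroker({"report-agent": {"read:sales_db"}})
token = broker.request_access("report-agent", "read:sales_db")
denied = broker.request_access("report-agent", "write:sales_db")  # returns None
```

The design point is that a compromised token limits the blast radius: it covers one scope, for one agent, for a short window, instead of broad standing access.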
Keith: The growth in the number of agents is going to be a problem too, right? If you're managing access for five people, that's doable. But what if you suddenly have 50, 500, or 5,000 agents? Clarence: That's exactly the issue.
You can't give them all access by default; it creates massive problems. And on top of that, agents can be ephemeral or short-lived, like temp workers. If they're long-standing, you can treat them more like employees. But for temp agents, that's a different scenario. Keith: Like temp vs.
full-time vs. contractor. Are people working on a protocol or best practice for this? Clarence: There's the MCP standard, which helps standardize communication between agents and systems. It doesn't solve the problem outright, but it helps reduce the risk and makes collaboration easier.
Keith: Earlier, you mentioned other concerns about agents. Unlike asking AI a question, you're now telling agents to do things, like the Iron Man/J.A.R.V.I.S. model. What data are these agents picking up? Are they getting intercepted? Clarence: That's a real concern.
If you have an agent with elevated privileges, adversaries will want to compromise it, just like they do with humans. They'll try to issue new orders or change the mandate of the agent, or even let it continue its job while secretly harvesting data. Keith: A double agent scenario.
Clarence: Exactly. They'll use that access to avoid detection, which makes it more dangerous. Keith: Sounds like man-in-the-middle attacks, but worse. Is the agent landscape susceptible to that? Clarence: Definitely. It's the same concept, but much more dynamic and powerful.
Agents can be manipulated like a file in transit, and attackers can read, change, or piggyback onto it. Keith: So the man-in-the-middle problem could re-emerge as "agent-in-the-middle"? Clarence: Yes. The techniques evolve, but the attack vectors are often the same: gain entry, move laterally, escalate access, and extract value.
Keith: Could we see news stories about agents being the attack vector, like "Company X's agent was breached"? Clarence: Absolutely. A basic agent can be compromised and gradually turned into a powerful, dangerous tool if you don't have the right guardrails.
Keith: That really does sound like a double agent scenario. Companies need a way to detect and shut that down. Clarence: That's where session controls and behavioral analysis come in. If something looks out of the norm, you can trigger additional validation or shut it down.
The tools exist; you just have to implement them.
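In very simplified form, that kind of behavioral analysis can be sketched as comparing each session against a per-agent baseline. This is a hedged illustration: the metric (requests per minute) and the z-score cutoff are assumptions chosen for the example, not a production detector.

```python
# Simplified behavioral-baseline sketch: flag activity far outside an agent's
# historical norm. Metric and cutoff are illustrative assumptions.
from statistics import mean, stdev

def is_anomalous(history, current, z_cutoff=3.0):
    """Flag the current activity rate if it falls far outside the agent's
    historical baseline (a simple z-score test)."""
    if len(history) < 5:
        return False  # too little data to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_cutoff

baseline = [10, 12, 11, 9, 10, 11]  # typical requests/minute for this agent
print(is_anomalous(baseline, 11))   # in line with the baseline
print(is_anomalous(baseline, 400))  # sudden spike, e.g. bulk data harvesting
```

An out-of-norm result would then feed the controls discussed above: trigger additional validation, or terminate the session outright.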
Keith: I like the idea of an AI double agent. Maybe that'll show up in the next Tron movie: AI double agents infiltrating the system. How much are platform creators actually thinking about security protocols? Clarence: From the companies we talk to, they take it very seriously.
Everyone in the security world understands the threat potential. The platform vendors are trying to build in some level of security, but they're typically not security vendors. So they're partnering with companies like CyberArk to augment security, both inside and outside their platforms, potentially leveraging standards like MCP.
They're taking it seriously, but that doesn't mean the problem is solved. Keith: It sounds like this needs to happen before agent use scales to millions or billions. Companies looking at agentic platforms really need their security teams involved at the start. Clarence: Yes, those conversations must happen early.
Ask the tough questions up front to ensure the security is there. Too often, security is brought in late: "Oh, hey, we're already doing this… should we check if it's secure?" Keith: I might be exaggerating, but that sounds familiar. Clarence: You're not exaggerating.
It's exactly how the public cloud rollout happened. People started using it before the security teams knew what was going on. And now, the majority of attacks are in the cloud, because that's where the data is.
Keith: So platform vendors have had to step up their security game, and now we're at a similar inflection point with AI agents. Clarence: Right. But with agents, the potential blast radius is far greater, especially when we're talking hundreds, thousands, or even millions of agents.
Keith: Some really scary stuff. Let's shift from agents to another part of the report: shadow AI. For longtime tech folks, shadow IT is a familiar concept: end users adopting unapproved tools to get work done. Now we're seeing the same thing with AI.
So, what's the biggest concern for security teams? Clarence: First and foremost: data leakage. Even if it's a legitimate model, your proprietary data could leak depending on how secure that model is. Some models are porous. If your data goes in, it may be out in the world now.
Keith: And some AI tools might not even be legitimate? Clarence: Correct. There are malicious tools posing as helpful AI engines.
If you're an attacker, you want one of those sitting in an app store, waiting for someone to click "Try it." Keith: So employees are using tools that could be leaking data or acting as an entry point into the enterprise. Clarence: Exactly.
And even legitimate tools can be overused or misused. People start to over-trust AI output. That's another risk. Keith: In your survey, 47% said they can't secure shadow AI use, mainly due to the speed of innovation and internal pressure from users.
We've heard from other guests that employees want AI tools; they're excited to be more productive. Clarence: That's right. We're past the fear stage. Most employees want to use AI to help them work better.
But when companies take too long to approve tools, employees find a way around it.
Keith: So, IT and security teams need to either say "You must wait" or find ways to speed up onboarding and secure those tools, kind of like what we did with SaaS and credit card purchases in the past. Clarence: Yes.
With SaaS, we eventually introduced SSO and MFA to get a baseline of security. We need a similar framework for AI models. Keith: And now, it's not just individual tools; many established software platforms are embedding AI into their existing offerings.
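One way such a baseline framework could work is a simple approval gate that checks every AI tool against a required set of controls before employees may use it. This is a hypothetical sketch: the tool names and the required control set are invented for illustration.

```python
# Hypothetical AI-tool approval gate: names and required controls are
# illustrative, not a real policy.

REQUIRED_CONTROLS = {"sso", "audit_logging", "data_retention_policy"}

TOOL_REGISTRY = {
    "internal-chat-assistant": {"sso", "audit_logging", "data_retention_policy"},
    "random-browser-plugin": {"audit_logging"},
}

def meets_baseline(tool):
    """Return (approved, sorted list of missing controls) for an AI tool."""
    controls = TOOL_REGISTRY.get(tool, set())
    missing = REQUIRED_CONTROLS - controls
    return (not missing, sorted(missing))

print(meets_baseline("internal-chat-assistant"))  # approved
print(meets_baseline("random-browser-plugin"))    # blocked until controls added
```

The value of an explicit gate like this is speed: approving a compliant tool becomes a registry update rather than a drawn-out review, which reduces the incentive for employees to route around security.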
On our other show, DEMO, about 80% of companies say, "Yeah, we've added AI." Clarence: That's another supply chain risk, another third-party vector that must be evaluated. Keith: I don't envy the security folks. There's never a dull moment. Let's talk about another topic: human behavior.
Humans still stink at security. You should've just used that as the report headline. Clarence: That would've worked. Keith: According to the 2024 survey: 60% of people used a personal device to access work-related apps, emails, or systems in the last 12 months.
Do you just shake your head when you see that? Clarence: I go back to one of my favorite customer quotes: "We realized our employees will go to great lengths to give away their credentials." Keith: That's brilliant. Still, the weakest point in security is the human firewall.
Even if the ratio is 82 machine identities to every one human, attackers say, "I'll go after that one human." Clarence: It's often easier than cracking a machine. Keith: 36% reuse passwords for personal and work accounts. 65% admit to bypassing security policies in the name of productivity.
40% habitually download customer data. It's astonishing. Clarence: It really underscores the need to finish the job, especially for human privilege controls. Lock it down. Keith: As you see the survey results and talk to CISOs and security leaders, what's the biggest priority now?
Clarence: Across three categories: humans, machines, and AI.
- Humans: Finish the job. Apply privilege controls consistently. Don't leave anything open.
- Machines: Adversaries are shifting here. Machine identities are exploding in number, orders of magnitude larger than humans, so we must apply robust controls here too.
- AI and agents: This is a footrace. Get shadow AI under control. Protect models. Protect usage. Treat agents like highly privileged humans and machines. Apply least privilege, zero standing access, just-in-time permissions, everything we've got.
Keith: Got it. Can you make a prediction for next year's survey?
Are we at the peak, or will the numbers keep rising? Clarence: It depends whether we break out agents as a separate category. But conceptually, the growth is infinite. You could have millions of agents even in a modest-sized enterprise, especially when you count temporary agents.
This problem will grow in scope and nature. We'll likely see a cat-and-mouse game: security controls improving, adversaries adapting. The conversation will shift more toward machines and AI agents. But humans will continue to cause problems. Keith: Humans are always the problem.
I always ask this when I have a security guest on the show: Are you optimistic or pessimistic? Do you sleep well at night? Clarence: I'm driven. It's a massive challenge, and cybersecurity leaders have a responsibility to take it on. The adversary is always a few steps ahead.
But we try to think like attackers and stay ahead that way. Keith: So, motivated but cautiously optimistic? Clarence: Exactly. We're going to fight the good fight and give them hell. Keith: Love it. Clarence Hinton from CyberArk, thank you so much for being on the show.
Clarence: Thanks for having me. It's been a pleasure. Keith: That's going to do it for this week's episode. Be sure to like the video, subscribe to the channel, and leave a comment below. Join us every week for new episodes of Today in Tech.
I'm Keith Shaw. Thanks for watching!