AI red-teaming tools helped X-Force break into a major tech manufacturer 'in 8 hours'

Hint: It's 'the largest' maker of a key computer component


RSAC An unnamed tech business hired IBM's X-Force penetration-testing team to break into its network to test its security. With the help of AI automation, Big Blue said it was able to do so within hours.

While he can't name names, Chris Thompson, global head of X-Force Red, said this particular customer is "the largest manufacturer of a key computer component in the world." 

Thompson said the senior hacking team scheduled three weeks for the project. "And that's based on going after similar technology companies," he added. "We allocated three resources to it for three weeks."

IBM's red team (along with everyone else in the world) has been building out its AI capabilities. This includes using generative and predictive AI for penetration testing via a platform the team code-named Vivid, which it used in the break-in at the unnamed computer component manufacturer.

"With the automation that we've built out, we managed to hack into that company within eight hours," Thompson told The Register during an interview at the RSA Conference in San Francisco last week.

"Technology's finally caught up to where we need it to be to solve these really big data analysis problems, because that's really what red teaming is," he added. "You have all the data in the world, you have to collect it really quietly, but then you have to go through lines and lines of code and connect the dots."

While AI tools "can never replace dedicated hackers, truly the most skilled people out there, we can take a load off," Thompson said. "There's a lot of fluff out there around AI. But there's also a lot of really interesting things that are happening."

In this particular case, the X-Force crew and its AI tooling found a flaw in the manufacturer's HR portal, exploited it to upload a shell, and then waited to see if they would get caught. They didn't, so they pushed further, escalating their privileges on the host and deploying a rootkit to cover their tracks and avoid detection.

"Then we just sat and waited, mapped up their internal network over time, and eventually got to the design of that key computer component," Thompson said. 

The team is carrying out similar jobs for other huge technology providers, as well as some of the world's biggest banks and defense manufacturers, he noted, adding that ultimately AI helps them "put the dots closer together.

"The attack paths that we needed to leverage were actually there day one, it just took us two weeks to put it together because there's just this fire hose of information and it's really difficult to know what to focus in on," he explained. 

"Now that we have more tools for this offensive data analysis problem, it's just accelerating our work so we can free up our really smart people to solve more interesting challenges instead of just doing that crazy data analysis," Thompson added. 

Crims like offensive AI, too

Of course, criminals and government-backed intruders are also seeing how they can use machine-learning tools to make their jobs more efficient, and Thompson said he believes the pace at which this technology is changing and improving is only going to accelerate from here on out. 

He cited an AI security event held during this year's RSA Conference and attended by officials from US Cyber Command and the NSA. 

"Everyone was in agreement that in two years, the models will be ten-times more powerful that they are today," Thompson said, adding that the discussion during the event centered around "how do we leverage advancements in AI security to better defend us when our adversaries are going to be using that to attack us? It's a scary thought."

Currently, nation-state crews are the ones investing in offensive AI tools, likely because they have deeper pockets than their criminal counterparts.

But, Thompson noted, as more open source projects and research get published, these types of penetration tools will become "more accessible to the average hacker," who may turn around and use them for nefarious purposes without having to make the upfront financial investment.

"On the flip side: There's a positive spin because a lot of vendors want to invest money into proactively using AI to defend themselves and proactively discover, hold and take action on weaknesses," he opined.

"I think you will see a big shift, enterprise portfolio-wide, on proactive vulnerability management and things like that to get ahead of it. It's not all doom and gloom." ®
