Irregular raises $80 million to secure frontier AI models


On Wednesday, AI security firm Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round valued Irregular at $450 million.

“Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction,” co-founder Dan Lahav told TechCrunch, “and that’s going to break the security stack along multiple points.”

Formerly known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company’s work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI’s o3 and o4-mini models. More generally, the company’s framework for scoring a model’s vulnerability-detection ability (dubbed SOLVE) is widely used within the industry.

While Irregular has done significant work on models’ existing risks, the company is fundraising with an eye towards something even more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has constructed an elaborate system of simulated environments, enabling intensive testing of a model before it is released.

“We have complex network simulations where we have AI both taking the role of attacker and defender,” says co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don’t.”

Security has become a point of intense focus for the AI industry as the potential risks posed by frontier models have grown. OpenAI overhauled its internal security measures this summer, with an eye towards potential corporate espionage.

At the same time, AI models are increasingly adept at finding software vulnerabilities — a power with serious implications for both attackers and defenders.

For the Irregular founders, that dual-use capability is the first of many security headaches caused by the growing capabilities of large language models.

“If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models,” Lahav says. “But it’s a moving target, so inherently there’s much, much, much more work to do in the future.”

Published: 2025-09-17 21:52:00
Source: techcrunch.com
