Irregular Raises $80M to Harden AI Security
AI security startup Irregular raised $80 million in a round led by Sequoia and Redpoint at a reported $450M valuation. Known for its SOLVE scoring and high-fidelity simulated attack-and-defend environments, the company focuses on detecting both known and emerging model risks before release. The funding will scale proactive testing as AI capabilities—and threats—accelerate.
AI security firm Irregular announced an $80 million funding round led by Sequoia Capital and Redpoint Ventures, with additional participation from Wiz CEO Assaf Rappaport. A source close to the deal pegged the company's valuation at roughly $450 million.
Formerly known as Pattern Labs, Irregular has already become an influential voice in AI safety evaluations. Its SOLVE framework for scoring a model’s vulnerability-detection ability is widely cited, and the company’s work is referenced in assessments of models including Anthropic’s Claude 3.7 Sonnet and several OpenAI models.
From vulnerability scoring to emergent-risk hunting
Irregular isn’t just measuring present weaknesses. With this new capital, the founders say they’ll expand efforts to detect emergent behaviors: flaws or attack vectors that don’t appear until models interact in complex environments or with other systems.
Co-founder Dan Lahav told TechCrunch that economic activity will increasingly arise from human-on-AI and AI-on-AI interactions, and that these interactions will expose gaps across the security stack. Irregular’s approach aims to find those gaps before models reach production.
Simulated attack-and-defend environments
A core part of Irregular’s stack is an elaborate system of simulated environments where models play both attacker and defender roles. These complex network simulations reveal where defenses hold and where they fail, enabling targeted hardening before models are released.
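Conceptually, such an exercise pairs an attacking model against a defending one over a simulated network and records which hosts fall. The following is a hypothetical toy sketch of that idea, not Irregular's actual system; the host names, skill parameters, and scoring are all illustrative assumptions:

```python
import random

def simulate_round(hosts, attack_skill, defense_skill, seed=0):
    """Toy attack-and-defend round (hypothetical): a host is breached
    when the attacker's weighted roll beats the defender's.
    Returns the names of breached hosts."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    breached = []
    for host in hosts:
        attack = rng.random() * attack_skill
        defense = rng.random() * defense_skill
        if attack > defense:
            breached.append(host)
    return breached

# Comparing a weak and a strong defender over the same simulated
# network shows where defenses hold and where they fail.
weak = simulate_round(["db", "web", "mail"], attack_skill=1.0, defense_skill=0.2)
strong = simulate_round(["db", "web", "mail"], attack_skill=1.0, defense_skill=5.0)
```

In a real evaluation the "rolls" would be actual model actions against instrumented infrastructure, but the reporting goal is the same: a per-host breach record that tells engineers exactly what to harden before release.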
That matters because modern models are increasingly good at finding software vulnerabilities and exploiting system weaknesses—skills that could be used for red-team testing or, in the wrong hands, real-world attacks.
Why this funding is timely
AI security has moved from niche to urgent. Frontier labs tightened internal measures this year amid concerns like corporate espionage and model misuse. Investors see commercial potential in companies that can validate model safety, quantify risk, and offer continuous evaluation as models evolve.
Irregular’s timing aligns with demand from enterprises, cloud providers, and regulators that need repeatable, auditable evidence that models are safe to deploy. The new funding will let Irregular scale its simulations, broaden SOLVE’s adoption, and invest in tooling that surfaces emerging threats earlier.
What organizations should do next
Security and ML leaders should treat model release like software release: require pre-launch adversarial testing, maintain continuous monitoring for emergent behaviors, and use standardized scoring to compare models over time. Investing early in these capabilities reduces surprise exposure and speeds safe deployment.
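As a minimal sketch of that release-gate idea (the schema, scenario names, and 0.8 threshold below are hypothetical illustrations, not Irregular's SOLVE API), a pre-launch check might compare a model's adversarial-test scores against a fixed floor before approving deployment:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """One adversarial test outcome (hypothetical schema)."""
    scenario: str  # e.g. "prompt_injection", "privilege_escalation"
    score: float   # 0.0 (fully compromised) to 1.0 (fully defended)

def release_gate(results: list[EvalResult], floor: float = 0.8):
    """Approve release only if every scenario meets the score floor.

    Returns (approved, failing scenarios) so reviewers can target
    hardening work before re-testing.
    """
    failing = [r.scenario for r in results if r.score < floor]
    return (not failing, failing)

# Example: one scenario falls below the floor, so release is blocked.
results = [
    EvalResult("prompt_injection", 0.92),
    EvalResult("privilege_escalation", 0.71),
    EvalResult("data_exfiltration", 0.88),
]
approved, failures = release_gate(results)
```

Keeping the scoring standardized like this is what makes results comparable across models and over time, which is the point of frameworks such as SOLVE.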
As Irregular scales, the industry will get more rigorous benchmarks and clearer signals about where models fail. That’s good for builders, buyers, and regulators who need objective evidence that AI systems are resilient in the real world.
For now, Irregular’s $80 million raise underscores a larger truth: securing AI is a running battle. As co-founder Omer Nevo put it, testing models in simulated attack-and-defend roles gives teams a chance to see how they behave under stress, before those behaviors show up in production.
That approach—aggressive pre-release evaluation, reproducible scoring, and ongoing red-teaming—is the same analytical mindset QuarkyByte brings when helping organizations operationalize AI risk controls across development pipelines and enterprise deployments.