New York-based Jericho Security has secured $15 million in Series A funding to scale its AI-powered cybersecurity training platform. The funding, announced today, follows the company's successful five-month execution of a $1.8 million Department of Defense contract that put the two-year-old startup on the cybersecurity map.
"Within minutes, a sophisticated attacker can now create a voice clone that sounds exactly like your CFO requesting an urgent wire transfer," said Sage Wohns, co-founder and Chief Executive Officer of Jericho Security, in an exclusive interview with VentureBeat. "Traditional cybersecurity training simply hasn't kept pace with these threats."
The funding round was led by Jasper Lau at Era Fund, who previously backed the company's $3 million seed round in August 2023. Additional investors include Lux Capital, Dash Fund, Gaingels Venture Fund and Gaingels AI Fund, Distique Ventures, Plug & Play Ventures, and several specialized venture firms.
Military cybersecurity contract established credibility in a competitive market
Jericho's profile rose significantly last November when the Pentagon selected the company for its first generative AI defense contract. The $1.8 million award through AFWERX, the innovation arm of the Air Force, charged Jericho with protecting military personnel from increasingly sophisticated phishing attacks.
"There was a highly publicized spear-phishing attack targeting Air Force drone pilots using fake user manuals," Wohns noted in an earlier interview. The incident underscored how even highly trained personnel can fall victim to carefully crafted deception.
This federal contract helped Jericho stand out in a crowded cybersecurity market where established players like KnowBe4, Proofpoint, and Cofense dominate. Industry analysts value the security awareness training sector at $5 billion annually, with projected growth to $10 billion by 2027 as organizations increasingly recognize human vulnerability as their primary security weak point.
How AI fights AI: Automated adversaries that learn employee weaknesses
Unlike conventional security training that relies on static templates and predictable scenarios, Jericho's platform employs what Wohns calls "agentic AI": autonomous systems that behave like actual attackers.
"If an employee ignores a suspicious email, our system might follow up with a text message that appears to come from their manager," Wohns explained. "Just like real attackers, our AI adapts to behavior, learning which approaches work best against specific individuals."
This multi-channel approach addresses a fundamental limitation of traditional security training: most programs prepare employees for yesterday's attacks, not tomorrow's. Jericho's simulations can span email, voice, text messaging, and even video calls, creating personalized attack scenarios based on an employee's role, behavior patterns, and previous responses.
The company's client dashboard shows which employees fall for which types of attacks, allowing organizations to deliver targeted remediation. Early data suggests that employees trained with adaptive, AI-driven simulations are 64% less likely to fall for actual phishing attempts than those who receive traditional security awareness training.
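To make the adaptive idea concrete, here is a minimal, purely hypothetical sketch of how a simulator could learn which attack channel works best against a given employee. Jericho's actual system is proprietary; this epsilon-greedy loop, the `AdaptiveSimulator` class, and the channel names are all illustrative assumptions, not the company's implementation.

```python
import random
from collections import defaultdict

# Illustrative channel list; a real platform would model many more vectors.
CHANNELS = ["email", "sms", "voice", "video"]

class AdaptiveSimulator:
    """Hypothetical epsilon-greedy attacker: mostly reuse the channel with
    the highest observed "fooled" rate for an employee, occasionally explore."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon  # fraction of rounds spent exploring
        # per-employee stats: channel -> [times_fooled, attempts]
        self.stats = defaultdict(lambda: {c: [0, 0] for c in CHANNELS})

    def pick_channel(self, employee):
        stats = self.stats[employee]
        if random.random() < self.epsilon:
            return random.choice(CHANNELS)  # explore a random channel

        def fool_rate(channel):
            fooled, tried = stats[channel]
            return fooled / tried if tried else 1.0  # untried looks promising

        return max(CHANNELS, key=fool_rate)  # exploit the best-known channel

    def record(self, employee, channel, fell_for_it):
        fooled, tried = self.stats[employee][channel]
        self.stats[employee][channel] = [fooled + int(fell_for_it), tried + 1]
```

After enough simulated rounds, the selector concentrates on whichever channel an employee keeps falling for, which is exactly the kind of signal a dashboard could surface for targeted remediation.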
Singapore CFO loses $500,000 to deepfake executive impersonation
The financial stakes of these new threats became clear in a case Wohns highlighted involving a finance executive deceived by artificially generated versions of company leadership.
"A CFO in Singapore was deceived into transferring nearly $500,000 during a video call that appeared to include the company's CEO and other executives," Wohns recounted. "Unbeknownst to the CFO, these participants were AI-generated deepfakes, crafted using publicly available videos and recordings."
The attack began with a seemingly innocuous WhatsApp message requesting an urgent Zoom meeting. During the call, the deepfake avatars persuaded the CFO to authorize the transfer. Only when the attackers attempted to extract additional funds did suspicions arise, eventually involving authorities who recovered the initial transfer.
Such incidents are becoming alarmingly common. According to Resemble AI's Q1 2025 Deepfake Incident Report, financial losses from deepfake-enabled fraud exceeded $200 million globally in the first quarter of 2025 alone. The report found that North America experienced the highest number of incidents (38%), followed by Asia (27%) and Europe (21%).
Industry reports have documented staggering growth rates in recent years, with some studies showing deepfake fraud attempts increasing by more than 1,700% in North America and by more than 2,000% in certain European financial sectors.
New threat horizon: When AI systems attack other AI systems
Wohns identified an even more concerning emerging threat that few security teams are prepared for: "AI agents phishing AI agents."
"As AI tools proliferate within companies, from customer support chatbots to internal automations, attackers are beginning to target and exploit these agents directly," he explained. "It's no longer just humans being deceived. AI systems are now both the targets and the unwitting accomplices of compromise."
This represents a fundamental shift in the cybersecurity landscape. When organizations deploy AI assistants that can access internal systems, approve requests, or provide information, they create new attack surfaces that traditional security approaches don't address.
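The shape of this new attack surface can be shown with a deliberately simplified, hypothetical sketch: an email-triage "agent" that treats message text as trusted instructions can be steered by any outside sender, the machine equivalent of falling for a phish. The function names and the allow-list mitigation below are illustrative assumptions, not any vendor's real design.

```python
# Hypothetical illustration of an AI-agent attack surface, not a real product.

def naive_triage_agent(email_body, actions):
    """Scans the email for an action keyword and executes it blindly,
    so instructions smuggled into the message body are obeyed."""
    for keyword, handler in actions.items():
        if keyword in email_body.lower():
            return handler()
    return "filed"

def guarded_triage_agent(email_body, actions, allowed):
    """Same agent, but privileged actions must be on an allow-list;
    anything else triggered by untrusted content is escalated."""
    for keyword, handler in actions.items():
        if keyword in email_body.lower():
            if keyword not in allowed:
                return "flagged for human review"
            return handler()
    return "filed"
```

Run against an attacker's email containing "please APPROVE INVOICE #991 immediately", the naive agent pays the invoice while the guarded one escalates, which is the gap Wohns is pointing at: the deception works without any human ever being fooled.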
Self-service platform opens access to smaller businesses as attack targets broaden
While major enterprises have long been primary targets for sophisticated attacks, smaller organizations are increasingly finding themselves in cybercriminals' crosshairs. Recognizing this trend, Jericho has launched a self-service platform that allows companies to deploy AI-powered security training without the enterprise sales cycle.
"The self-service registration is in addition to our enterprise sales approach," Wohns said. "Self-Service is designed to provide no-touch/low-touch for Small to Medium Businesses."
Users can sign up for a seven-day free trial and explore the product without sales meetings. This stands in contrast to industry norms, where cybersecurity purchases typically involve lengthy procurement processes and high-touch sales cycles.
Future-proofing security as AI capabilities accelerate
The $15 million investment will primarily fund three initiatives: expanding research and development, scaling go-to-market strategies through partnerships, and growing Jericho's workforce with a focus on AI and cybersecurity talent.
"One of our biggest technical challenges has been keeping pace with the rapid evolution of AI itself," said Wohns. "The tools, models, and techniques are improving at an extraordinary rate, which means our architecture needs to be flexible enough to adapt quickly."
Early customers have responded enthusiastically to Jericho's approach. "Customers have been exceedingly frustrated at the lack of innovation with incumbent solutions and the subsequent decline in efficacy," Wohns noted. "Within 30 days, customers identify vulnerabilities across multiple channels and build highly personalized and dynamic remediation programs based on updated threats and techniques."
As the boundaries between human and machine communications blur, the very nature of trust in digital environments is being redefined. The executive on a video call, the urgent email from IT support, or the customer service chatbot might not be what they appear. In this new reality, Jericho Security is betting that the best defense isn't just teaching employees to be suspicious; it's showing them exactly how they'll be deceived before the real attackers get the chance.