
Hirundo Secures $8M to Revolutionize AI Reliability Through Machine Unlearning

Artificial intelligence is everywhere these days, quietly shaping everything from the way we write emails to critical decisions in healthcare and finance. But as AI becomes more woven into daily life, its downsides—like hallucinations (those plausible-sounding but flat-out wrong responses), persistent biases, and the looming threat of data leaks—have started to feel a lot more personal and risky.

That’s where Hirundo steps in with something genuinely different. The Tel Aviv-based startup just secured $8 million in seed funding to tackle these AI headaches, with backing led by Maverick Ventures Israel and a strong bench of other investors. Instead of endlessly tuning models or filtering bad outputs, Hirundo is pioneering something called “machine unlearning.”

Machine unlearning is what it sounds like—teaching AI to forget things it shouldn’t know or behaviors we don’t want it to repeat. Imagine giving your AI model a targeted amnesia for bad habits, sensitive info, or unwanted bias—after it’s already been trained and deployed. No need to start from scratch or undertake a lengthy retraining process. It’s a little like neurosurgery: the platform identifies the exact parameters inside a model that are triggering problems, then plucks those out with surgical precision, keeping everything else running smoothly.
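Hirundo hasn't published the details of its method, but the general shape of machine unlearning can be sketched in a few lines. The toy below (all names and data are hypothetical, not Hirundo's actual approach) trains a simple logistic-regression model, then runs an "unlearning" pass: gradient *ascent* on a small forget set to push the model away from what it learned there, paired with gradient descent on the retained data so overall behavior is preserved, no full retraining required.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 2-D points with binary labels (a stand-in for real training data).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Pretend the last 20 examples hold data the model must "forget"
# (e.g. sensitive records or mislabeled points).
forget_idx = np.arange(180, 200)
retain_idx = np.arange(0, 180)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the average logistic loss with respect to the weights.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# 1) Standard training on the full dataset.
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# 2) Unlearning pass: ascend the loss on the forget set while descending
#    on the retain set, so the rest of the model keeps working.
for _ in range(100):
    w += 0.1 * grad(w, X[forget_idx], y[forget_idx])   # forget
    w -= 0.1 * grad(w, X[retain_idx], y[retain_idx])   # retain

# Accuracy on retained data should survive the unlearning step.
retain_acc = np.mean((sigmoid(X[retain_idx] @ w) > 0.5) == y[retain_idx])
print(f"retain accuracy after unlearning: {retain_acc:.2f}")
```

Real systems operate on deep networks with far subtler targeting, but the trade-off is the same: erase the unwanted influence while leaving everything else intact.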

This matters most in places where AI mistakes could lead to more than confusion—think legal briefs, healthcare advice, or financial recommendations. Hallucinations in those fields aren't just embarrassing; they can mean lawsuits or shattered trust. Hirundo's approach means organizations can address these risks directly inside the AI, rooting out the causes rather than just patching over the symptoms. Early pilots in industries like banking, healthcare, and even defense are already producing models with more reliable, less risky outputs.

What’s more, Hirundo’s technology is built to scale. It recognizes mislabeled data and weird outliers automatically, traces the roots of odd behaviors, and lets teams clean up AI models—live, and often in just one step. There’s no disruption to current systems and workflows. It works across data types, supports both generative and non-generative models, and can be deployed however security-conscious businesses want: as a SaaS tool, in their own private cloud, or even in locked-down, air-gapped environments that never touch the public internet.

Behind the scenes are founders blending academic muscle and hands-on tech know-how: Ben Luria, Michael Leybovich, and Professor Oded Shmueli. With deep backgrounds in computer science, data security, and large-scale AI, they’re well-positioned to steer the conversation around AI trust and reliability.

It’s not surprising, then, that investors are paying attention. “Hirundo is taking on one of AI’s most urgent challenges—making sure these systems don’t just sound convincing but are founded on truth, not discrimination or dangerous data,” said Maverick Ventures’ Yaron Carni. With the tech world waking up to the fact that AI trust is non-negotiable, Hirundo’s vision for post-training machine unlearning feels not just timely, but necessary.

As AI’s footprint expands into ever more sensitive domains, it’s clear that making AI “forget” its mistakes may be just as important as teaching it new things. Hirundo’s approach points to a future where AI models can be both powerful and dependable—a crucial leap for everyone who relies on the technology.

For more details, see the original story at Unite.AI.
