
Interview with Ben Luria, CEO of Hirundo, covering 5 crucial points

Data Innovation Center spoke with Ben Luria, CEO of Hirundo, an Israeli company building model-editing tools to tackle hallucinations, biases, and security loopholes in large language models (LLMs).


In the rapidly evolving world of artificial intelligence (AI), one Israeli company, Hirundo, is making waves with its approach to fixing problems in AI models. Unlike most of the industry, which fixes models by adding to them, Hirundo focuses on redaction, aiming to repair the "mental wiring" inside the model itself.

This groundbreaking strategy is attracting the attention of major enterprises such as Samsung and Airbus, who are leveraging Hirundo's Machine Unlearning platform to reduce risks like hallucinations and privacy violations. By repairing faulty information within the model, Hirundo is making AI systems more reliable, compliant, and aligned with business goals, without the need for retraining.

The core challenge in repairing trained AI models is the cost and time required to selectively remove specific information or behaviours. Hirundo's engine instead employs a "neurosurgical" method to pinpoint and alter specific values in the model's weights and vectors, effectively erasing unwanted information without a full retrain.
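Hirundo has not published the details of its engine, so as a loose illustration of the general machine-unlearning idea only, the toy sketch below trains a small logistic-regression model and then "unlearns" a forget set by gradient ascent on its loss while continuing gradient descent on the data to be retained. All function names, data, and hyperparameters here are hypothetical, chosen for the example.

```python
import numpy as np

def loss_grad(w, X, y):
    # Logistic loss and its gradient for weights w on data (X, y in {0, 1}).
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def train(X, y, steps=500, lr=0.5):
    # Plain gradient-descent training from zero weights.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        _, g = loss_grad(w, X, y)
        w -= lr * g
    return w

def unlearn(w, X_forget, y_forget, X_retain, y_retain, steps=200, lr=0.1):
    # Gradient *ascent* on the forget set pushes the model away from the
    # unwanted data, while descent on the retain set preserves utility.
    w = w.copy()
    for _ in range(steps):
        _, g_f = loss_grad(w, X_forget, y_forget)
        _, g_r = loss_grad(w, X_retain, y_retain)
        w += lr * g_f   # forget: increase loss on the unwanted examples
        w -= lr * g_r   # retain: keep fitting the data we want to keep
    return w
```

After unlearning, the model's loss on the forget set rises (the unwanted association is erased) while predictions on the retain set stay intact, which is the trade-off the article describes as "preserving the model's overall utility".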

This transformation of AI development from a trial-and-error process to a more controlled adjustment makes models safer, more reliable, and easier to align with evolving requirements. Responsible AI and AI Safety teams, as well as frontier AI labs that spend significant time on post-training fixes, benefit from Hirundo's platform, as it shortens iteration cycles and improves outcomes.

Hirundo's Machine Unlearning platform is particularly valuable for teams working on mission-critical, high-risk, or regulated AI systems. The platform can remove personally identifiable information, confidential knowledge, and toxic behaviours, while preserving the model's overall utility.

In addition, the platform is designed to address hallucinations, bias, and security flaws in large language models (LLMs). It has been shown to quickly and effectively erase specific information from LLMs, reducing hallucinations, biases, and vulnerabilities by more than 50 percent in a short processing time.

Unlike earlier unlearning research, Hirundo's work delivers a repeatable, production-ready solution that preserves the model's overall utility. The company's platform is delivering enterprise-grade results, with up to 85 percent fewer jailbreak vulnerabilities, more than 55 percent fewer hallucinations and biased responses, and stronger overall model stability.

As society's expectations for AI continue to grow, Hirundo is laying the foundation for AI that can keep pace, scaling responsibly without sacrificing safety or control. With its Machine Unlearning platform, Hirundo is leading the charge towards a safer, more reliable, and more compliant future for AI.
