Lamini’s Targeted Hallucination Reduction May Be a Big Deal for Job Automation

post by sweenesm · 2024-06-18T15:29:38.490Z · LW · GW · 0 comments

Lamini recently posted a paper explaining their “memory tuning” methodology, which uses a Mixture of Memory Experts to significantly reduce LLM hallucinations over a limited domain of knowledge. They describe applying this technique to an open-source Mistral 2 model to achieve 95% accuracy on a text-to-SQL query task for a business, using specific facts about that business’s database schema. They claim a 10X reduction in hallucinations, from 50% to 5%.
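To make the core idea concrete, here is a minimal toy sketch of a Mixture of Memory Experts: a router matches an incoming query against the keys of many small “memory experts,” and the best-matching expert returns its memorized fact verbatim rather than a blended, hallucination-prone guess. Everything here (class names, cosine-similarity routing, the example facts) is an illustrative assumption for intuition, not Lamini’s actual implementation, which tunes expert adapters inside the model itself.

```python
import numpy as np

class MemoryExpert:
    """One expert that memorizes a single fact (toy stand-in for a tuned adapter)."""
    def __init__(self, key_vec, fact):
        self.key = np.asarray(key_vec, dtype=float)  # embedding "address" of the fact
        self.fact = fact                             # memorized answer, recalled exactly

class MoMERouter:
    """Routes a query embedding to the memory expert with the most similar key."""
    def __init__(self):
        self.experts = []

    def add_fact(self, key_vec, fact):
        self.experts.append(MemoryExpert(key_vec, fact))

    def recall(self, query_vec):
        q = np.asarray(query_vec, dtype=float)
        # Cosine similarity between the query and each expert's key.
        sims = [e.key @ q / (np.linalg.norm(e.key) * np.linalg.norm(q))
                for e in self.experts]
        # The winning expert returns its fact verbatim -- no blending across experts.
        return self.experts[int(np.argmax(sims))].fact

# Hypothetical schema facts, loosely echoing the text-to-SQL use case:
router = MoMERouter()
router.add_fact([1.0, 0.0], "orders.customer_id joins customers.id")
router.add_fact([0.0, 1.0], "products table has 50,000 rows")

print(router.recall([0.9, 0.1]))  # → "orders.customer_id joins customers.id"
```

The design point the sketch tries to capture: generalization is left to routing (picking the right expert), while each fact itself is stored losslessly, which is why accuracy on in-domain facts can be pushed so high.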

They also have a high-level write-up that mentions achieving 88% accuracy for product suggestions over a 50,000-product database. If these claims hold up and businesses can use Lamini’s method to tune LLMs for significantly fewer hallucinations in their specific knowledge domains, significant job automation in those domains might not be far behind.
