More Factual LLMs: FactTune, a method to fine-tune LLMs for factual accuracy without human feedback
Large language models sometimes generate false statements. New work makes them more likely to produce factual output.