Hirundo is building a “machine unlearning” system designed to surgically remove unwanted data and behaviors from already-trained AI models. Targets include hallucinations (plausible-sounding but incorrect outputs), embedded biases, and adversarial vulnerabilities.
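To make the idea of “unlearning” concrete, here is a minimal sketch of one common baseline from the research literature: gradient ascent on a “forget set” combined with a penalty that preserves behavior on a “retain set”. This is not Hirundo’s actual (proprietary) method; the function name `unlearn_step`, the batches, and the `retain_weight` parameter are hypothetical placeholders for illustration only.

```python
# Illustrative machine-unlearning step (NOT Hirundo's method):
# push the model away from examples it should forget while
# limiting collateral damage on examples it should keep.
import torch
import torch.nn.functional as F

def unlearn_step(model, optimizer, forget_batch, retain_batch, retain_weight=1.0):
    """One update: gradient ascent on the forget set, descent on the retain set."""
    x_forget, y_forget = forget_batch
    x_retain, y_retain = retain_batch

    optimizer.zero_grad()

    # Negated loss => gradient ascent on the data to be forgotten.
    forget_loss = -F.cross_entropy(model(x_forget), y_forget)

    # Standard loss on retained data keeps general capability intact.
    retain_loss = F.cross_entropy(model(x_retain), y_retain)

    (forget_loss + retain_weight * retain_loss).backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

In practice, production systems layer additional machinery on top of such baselines (influence estimation, targeted weight edits, evaluation on held-out forget/retain benchmarks), but the core tension is the same: erase the unwanted behavior without degrading everything else the model has learned.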
Helping AI to forget its mistakes