Auditing is a critical tool for building fair and equitable AI systems. But current auditing methods may not be up to the task.
What’s new: There’s no consensus on how AI should be audited, whether audits should be mandatory, and what to do with their results, according to The Markup, a nonprofit investigative reporting outfit.
What’s happening: Auditing firms are doing brisk business analyzing AI systems to determine whether they’re effective and fair. But such audits are often limited in scope, and they may lend legitimacy to models that haven’t been thoroughly vetted.
- HireVue, a vendor of human resources software, hired an independent firm to audit one of its hiring tools by interviewing stakeholders about possible problems. But the audit stopped short of evaluating the system’s technical design.
- An audit of hiring software made by Pymetrics, which evaluates candidates through simple interactive games, did examine its code and found it largely free of social biases. But the audit didn’t address whether the software highlighted the best applicants for a given job.
- AI vendors are under no obligation to have their systems audited or to make changes if auditors find problems.
Behind the news: In the U.S., members of Congress and the New York City Council have proposed bills that would require companies to audit AI systems. None has yet passed.
Why it matters: AI systems increasingly affect the lives of ordinary people, influencing whether they land a job, get a loan, or go to prison. These systems must be trustworthy — which means the audits that assess them must be trustworthy, too.
We’re thinking: Makers of drugs and medical devices must prove their products are effective and safe. Why not makers of AI, whose systems’ output can dramatically impact people’s lives? The industry should agree on standards and consider making audits mandatory for systems that affect criminal justice, health care allocation, and lending.