Microsoft tightened the reins on both AI developers and customers.
What’s new: The tech titan revised its Responsible AI Standard and restricted access to some AI capabilities accordingly.
Taking responsibility: The update is intended to support six core values:
- Accountability: Developers should assess how a system will affect society, whether it’s a valid solution to the associated problem, and who bears responsibility for the system and its data. Additional scrutiny should be devoted to AI products in socially sensitive areas like finance, education, employment, healthcare, housing, insurance, or social welfare.
- Transparency: Systems should be thoroughly documented. Users should be informed that they are interacting with AI.
- Fairness: Developers should assess how a system performs for different demographic groups and actively work to minimize disparities among them (see the sketch after this list). They should also publish details to warn users of any risks they identify.
- Reliability and Safety: Developers should determine a system’s safe operating range and work to minimize predictable failures. They should also establish procedures for ongoing monitoring and guidelines for withdrawing the system should unforeseen flaws arise.
- Privacy and Security: Systems should comply with the company’s privacy and security policies, ensuring that users are informed when the company collects data from them and that the resulting corpus is protected.
- Inclusiveness: Systems should comply with inclusiveness standards such as accessibility for people with disabilities.
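For a concrete sense of what a per-group fairness assessment can look like, here's a minimal sketch using Fairlearn, an open-source fairness toolkit that originated at Microsoft. The data, group labels, and metric are invented for illustration; the standard itself doesn't prescribe a particular tool or method.

```python
# Minimal per-group evaluation: compute a metric separately for each demographic
# group and inspect the largest gap. Labels and groups below are toy values.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # ground-truth labels (toy data)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]                  # model predictions (toy data)
groups = ["a", "a", "a", "b", "b", "b", "b", "a"]  # hypothetical demographic groups

frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=groups,
)
print(frame.by_group)      # accuracy for each group
print(frame.difference())  # largest accuracy gap between groups
```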
Face off: To comply with its new guidelines, the company limited AI services offered via its Azure cloud platform.
- New customers of the company’s face recognition and text-to-speech services must apply for access.
- The face recognition service no longer provides estimates of age, gender, or emotion based on face portraits. Existing customers will be able to use these capabilities until June 2023.
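For context, these capabilities were exposed through the Face service's detect operation as optional face attributes. The sketch below shows roughly what such a request looked like; the endpoint, key, and image URL are placeholders, and exact parameter names may vary by API version.

```python
# Rough sketch of a Face API "detect" call requesting the retired attributes.
# The endpoint, key, and image URL are placeholders, not real values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

response = requests.post(
    f"{ENDPOINT}/face/v1.0/detect",
    params={"returnFaceAttributes": "age,gender,emotion"},  # capabilities being retired
    headers={
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/portrait.jpg"},
)
response.raise_for_status()
print(response.json())  # previously included per-face age, gender, and emotion estimates
```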
Behind the news: Microsoft published its first Responsible AI Standard in 2019 but concluded that the initial draft was vague. The new version is intended to give developers clearer directions for compliance. To that end, the company also provides nearly 20 tools intended to aid developers in building responsible AI systems. For instance, HAX Workbook helps make AI systems easier to use, InterpretML helps explain model behavior, and Counterfit stress-tests security.
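As a taste of one of those tools, here's a minimal InterpretML sketch that trains a glass-box model and asks it to explain itself. The dataset and split are arbitrary choices for illustration, not part of Microsoft's guidance.

```python
# Train an interpretable glass-box model and generate explanations with InterpretML.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

global_explanation = ebm.explain_global()  # which features drive predictions overall
local_explanation = ebm.explain_local(     # why the model scored these examples as it did
    X_test.iloc[:5], y_test.iloc[:5]
)
```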
Why it matters: Regulation in the United States and elsewhere lags behind rising concern that AI is growing more capable of causing harm even as it becomes enmeshed in everyday life. Microsoft’s latest moves represent a proactive effort to address the issue.
We’re thinking: Hundreds of guidelines have been drafted to govern AI development. The efforts are laudable, but the results are seldom actionable. We applaud Microsoft for working to make its guidelines more concrete, and we’re eager to see how its new standards play out in practice.