Amazon published a series of web pages designed to help people use AI responsibly.
What's new: Amazon Web Services introduced AI service cards that describe the intended uses and limitations of some models it serves. The move is an important acknowledgment of the need to explain the workings of machine learning models available to the general public.
How it works: The company documented three AI models: Rekognition for face matching, Textract AnalyzeID for extracting text from identity documents, and Transcribe for converting speech to text.
- A section on intended use cases describes applications and the risks that can undermine the model’s performance in each of them. For instance, the card for Rekognition lists identity verification, in which the model matches selfies to photos in government-issued documents, and media applications, in which it matches faces found in photos or videos against a set of known individuals.
- A section on the model’s design explains how it was developed and tested and sets expectations for its performance. It provides information on explainability, privacy, and transparency, and it describes the developers’ efforts to minimize bias. For example, this section for Textract AnalyzeID describes how the developers curated training data so the model could extract text from identity documents issued across a wide range of geographic regions.
- A section on deployment offers best practices that help customers get the best performance from the model. This section for Transcribe suggests that speakers stay close to the microphone and minimize background noise. It also explains how customers can add custom vocabularies to help the model transcribe regional dialects or technical terms (sketched in the first example below).
- Amazon will update each service card in response to community feedback. It also provides resources that help customers who build models using SageMaker create their own cards (sketched in the second example below).
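Custom vocabularies like those mentioned above are managed through the standard AWS SDK. The sketch below uses boto3; the vocabulary name, phrases, job name, and S3 URI are placeholders for illustration, not values from Amazon’s documentation.

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

# Register a custom vocabulary of domain-specific terms (names and phrases are illustrative).
transcribe.create_vocabulary(
    VocabularyName="radiology-terms",   # hypothetical vocabulary name
    LanguageCode="en-US",
    Phrases=["echocardiogram", "stenosis", "tachycardia"],
)

# Once the vocabulary reaches the READY state, reference it in a transcription job
# so the model favors these terms over acoustically similar alternatives.
transcribe.start_transcription_job(
    TranscriptionJobName="clinic-call-001",   # hypothetical job name
    LanguageCode="en-US",
    MediaFormat="wav",
    Media={"MediaFileUri": "s3://example-bucket/clinic-call-001.wav"},  # placeholder URI
    Settings={"VocabularyName": "radiology-terms"},
)
```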
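For customers documenting their own models, SageMaker exposes a model card API. Here is a minimal sketch using boto3; the card name and content fields are illustrative assumptions, and the authoritative set of fields is defined by SageMaker’s model card JSON schema.

```python
import json
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

# Card content is a JSON document; the fields below are an illustrative subset
# of the model card schema, not an exhaustive or authoritative list.
card_content = {
    "model_overview": {
        "model_description": "Binary classifier for loan-application screening.",
    },
    "intended_uses": {
        "purpose_of_model": "Rank applications for manual review; not for automated denial.",
    },
}

sagemaker.create_model_card(
    ModelCardName="loan-screening-card-v1",   # hypothetical card name
    Content=json.dumps(card_content),
    ModelCardStatus="Draft",                  # cards progress from Draft toward Approved
)
```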
Behind the news: In 2018, researchers including Margaret Mitchell and Timnit Gebru, who were employed by Google at the time, introduced the concept of model cards to document a model’s uses, biases, and performance. Google implemented a similar approach internally the following year.
Why it matters: Model cards can help users apply AI responsibly. Hundreds of thousands of people use cloud services that offer AI functions, including prebuilt models. Knowing what a model was intended to do, what its limitations are, and so on can help users deploy it effectively and avoid misuses that could lead to ethical or legal trouble.
We're thinking: We applaud Amazon’s effort to increase transparency around its models. We look forward to service cards for more models and, hopefully, tools that help developers increase the transparency of their own models.