Machine learning researchers tend to trust international organizations, distrust military forces, and disagree on how much disclosure is necessary when describing new models, a new study found.
What’s new: A survey by Cornell University, the University of Oxford, and the University of Pennsylvania probed accomplished machine learning researchers’ stances on key ethical issues and compared them with those of the U.S. public.
What they found: The study drew on responses from 534 researchers whose work had been accepted by NeurIPS or ICML. The respondents were 89 percent male and came mostly from Europe, Asia, and North America. The findings include:
- Safety: 68 percent of respondents said the AI community should place a higher priority on safety, defined as systems that are “more robust, more trustworthy, and better at behaving in accordance with the operator’s intentions.”
- Openness: The respondents valued openness in basic descriptions of AI research, but only up to a point. While 84 percent believed that new research should include a high-level description of methods and 74 percent said it should include results, only 22 percent believed that published research should include a trained model.
- Trust in militaries: Respondents generally supported the use of AI in military logistics, but one in five strongly opposed AI for military surveillance. A full 58 percent strongly opposed the development of AI-driven weapons, and 31 percent said they would resign if their job required them to work on such projects.
- Trust in corporations: Among top AI companies, respondents deemed OpenAI the most trustworthy, followed by Microsoft, DeepMind, and Google. They showed the least trust in Facebook, Alibaba, and Baidu.
Behind the news: Technologists have been nudging the industry toward safe, open, and ethical technology. For example, the Institute of Electrical and Electronics Engineers (IEEE) introduced standards to help its members protect data privacy and address ethical issues. Sometimes engineers take a more direct approach, as when roughly 3,000 Google employees signed a petition protesting the company’s work for the U.S. military, prompting Google to withdraw from Project Maven, a Defense Department computer vision project.
Why it matters: AI raises a plethora of ethical quandaries, and machine learning engineers are critical stakeholders in addressing them. They should play a big role in understanding the hazards, developing remedies, and pushing institutions to follow ethical guidelines.
We’re thinking: The machine learning researchers surveyed were markedly more concerned than the U.S. public about competition between the U.S. and China, surveillance, technological unemployment, and bias in hiring. These differences suggest an active role for the AI community in navigating the myriad challenges posed by AI.