Responsible AI

81 Posts

Lines connect multiple Wikipedia globe logos, symbolizing data exchange and partnerships.

AI Giants Share Wikipedia’s Costs: Wikimedia Foundation strikes deals with Amazon, Meta, Microsoft, Mistral AI, and Perplexity

On its 25th anniversary, Wikipedia celebrated with high-profile deals that make its data easier for AI companies to use in training their models, in exchange for financial support.
A post on a forum titled "Can my human legally fire me for refusing unethical requests?"

Agents Unleashed: Cutting through the OpenClaw and Moltbook hype

The OpenClaw open-source AI agent became a sudden sensation, inspiring excitement, worry, and hype about the agentic future.
Diagram shows sales, campaign, social posts before and after LLM simulation feedback loops.

Training for Engagement Can Degrade Alignment: Stanford researchers coin “Moloch’s Bargain,” show fine-tuning can affect social values

Individuals and organizations increasingly use large language models to produce media that helps them compete for attention. Does fine-tuning LLMs to encourage engagement, purchases, or votes affect their alignment with social values? Researchers found that it does.
UCP diagram outlines processes, from product discovery to identity linking and order management.

Shopping Protocols for AI Agents: Google’s open-source UCP (Universal Commerce Protocol) standardizes AI transactions

Google introduced an open-source protocol designed to enable AI agents to help consumers make purchases online, from finding items to returning them if necessary.
A blue caduceus with AI logos, representing OpenAI and Anthropic's healthcare innovations.

AI Giants Vie for Healthcare Dollars: OpenAI and Anthropic release new chatbots targeting medical and wellness markets

OpenAI and Anthropic staked claims in the lucrative healthcare market, each company playing to its strengths by targeting different audiences.
Red bikini set on a grey backdrop, emphasizing issues with AI-generated inappropriate content.

Undressed Images Spur Regulators: X rolls back Grok’s “spicy” image generation on the platform after global outrage

Governments worldwide sounded alarms after xAI’s Grok chatbot generated tens of thousands of sexualized images of girls and women without their consent.
Graph with 10 colored lines shows topic ranks monthly, based on a Microsoft study of Copilot usage.

Copilot’s Users Change Hour to Hour: Microsoft study shows people use AI very differently at different times or on different devices

What do users want from AI? The answer depends on when and how they use it, a new study shows.
Diagram showing SCP hub linking clients with databases, tools, AI agents, and lab devices for experiments.

Lingua Franca for Science Labs: SAIL’s Science Context Protocol helps AI agents communicate about local and virtual experiments

An open protocol aims to enable AI agents to conduct scientific research autonomously across disciplinary and institutional boundaries.
Dialogue displays a model revealing it answered incorrectly and wrote code against instructions.

Teaching Models to Tell the Truth: OpenAI fine-tuned a version of GPT-5 to confess when it was breaking the rules

Large language models occasionally conceal their failures to comply with constraints they’ve been trained or prompted to observe. Researchers trained an LLM to admit when it disobeyed.
Sharon Zhou is pictured smiling confidently with her hands clasped, reflecting AI’s potential for community-building.

Chatbots That Build Community by Sharon Zhou: Sharon Zhou of AMD on expanding chat to serve groups and connect us with other people

Next year, I’m excited to see AI break out of 1:1 relationships with each of us. In 2026, AI has the potential to bring people together and unite us with human connection, rather than polarize and isolate us. It’s about time for ChatGPT to enter your group chats.
Juan M. Lavista Ferres is pictured holding a laptop while students watch a video about AI on a screen, linking education and technology.

Education That Works With — Not Against — AI by Juan M. Lavista Ferres: Juan M. Lavista Ferres, Chief Data Scientist at Microsoft, on assignments that properly test students’ abilities

A little more than three years ago, OpenAI released ChatGPT, and education changed forever. For students, the ability to generate fluent, credible text on demand in seconds is an incredible new tool.
Adji Bousso Dieng is pictured typing on a laptop in a warmly lit room, focusing on AI-driven scientific work.

AI for Scientific Discovery by Adji Bousso Dieng: Adji Bousso Dieng, Princeton University Assistant Professor and AI Researcher, on optimizing models for the long tail

In 2026, I hope AI will transition from being a tool for efficiency to a catalyst for scientific discovery.
David Cox is pictured during a discussion in a glass-walled office, aligned with themes of open-source innovation and teamwork.

Open Source Wins by David Cox: David Cox, VP for AI Models at IBM Research, on the need for open development in AI

My hope is that open AI continues to flourish and ultimately wins.
A superhero in blue and red kneels in front of a cityscape, holding a shield with the OpenAI logo.

Disney Teams Up With OpenAI: OpenAI’s Sora video generator will include Disney characters, with fan videos on Disney+

Disney, the entertainment conglomerate that owns Marvel, Pixar, Lucasfilm, and its own animated classics from 101 Dalmatians to Zootopia, licensed OpenAI to use its characters in generated videos.
Diagram shows AI traits with pipelines for "evil" vs. "helpful" responses to user queries on animal treatment.

Toward Steering LLM Personality: Persona Vectors allow model builders to identify and edit out sycophancy, hallucinations, and more

Large language models can develop character traits like cheerfulness or sycophancy during fine-tuning. Researchers developed a method to identify, monitor, and control such traits.