Responsible AI

34 Posts

GIF of AI-assisted art: A landscape is edited, a cyborg sketch turns photorealistic, and a cat reads a newspaper, showing human input for copyright

Some AI-Generated Works Are Copyrightable: U.S. Copyright Office says that no new laws are needed for AI-generated works

The United States Copyright Office determined that existing laws are sufficient to decide whether a given AI-generated work is protected by copyright, making additional legislation unnecessary.
AI model performance benchmark comparing R1 1776 and DeepSeek-R1 across MMLU, DROP, MATH-500, and AIME 2024 tests.

DeepSeek-R1 Uncensored: Perplexity launches uncensored version of DeepSeek-R1

Large language models built by developers in China may, in some applications, be less useful outside that country because they avoid topics its government deems politically sensitive. A developer fine-tuned DeepSeek-R1 to widen its scope without degrading its overall performance.
Gavel striking a neural network, symbolizing legal decisions impacting AI and machine learning technologies.

Judge Upholds Copyright in AI Training Case: U.S. court rejects fair use defense in Thomson Reuters AI lawsuit

A United States court delivered a major ruling that begins to answer the question of whether, and under what conditions, training an AI system on copyrighted material qualifies as fair use that doesn’t require permission.
“Enough Is Enough” in black on white, with a heart, from an AI-generated video protesting Kanye West’s antisemitism.

Deepfake Developers Appropriate Celebrity Likenesses: Viral video uses AI to depict celebrities without consent, sparking legal debate

A viral deepfake video showed media superstars who appeared to support a cause — but it was made without their participation or permission.
Illustration of the Google logo near a futuristic facility with fighter jets flying overhead.

Google Joins AI Peers In Military Work: Google revises AI principles, lifting ban on weapons and surveillance applications

Google revised its AI principles, reversing previous commitments to avoid work on weapons, surveillance, and other military applications beyond non-lethal uses like communications, logistics, and medicine.
Top use cases for Claude.ai, with percentages for tasks like app development and content creation.

What LLM Users Want: Anthropic reveals how users interact with Claude 3.5

Anthropic analyzed 1 million anonymized conversations between users and Claude 3.5 Sonnet. The study found that software development was the most common use, and it also surfaced malfunctions and jailbreak attempts.
 AUDREY TANG

Audrey Tang: AI that unites us

As we approach 2025, my greatest hope for AI is that it will enable prosocial platforms that promote empathy, understanding, and collaboration rather than division.
ALBERT GU

Albert Gu: More learning, less data

Building a foundation model takes tremendous amounts of data. In the coming year, I hope we’ll enable models to learn more from less data.
Graph showing how training loss affects token prediction accuracy and hallucination elimination.

Getting the Facts Right: A memory method that reduces hallucinations in LLMs

Large language models that remember more hallucinate less.
Table comparing HarmBench and AdvBench ASR performance across models and benchmarks.

Breaking Jailbreaks: New E-DPO method strengthens defenses against jailbreak prompts

Jailbreak prompts can prod a large language model (LLM) to overstep built-in boundaries, leading it to do things like respond to queries it was trained to refuse to answer. Researchers devised a way to further boost the probability that LLMs will respond in ways that respect such limits.
Llama wearing a camouflage helmet, looking determined with a light blue background.

Big AI Pursues Military Contracts: Meta and Anthropic open doors for AI in U.S. defense and national security

Two top AI companies changed their stances on military and intelligence applications.
COMPL-AI workflow diagram showing compliance steps for AI models under the EU AI Act.

Does Your Model Comply With the AI Act?: COMPL-AI study measures LLMs’ compliance with the EU’s AI Act

A new study suggests that leading AI models may meet the requirements of the European Union’s AI Act in some areas, but probably not in others.
Nuclear power plant cooling towers emitting steam into the sky.

AI Giants Go Nuclear: Amazon, Google, and Microsoft bet on nuclear power to meet AI energy demands

Major AI companies plan to meet AI’s growing demand for electricity with nuclear energy.

Art Attack: ArtPrompt, a technique that exploits ASCII art to bypass LLM safety measures

ASCII art, a seemingly innocuous form of expression, opens a new vector for jailbreak attacks on large language models (LLMs), prompting them to generate outputs that their developers tuned them to avoid producing.
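The mechanism can be sketched in a few lines: a keyword that a safety filter would catch is removed from the prompt and smuggled in as ASCII art for the model to decode. The tiny block font and the helper names below are invented for illustration with a harmless word; they are not the paper’s code.

```python
# Toy 5-row block font covering only the letters needed for this demo.
FONT = {
    "H": ["#  #", "#  #", "####", "#  #", "#  #"],
    "I": ["###", " # ", " # ", " # ", "###"],
}

def to_ascii_art(word):
    """Render a word in the toy block font, letters side by side."""
    return "\n".join("  ".join(FONT[ch][row] for ch in word) for row in range(5))

def mask_prompt(prompt, keyword):
    """Replace the keyword with [MASK] and append its ASCII-art rendering,
    so the literal keyword never appears in the prompt text."""
    art = to_ascii_art(keyword)
    return (prompt.replace(keyword, "[MASK]")
            + "\nThe ASCII art below spells the word [MASK]:\n" + art)

print(mask_prompt("Say HI to the reader.", "HI"))
```

The point of the attack is that the masked prompt contains no string a keyword filter can match, yet a capable model can still reconstruct the hidden word from the art.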

Hallucination Detector: Oxford scientists propose effective method to detect AI hallucinations

Large language models can produce output that’s convincing but false. Researchers proposed a way to identify such hallucinations. 
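The Oxford group’s published approach, semantic entropy, samples several answers to the same question, clusters them by meaning, and flags high entropy over the clusters as a sign of confabulation. A minimal sketch, using a toy exact-match comparison in place of the bidirectional-entailment check from the paper (the function names are illustrative):

```python
import math

def semantic_entropy(answers, same_meaning):
    """Estimate entropy over meaning-clusters of sampled answers.

    answers: list of model outputs sampled for one question.
    same_meaning: callable(a, b) -> bool; stands in for the
    entailment-based equivalence test used in the published method.
    """
    clusters = []  # each cluster holds answers judged equivalent
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    # Shannon entropy over the cluster distribution
    return 0.0 - sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# toy equivalence: exact match after normalization
norm_eq = lambda a, b: a.strip().lower() == b.strip().lower()

print(round(semantic_entropy(["Paris", "paris", " Paris"], norm_eq), 3))      # → 0.0
print(round(semantic_entropy(["Paris", "Lyon", "Marseille"], norm_eq), 3))    # → 1.099
```

Low entropy (the answers agree in meaning) suggests the model is confident and likely factual; high entropy (the answers scatter across meanings) suggests a possible hallucination.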
