This week's top AI news and research stories featured a deepfake scandal that is driving new AI laws, Hugging Face leaderboards that evaluate model performance and safety, research finding that GPT-4 provides biothreat assistance comparable to a web search, and research that tested large language models (LLMs) on theory of mind. But first:
Italian watchdog claims ChatGPT violates privacy regulations
Italy's data protection authority, Garante, found that OpenAI's ChatGPT breaches European Union (EU) data protection rules, continuing an investigation that led to a temporary ban last year. Despite OpenAI's efforts to address privacy concerns, including allowing users to opt out of having their data used to train its algorithms, Garante identified potential violations without specifying details. OpenAI maintains that its practices comply with EU privacy standards; the company has 30 days to present its defense. (Read more at Reuters)
Meta launches open source Code Llama 70B
Built on the foundation of Llama 2, this model was trained on 500 billion tokens of code and features a larger context window for handling complex coding tasks. Meta also released three smaller versions of Code Llama, as well as versions optimized for Python and natural language instruction. Mark Zuckerberg highlighted the significance of coding for the wider field of AI, emphasizing that coding models enhance AI’s ability to process information in other domains more logically. (Get more details at VentureBeat and Meta’s blog)
Yelp introduces over 20 AI-aided features to boost local business discovery and engagement
The platform now offers a suite of features including automated summaries and budgeting tools aimed at assisting businesses in enhancing customer engagement and streamlining their spending. Additionally, it provides market insights, conducts competitive analysis, and offers advice on maximizing advertising efficiency. (Read the news at VentureBeat)
Cisco AI Readiness Index reveals gap between ambition and capability in AI adoption
Despite 97 percent of leaders feeling pressured to deploy AI, 86 percent of companies are unprepared to fully leverage AI's potential. This readiness deficit is attributed to challenges in talent acquisition, knowledge gaps, and insufficient computing resources amid the rapid democratization of generative AI. The report emphasizes the need for an AI-ready culture within organizations and identifies six key pillars for AI readiness: strategy, infrastructure, data, governance, talent, and culture. (Read the full report at Cisco)
Volkswagen Group launches specialized AI Lab
The AI Lab aims to function as a globally networked competence center and incubator, focusing on identifying innovative product ideas and fostering collaborations with tech companies across Europe, China, and North America. The Lab will not directly manufacture production models but will rapidly develop digital prototypes for potential implementation across the company’s brands. (Read Volkswagen’s press release)
AI initiative aims to quicken emergency response times in urban traffic
The project, spearheaded by the C2SMARTER consortium and led by New York University, aims to reduce the New York Fire Department's response times to fire outbreaks and medical emergencies. By analyzing real-time traffic data along with information from fire trucks, ambulances, and the Waze navigation app, researchers plan to create a digital twin of a 30-block area in Harlem. This model will simulate traffic patterns and devise strategies for avoiding delays, potentially revolutionizing how emergency services respond to crises. (Learn more at The New York Times and New York University’s press release)
Research: Amazon introduces tool for virtual product trials in any environment
The "Diffuse to Choose" (DTC) tool enables customers to virtually place products in their personal spaces to see how they fit and look in real-time. This technology leverages diffusion models for a seamless "Virtual Try-All" experience, allowing for the realistic integration of items into any desired setting. DTC's approach overcomes the limitations of traditional image-conditioned diffusion models by retaining high-fidelity details and ensuring accurate semantic manipulations. (Read the news at Maginative)
U.S. targets foreign AI development with new cloud computing security measures
The Biden administration proposed new regulations for U.S. cloud computing firms to scrutinize foreign access to American data centers. The proposed "know your customer" rules would mandate that cloud companies identify and monitor foreign users leveraging U.S. cloud computing resources for AI training, aligning with broader efforts to curb China's access to advanced U.S. technology. (Full story at Reuters)
Google restructures AI ethics team
Google's primary internal AI ethics watchdog, the Responsible Innovation team (RESIN), is undergoing significant restructuring following the departure of its leader, Jen Gennai. RESIN has conducted over 500 project reviews, including a review of the Bard chatbot. (Learn more at Wired)
Common Sense Media joins forces with OpenAI to promote safe AI use
The children's and family online safety advocacy group partnered with OpenAI to promote the safe and beneficial use of AI. The collaboration aims to develop AI guidelines and educational materials, and to curate a selection of family-friendly GPTs available in the GPT Store, adhering to Common Sense's ratings and standards. (Read Common Sense Media’s press release)
Google releases Imagen 2
The update to Google’s image generator is now available in Bard, ImageFX, Search, and Vertex AI. Developed by Google DeepMind, Imagen 2 addresses challenges like rendering realistic human features and minimizing visual artifacts, offering more detailed and semantically aligned images based on user prompts. Images generated with Imagen 2 feature SynthID watermarks, allowing users to identify AI-generated content. (Learn more at Google’s blog)
Mastercard launches model to enhance fraud detection
The model, called Decision Intelligence Pro, was developed in-house by Mastercard's cybersecurity and anti-fraud teams, and is set to improve fraud detection rates by up to 300 percent in certain scenarios, according to the company. Leveraging data from approximately 125 billion annual transactions, the model can discern patterns and predict fraudulent activities by analyzing customer and merchant relationships rather than textual data. (Read the full article at CNBC)
U.S. Federal Communications Commission (FCC) targets AI-generated robocalls with new criminalization measures
Following an incident involving a deceptive AI-generated robocall impersonating Joe Biden, the FCC announced plans to criminalize unsolicited robocalls that use AI to mimic human voices. State attorneys general, empowered by this change, will have greater authority to prosecute AI-facilitated spam activities, as demonstrated by New Hampshire's ongoing investigation into the fake Biden call. (Read more at NBC News)
The Browser Company launches Arc Search
This app, evolving from the company's Arc browser project, allows users to input queries and uses AI to compile comprehensive reports and webpages from across the web, streamlining the search process. For example, users can inquire about recent events, like sports games or celebrity news, and receive detailed summaries instead of traditional search results. Arc Search is powered by models from OpenAI and other undisclosed providers. (Read the news at The Verge)