The latest in AI from Feb. 22 to Feb. 28, 2024

This week's top AI news and research stories featured Google's troubled Gemini launch, OpenAI's next act, Groq's blazing inference speed, and a method for faster network pruning. But first:

Air Canada ordered to uphold chatbot's discount promise
A Canadian tribunal ordered the airline to pay a passenger more than $600 after it failed to honor a bereavement fare discount promised by its chatbot. Jake Moffatt, who was booking a last-minute flight for his grandmother's funeral, was misled by the chatbot into believing he could claim a reduced fare under Air Canada's bereavement policy after purchasing his ticket. The tribunal rejected the airline's defense that the chatbot was a "separate legal entity" responsible for its own misinformation. (Read the news at The Washington Post)

Research: Baby's eye-view footage trains AI
Researchers demonstrated an approach to language learning using just 61 hours of video and audio recorded from a baby's perspective. The study challenges the notion that massive datasets are essential for AI to understand and acquire language. By analyzing the world through the eyes and ears of a toddler, scientists trained a basic AI model to associate images with words, mirroring early human language acquisition. The findings, published in Science, suggest that language learning, both human and machine, might be achievable with far less data than previously believed. (Learn more at Scientific American)
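The model described above learns by pairing what the camera sees with the words heard at the same moment. Below is a minimal sketch of that kind of contrastive image-word association; the architecture, dimensions, and training details are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of contrastive image-word association (illustrative assumptions,
# not the published model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyContrastiveModel(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, image_feat_dim=512):
        super().__init__()
        self.image_proj = nn.Linear(image_feat_dim, embed_dim)  # project frame features
        self.word_embed = nn.Embedding(vocab_size, embed_dim)   # embed word tokens

    def forward(self, image_feats, word_ids):
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.word_embed(word_ids), dim=-1)
        logits = img @ txt.t()                    # frame-word similarity matrix
        targets = torch.arange(len(image_feats))  # matching pairs lie on the diagonal
        # Pull co-occurring frame/word pairs together, push mismatches apart.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

# Toy batch: stand-in frame features and the word heard with each frame.
model = TinyContrastiveModel()
frames = torch.randn(8, 512)           # e.g., features from a vision backbone
words = torch.randint(0, 1000, (8,))   # token IDs of co-occurring words
loss = model(frames, words)
loss.backward()
```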

AI dominates 2024 tech landscape, IEEE study predicts
A global survey by the Institute of Electrical and Electronics Engineers (IEEE) underscores the pivotal role of AI in driving technological progress this year. Conducted among 350 technology leaders worldwide, the study identifies AI, along with its branches like machine learning and natural language processing, as the most critical technology sector. Additionally, the advent of 5G networks and advancements in quantum computing are expected to further fuel AI's growth. (Read more at VentureBeat)

U.S. Justice Department appoints first Chief AI Officer
The officer’s role will involve advising on the integration of AI in investigations and criminal prosecutions, assessing the technology's ethical and effective use, and leading a board to guide the Justice Department on AI-related matters. (Find more details at Reuters)

AI in healthcare innovation outpaces regulation 
The rapid advancement of AI technologies in healthcare has outpaced the ability of regulatory bodies like the U.S. Food and Drug Administration (FDA) to establish and enforce guidelines. The FDA has expressed a desire for more authority to actively monitor AI products over time and to establish more specific safeguards for algorithms. However, obtaining the necessary powers from Congress seems unlikely in the near term, given the legislative body's historical reluctance to expand the FDA's regulatory reach. (Read the full report at Politico)

Reddit enters into a $60 million annual deal with Google
The agreement allows the use of Reddit's vast content for training AI models. With over $800 million in revenue last year, marking a 20% increase from 2022, Reddit aims to capitalize on this AI wave to enhance its market position. Google, meanwhile, gets privileged access to Reddit’s archive of social content. (Learn more at Reuters)

A startup’s pioneering approach to ethical AI porn
MyPeach.ai, founded by Ashley Neale, a former stripper turned tech entrepreneur, leverages AI to simulate romantic and sexual interactions while imposing strict ethical guidelines to prevent abuse. The platform provides an immersive experience and sets boundaries around user interactions, ensuring that virtual engagements promote consent and respect. (Read the story at The Guardian)

Singapore to invest $1 billion in AI development over the next five years
Announced by Deputy Prime Minister and Finance Minister Lawrence Wong, the investment aims at advancing AI compute, talent, and industry development, with a focus on securing access to advanced chips essential for AI deployment. The initiative also includes establishing AI centers of excellence to foster collaboration, innovation, and value creation across the economy. (Read more at The Straits Times)

Adobe announces AI assistant for Reader and Acrobat
The tool formats information and generates summaries and insights from PDF files, emails, reports, and presentations. Adobe plans to expand AI Assistant's capabilities beyond individual PDFs to include insights across multiple documents and intelligent document collaboration. The features are available in beta for Acrobat Standard and Pro Individual and Teams plans on desktop and web in English, and they are planned to reach Adobe Reader desktop customers in the coming weeks at no additional cost. (Read Adobe's blog)

AI-generated biographies flood the market after celebrity deaths 
These books, often filled with factual inaccuracies and grammatical errors, seem to exploit the public's interest in recently deceased figures for quick profit. Tools like GPTZero suggest a high likelihood that these biographies are AI-generated, raising questions about the ethical implications and quality control of such publications. (Get all the details at The New York Times)

Google DeepMind forms dedicated AI safety and alignment division
This initiative comes as Google tackles the challenges posed by generative AI models like Gemini, which have demonstrated a capacity to generate deceptive content. The new organization will encompass existing and new teams dedicated to ensuring the safety of AI systems, including a special focus on developing safeguards for artificial general intelligence (AGI) systems capable of performing any human task. (Learn more at TechCrunch)

Stability AI announces Stable Diffusion 3 (SD3)
Like its predecessors, and unlike proprietary counterparts such as OpenAI's DALL-E, SD3 lets users run and modify the model on a wide range of devices. The model family spans 800 million to 8 billion parameters, and SD3 aims to deliver strong prompt fidelity even on compact devices. Although a public demo is currently unavailable, Stability has opened a waitlist for early access. (Read more at Ars Technica)

ChatGPT suffered a glitch, generated bizarre responses 
Reports of ChatGPT "having a stroke" and "going insane" flooded Reddit last week. OpenAI's statement revealed the problem stemmed from a bug introduced in an optimization update, leading to incorrect word sequence generation due to a misstep in numerical token selection. This incident reignited discussions on the reliability of closed versus open-source AI models, with some advocating for the latter's transparency and fixability. (Learn more at Ars Technica)
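To see why a numerical slip in token selection yields gibberish: language models generate text by converting scores over a vocabulary into probabilities and sampling the next token from them. The sketch below illustrates that general mechanism with a corrupted selection step; it is a simplified illustration using assumed names and values, not OpenAI's code or the actual bug.

```python
# Simplified illustration of next-token sampling and how corrupting the
# selection step scrambles output (not OpenAI's code or the actual bug).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "a", "mat", "synthwave", "deeder"]

def sample_next_token(logits, corrupted=False):
    """Turn model scores into probabilities and pick the next token."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if corrupted:
        # A numerical misstep (e.g., probabilities misaligned with token IDs)
        # means the sampled token no longer matches the intended distribution.
        probs = rng.permutation(probs)
    return vocab[rng.choice(len(vocab), p=probs)]

logits = np.array([2.5, 2.0, 1.8, 1.2, 1.0, 0.8, -3.0, -3.0])
print(" ".join(sample_next_token(logits) for _ in range(6)))                  # mostly sensible words
print(" ".join(sample_next_token(logits, corrupted=True) for _ in range(6)))  # scrambled output
```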

Autonomous racing paves the way for safer driverless vehicles
The emerging field of autonomous auto racing aims to solve complex challenges that autonomous vehicles face in real-world conditions, such as rapid decision-making and precise maneuvering. The University of Virginia's Cavalier Autonomous Racing team showcased its prowess by securing second place at the Indy Autonomous Challenge. Initiatives like the F1tenth Autonomous Racing Grand Prix further demonstrate the potential of autonomous racing as both an educational tool and a platform for global collaboration in refining AI algorithms. (Read the story at The Conversation)
