Twice a week, Data Points brings you the latest AI news, tools, models, and research in brief. In today’s edition, you’ll find:
- Apple and Alibaba strike an AI deal
- Building a reasoning model without using chain of thought
- Torque Clustering may enable better, faster autonomous learning
- Remaking BERT without using task-specific heads
But first:
OpenAI cancels standalone o3 model in favor of integrated GPT-5
OpenAI announced it won’t release its o3 AI model as a standalone product, opting instead to integrate o3’s technology into a new unified model called GPT-5. CEO Sam Altman said the move is part of a plan to simplify OpenAI’s product offerings, promising “magic unified intelligence” and unlimited chat access to GPT-5 at a standard intelligence setting. Altman added that GPT-4.5, also known as Orion, would be released within weeks or months. The shift comes as OpenAI faces increasing competition from other AI labs and aims to streamline its product lineup for a simpler user experience. (TechCrunch and X)
Judge rules against AI firm in Thomson Reuters copyright case
A federal judge in Delaware ruled that Ross Intelligence’s copying of Thomson Reuters’ content to build an AI-based legal platform violated U.S. copyright law. In particular, the judge found that Ross Intelligence could not claim a fair use exemption because it was building a product to compete directly with Thomson Reuters’ service. The decision is the first U.S. ruling on fair use, a key defense for tech companies, in litigation over the use of copyrighted material to train AI systems. It could have significant implications for ongoing and future copyright cases against AI companies, potentially influencing how courts interpret fair use claims in AI training. (Reuters)
Alibaba’s AI tech to power iPhones in China
Apple plans to incorporate Alibaba’s AI technology into iPhones sold in China, according to Alibaba chairman Joseph Tsai. The partnership could help Apple revive iPhone sales in China, where the company has struggled against local competitors offering AI-enabled smartphones. The deal marks a significant win for Alibaba in China’s competitive AI market, potentially boosting its position against rivals like Baidu and DeepSeek. (CNBC)
New language model uses recurrent depth to scale reasoning
Researchers at multiple institutions developed a language model architecture that iterates a recurrent block to reason in latent space, allowing test-time computation to be scaled flexibly. Unlike models that scale reasoning by generating more chain-of-thought tokens, this approach requires no specialized training data and can capture kinds of reasoning that aren’t easily verbalized. A 3.5 billion parameter proof-of-concept model trained on 800 billion tokens improved on reasoning benchmarks as computation increased, rivaling larger models. The architecture opens up possibilities for efficient reasoning that can be adjusted dynamically at inference time. (arXiv)
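The paper describes a “prelude” that embeds the input, a recurrent core block iterated on a randomly initialized latent state, and a “coda” that decodes the final state into tokens. Here’s a minimal PyTorch sketch of that shape; the layer choices, dimensions, and the simple `s + e` conditioning are illustrative assumptions, not the paper’s actual implementation:

```python
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Prelude: encode the input tokens once.
        self.prelude = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Core block: iterated repeatedly in latent space at test time.
        self.core = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Coda: map the final latent state to token logits.
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, num_iterations=4):
        e = self.prelude(self.embed(tokens))
        s = torch.randn_like(e)  # randomly initialized latent state
        for _ in range(num_iterations):
            # Each pass refines the latent state while conditioning on the
            # input embedding; more iterations buy more "thinking" without
            # emitting any extra tokens.
            s = self.core(s + e)
        return self.head(s)

model = RecurrentDepthLM()
tokens = torch.randint(0, 32000, (1, 16))
logits_cheap = model(tokens, num_iterations=2)   # less test-time compute
logits_deep = model(tokens, num_iterations=32)   # more test-time compute
```

The key design point is that `num_iterations` is a free knob at inference: the same weights can spend more computation on harder inputs without producing longer outputs.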
Unsupervised learning clustering algorithm inspired by physics
Researchers at the University of Technology Sydney developed Torque Clustering, a novel unsupervised learning algorithm that achieved a 97.7 percent average adjusted mutual information score across 1,000 datasets, beating other state-of-the-art unsupervised methods by more than 10 percentage points. Inspired by the gravitational interactions of merging galaxies, the algorithm uses the physical concept of torque to identify clusters autonomously and adapt to diverse data types without requiring hyperparameters. The work could significantly impact artificial intelligence development, particularly in robotics and autonomous systems, by enhancing movement optimization, control, and decision-making. (University of Technology Sydney and IEEE)
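For intuition, here’s a toy Python sketch built around the torque score: the product of two clusters’ masses and their squared distance. The published algorithm is parameter-free, connecting each cluster to its nearest higher-mass neighbor and cutting connections with abnormally large torque; this simplified version instead merges the lowest-torque pair greedily and takes a target cluster count, so treat it as illustration rather than the published method:

```python
import numpy as np

def torque_cluster_sketch(X, n_clusters):
    labels = np.arange(len(X))              # every point starts as its own cluster
    masses = {i: 1 for i in range(len(X))}  # cluster mass = number of points
    centers = {i: X[i].astype(float) for i in range(len(X))}
    while len(masses) > n_clusters:
        ids = list(masses)
        best_pair, best_torque = None, np.inf
        # Merge the pair with the smallest torque (mass_a * mass_b * dist^2):
        # light, nearby clusters coalesce first, loosely like gravitational
        # accretion of small galaxies.
        for a_pos, a in enumerate(ids):
            for b in ids[a_pos + 1:]:
                d2 = np.sum((centers[a] - centers[b]) ** 2)
                torque = masses[a] * masses[b] * d2
                if torque < best_torque:
                    best_pair, best_torque = (a, b), torque
        a, b = best_pair
        # Absorb cluster b into cluster a, updating the mass-weighted center.
        centers[a] = (masses[a] * centers[a] + masses[b] * centers[b]) / (masses[a] + masses[b])
        masses[a] += masses[b]
        labels[labels == b] = a
        del masses[b], centers[b]
    return labels

# Two well-separated blobs should end up in two clusters.
X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 8])
print(np.unique(torque_cluster_sketch(X, n_clusters=2)))
```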
Encoder model performs well using masked head for classification
Researchers at Answer.AI introduced ModernBERT-Large-Instruct, a 0.4 billion-parameter encoder model that uses its masked language modeling head for generative classification. The model outperforms similarly sized large language models on MMLU and achieves 93 percent of Llama3-1B’s MMLU performance with 60 percent fewer parameters. This approach demonstrates the potential of using generative masked language modeling heads over traditional task-specific heads for downstream tasks, suggesting further exploration in this area is warranted. (arXiv)
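In this setup, classification is posed as fill-in-the-blank: the prompt ends in a mask token, and the model’s masked language modeling head scores candidate answer tokens at that position. A minimal sketch using Hugging Face Transformers follows; the Hub model ID and the prompt template are assumptions rather than the authors’ exact format:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-Large-Instruct"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Multiple-choice question rewritten as a cloze task.
prompt = (
    "Question: What gas do plants absorb for photosynthesis?\n"
    "A: Oxygen\nB: Carbon dioxide\nC: Nitrogen\n"
    "Answer: [MASK]"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Compare the MLM head's logits for the candidate answer letters
# at the mask position; no task-specific classification head needed.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
choices = ["A", "B", "C"]
choice_ids = [tokenizer.convert_tokens_to_ids(c) for c in choices]
pred = choices[int(logits[0, mask_pos, choice_ids].argmax())]
print(pred)  # expected: "B"
```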
Still want to know more about what matters in AI right now?
Read this week’s issue of The Batch for in-depth analysis of news and research.
This week, Andrew Ng advocated for shifting the conversation from “AI safety” to “responsible AI” at the Artificial Intelligence Action Summit in Paris, emphasizing the importance of focusing on AI opportunities rather than hypothetical risks.
“AI, a general-purpose technology with numerous applications, is neither safe nor unsafe. How someone chooses to use it determines whether it is harmful or beneficial.”
Read Andrew’s full letter here.
Other top AI news and research stories we covered in depth: OpenAI’s Deep Research agent generates detailed reports by analyzing web sources; Google revised its AI principles, lifting a self-imposed ban on weapons and surveillance applications; Alibaba debuted Qwen2.5-VL, a powerful family of open vision-language models; and researchers demonstrated how tree search enhances AI agents’ ability to browse the web and complete tasks.