Twice a week, Data Points brings you the latest AI news, tools, models, and research in brief. In today’s edition, you’ll find:
- ChatGPT now includes an AI search engine
- Upgrading data centers leaves behind too much trash
- OmniParser works with vision models to read computer screens
- Gallup poll shows most big companies’ workforces haven’t embraced AI
But first:
GitHub data shows Python’s rise and global developer growth
Python surpassed JavaScript as the most-used programming language on GitHub in 2024, while use of Jupyter Notebooks rose sharply. The shift highlights the growing importance of data science and machine learning in software development. GitHub’s data also shows that its developer base continues to grow, particularly in India, Africa, and Latin America. (GitHub)
AI-powered robots tackle laundry, other household tasks in demonstration
Physical Intelligence, a San Francisco startup, unveiled an AI-powered robot that can perform complex household tasks like folding laundry and cleaning tables. The model behind the robot, called π0 (pi-zero), was trained on a large dataset collected from a variety of robots performing domestic chores. Such robots could bring general AI capabilities into the physical world, much as large language models have done for chatbots, but achieving that will require comparable quantities of training data. (Wired)
ChatGPT evolves with web search, challenging traditional search engines
OpenAI upgraded ChatGPT to search the web and summarize results, transforming the chatbot into a more direct competitor to Google. The new feature, powered by Microsoft’s Bing search engine, will initially be available to paying subscribers and includes content from partner publishers like News Corp and Associated Press. This update could reshape how people find information online, potentially altering the landscape for search engines, publishers, and AI-driven content discovery. (OpenAI and The Washington Post)
AI could generate up to 5 million metric tons of e-waste by 2030
A new study published in Nature Computational Science estimates generative AI could contribute between 1.2 and 5 million metric tons of electronic waste by 2030. The primary source of this e-waste is high-performance computing hardware used in data centers, which contains valuable metals and hazardous materials. Researchers suggest strategies like extending equipment lifespan, refurbishing components, and designing for easier recycling could reduce AI-related e-waste by up to 86 percent in a best-case scenario. (Nature and MIT Technology Review)
AI tool helps computers understand and use apps like humans do
Researchers at Microsoft created OmniParser, a tool that helps AI systems better understand what’s on a computer screen, and released it to the public under a Creative Commons license. When paired with advanced vision models like GPT-4V, OmniParser helps the AI more accurately identify clickable buttons and understand what different parts of the screen do. Such parsing models could lead to AI assistants that navigate apps and operating systems much as humans do, broadening the range of computing tasks AI can accomplish and potentially making computers easier for everyone to use. (Microsoft)
Fortune 500 companies are enthusiastic about AI, but most employees haven’t jumped in yet
In a new Gallup poll, 93 percent of Fortune 500 chief human resources officers (CHROs) report that their organizations use AI tools, but only 33 percent of U.S. employees say their organizations have begun integrating AI into their work. Regular use remains limited: 70 percent of employees never use AI, and only 10 percent use it weekly. To improve adoption, Gallup recommends that organizations clearly communicate their integration plans, establish usage guidelines, and provide role-specific training for employees. (Gallup)
Still want to know more about what matters in AI right now?
Read last week’s issue of The Batch for in-depth analysis of news and research.
Last week, Andrew Ng delved into the psychology behind AI fear mongering in a special Halloween edition of The Batch. He examined why some AI experts advocate extreme positions on AI “safety” that are more aligned with science fiction than science.
“Fear mongering attracts a lot of attention and is an inexpensive way to get people talking about you or your company. This makes individuals and companies more visible and apparently more relevant to conversations around AI.”
Read Andrew’s full letter here.
Other top AI news and research stories we covered in our exploration of Halloween fears: AI’s surging power demands raise concerns over energy sustainability, with fears that AI infrastructure could drain the grid; policymakers, driven by dystopian fears, may stifle AI growth by imposing restrictive regulations; AI coding assistants increasingly encroach on software development, sparking debate over the future role of human programmers; benchmark contamination continues to challenge AI evaluation, as large models train on test answers across the web; and researchers warn that training on synthetic data could degrade model performance over time, risking the future of AI.