Dear friends,
Last week, I attended the NeurIPS conference in New Orleans. It was fun to catch up with old friends, make new ones, and also get a broad view of current AI research. Work by the big tech companies tends to get all the media coverage, and NeurIPS was a convenient place to survey the large volume of equally high-quality work by universities and small companies that just don’t have a comparable marketing budget!
AI research has become so broad that I struggle to summarize everything I saw in a few sentences. There were numerous papers on generative AI, including large language models, large multimodal models, diffusion models, enabling LLMs to use tools (function calls), and building 3D avatars. There was also plenty of work on data-centric AI, differential privacy, kernels, federated learning, reinforcement learning, and many other areas.
One topic I’m following closely is autonomous agents: software, usually based on LLMs, that can take a high-level direction (say, to carry out competitive research for a company), autonomously decide on a complex sequence of actions, and execute it to deliver the outcome. Such agents have been very hard to control and debug, so despite amazing-looking demos, there have been few practical deployments. But now I see them on the cusp of working well enough to make it into many more applications, and I increasingly play with them in my spare time. I look forward to getting through my reading list of autonomous agent research papers over the coming holiday!
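If you’re curious what such an agent looks like under the hood, here is a minimal sketch of the loop described above: ask an LLM to decide the next action, execute it with a tool, and feed the result back into the context. Everything here is illustrative; call_llm stands in for whatever LLM API you use, and the tools are toy stubs rather than real capabilities.

```python
# Minimal sketch of an autonomous agent loop: the LLM picks the next
# action, a tool executes it, and the result is fed back into context.
# All names here (call_llm, TOOLS, run_agent) are hypothetical stand-ins.
from typing import Callable, Dict, List

# Toy tool registry; a real agent might register web search, code
# execution, file I/O, and so on.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda query: f"(stub) top results for: {query}",
    "summarize": lambda text: f"(stub) summary of: {text}",
}

def call_llm(goal: str, history: List[str]) -> str:
    """Stand-in for an LLM call that chooses the next action.

    A real implementation would prompt a model with the goal, the
    available tools, and the history so far, then parse its reply.
    This stub just walks through a fixed plan and finishes.
    """
    plan = ["search: competitors of ACME Corp",
            "summarize: the search results",
            "finish"]
    return plan[min(len(history), len(plan) - 1)]

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    history: List[str] = []
    for _ in range(max_steps):            # cap steps so the agent can't loop forever
        action = call_llm(goal, history)  # decide the next step
        if action == "finish":
            break
        tool, _, arg = action.partition(": ")
        result = TOOLS[tool](arg)         # execute the step with a tool
        history.append(f"{action} -> {result}")
    return history

for step in run_agent("carry out competitive research for ACME Corp"):
    print(step)
```

The hard part in practice isn’t this loop; it’s making the model’s choice of actions reliable enough to control and debug, which is why deployments have lagged behind the demos.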
At NeurIPS, many people I spoke with expressed anxiety about the pace of AI development: how to keep up, and how to publish when what you’re working on could be scooped (that is, independently published ahead of you) at any moment. While racing to publish first has a long history in science, there are other ways to do great work. The media, and social media especially, tend to focus on what happened today, which makes everything seem artificially urgent. In contrast, many conversations I had at NeurIPS were about where AI might go in months or even years.
I like to work quickly, but I find problem solving most satisfying when I’ve developed an idea I believe in, especially one that few others see or believe in, and then spent a long time executing it in hopes of proving out the vision. I find technical work more fulfilling when I have time to think deeply, form my own conclusions, and perhaps even hold an unpopular opinion for a long time as I work to prove it. There’s a lot of value in doing fast, short-term work, but given the large size of our community, it’s important to have many of us doing long-term projects, too.
So, this holiday season, when the pace of big announcements might slow down for a couple of weeks, I hope you’ll take a break. Spend time with friends and loved ones, let thoughts simmer in the back of your mind, and remind yourself of holiday values like charity and renewal. And if you’re looking for ideas, maybe even ones that will keep you productively busy for months or years, injecting more inputs by taking courses or reading blogs and papers is a good way to find them.
It has been a great year for AI, with lots of progress and excitement. I’m grateful to have gotten through this crazy year with you.
Happy holidays!
Andrew