DeepMind’s latest research at NeurIPS 2022
Advancing best-in-class large models, compute-optimal RL agents, and more transparent, ethical, and fair AI systems
The thirty-sixth International Conference on Neural Information Processing Systems (NeurIPS 2022) is taking place from 28 November – 9 December 2022 as a hybrid event, based in New Orleans, USA.
NeurIPS is the world’s largest conference in artificial intelligence (AI) and machine learning (ML), and we’re proud to support the event as Diamond sponsors, helping foster the exchange of research advances in the AI and ML community.
Teams from across DeepMind are presenting 47 papers, including 35 external collaborations, in virtual panels and poster sessions. Here’s a brief introduction to some of the research we’re presenting:
Best-in-class large models
Large models (LMs) – generative AI systems trained on huge amounts of data – have delivered incredible performance in areas including language, text, audio, and image generation. Part of their success comes down to their sheer scale.
However, in Chinchilla, we created a 70 billion parameter language model that outperforms many larger models, including Gopher. We updated the scaling laws of large models, showing that previously trained models were too large for the amount of training performed. This work has already shaped other models that follow these updated rules, creating leaner, better models, and won an Outstanding Main Track Paper award at the conference.
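The core idea can be sketched numerically. Under the commonly used approximation that training cost is C ≈ 6·N·D FLOPs (for N parameters and D training tokens), and the Chinchilla finding that parameters and data should be scaled roughly in proportion (around 20 tokens per parameter), a compute budget fixes both quantities. The ratio of 20 is a rounded rule of thumb, not the paper's exact fitted coefficients:

```python
def chinchilla_optimal(compute_budget_flops):
    """Rough compute-optimal split of a training budget per the
    Chinchilla scaling laws: cost C ~ 6 * N * D FLOPs, with an
    optimal data-to-parameter ratio of D/N ~ 20 (an approximation,
    not the paper's exact fitted constants)."""
    # C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n_params = (compute_budget_flops / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Plugging in roughly Chinchilla's own budget (~5.9e23 FLOPs)
# recovers its published shape: ~70B parameters, ~1.4T tokens.
params, tokens = chinchilla_optimal(5.9e23)
```

Under this rule, a model like Gopher (280B parameters) would need far more training tokens than it received to be compute-optimal, which is why the smaller, longer-trained Chinchilla outperforms it.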
Building upon Chinchilla and our multimodal models NFNets and Perceiver, we also present Flamingo, a family of few-shot learning visual language models. Handling images, videos, and textual data, Flamingo represents a bridge between vision-only and language-only models. A single Flamingo model sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks.
And yet, scale and architecture aren’t the only factors that matter for the power of transformer-based models. Data properties also play a significant role, which we discuss in a presentation on data properties that promote in-context learning in transformer models.
Optimising reinforcement learning
Reinforcement learning (RL) has shown great promise as an approach to creating generalised AI systems that can address a wide range of complex tasks. It has led to breakthroughs in many domains, from Go to mathematics, and we’re always looking for ways to make RL agents smarter and leaner.
We introduce a new approach that boosts the decision-making abilities of RL agents in a compute-efficient way by drastically expanding the scale of information available for their retrieval.
We’ll also showcase a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments – an RL agent called BYOL-Explore. It achieves superhuman performance while being robust to noise, and is much simpler than prior work.
Algorithmic advances
From compressing data to running simulations for predicting the weather, algorithms are a fundamental part of modern computing. And so, incremental improvements can have an enormous impact when working at scale, helping save energy, time, and money.
We share a radically new and highly scalable method for the automatic configuration of computer networks, based on neural algorithmic reasoning, showing that our highly flexible approach is up to 490 times faster than the current state of the art, while satisfying the majority of the input constraints.
During the same session, we also present a rigorous exploration of the previously theoretical notion of “algorithmic alignment”, highlighting the nuanced relationship between graph neural networks and dynamic programming, and how best to combine them for optimising out-of-distribution performance.
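A standard illustration of this alignment (a textbook example, not the paper's specific construction) is the Bellman-Ford shortest-path algorithm: one relaxation round of the dynamic program has the same shape as a GNN message-passing layer, with each node aggregating (by `min`) messages of the form `dist[u] + w` from its neighbours:

```python
def bellman_ford_step(dist, edges):
    """One relaxation round of Bellman-Ford. Structurally this mirrors a
    GNN layer: every edge (u, v, w) sends the message dist[u] + w to v,
    and each node aggregates incoming messages with min."""
    new = dict(dist)
    for u, v, w in edges:
        new[v] = min(new[v], dist[u] + w)
    return new

# Tiny example graph: s -> a -> b, plus a longer direct edge s -> b.
edges = [("s", "a", 1.0), ("a", "b", 2.0), ("s", "b", 5.0)]
dist = {"s": 0.0, "a": float("inf"), "b": float("inf")}
for _ in range(len(dist) - 1):  # |V| - 1 rounds suffice
    dist = bellman_ford_step(dist, edges)
# dist is now {"s": 0.0, "a": 1.0, "b": 3.0}
```

The intuition behind algorithmic alignment is that when a network's computation decomposes the same way as the target algorithm, it can learn that algorithm from fewer samples and generalise better out of distribution.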
Pioneering responsibly
At the heart of DeepMind’s mission is our commitment to act as responsible pioneers in the field of AI. We’re committed to developing AI systems that are transparent, ethical, and fair.
Explaining and understanding the behaviour of complex AI systems is an essential part of creating fair, transparent, and accurate systems. We offer a set of desiderata that capture those ambitions, and describe a practical way to meet them, which involves training an AI system to build a causal model of itself, enabling it to explain its own behaviour in a meaningful way.
To act safely and ethically in the world, AI agents must be able to reason about harm and avoid harmful actions. We’ll introduce collaborative work on a novel statistical measure called counterfactual harm, and demonstrate how it overcomes problems with standard approaches to avoiding harmful policies.
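The general idea of a counterfactual notion of harm can be illustrated with a toy sketch (a simplified formalisation for intuition only, not the paper's exact measure): an action harms someone in a given scenario to the extent that its outcome is worse than the outcome a default action would have produced in that same scenario:

```python
def expected_counterfactual_harm(scenarios, utility):
    """Toy sketch of counterfactual harm: in each scenario, harm is the
    shortfall max(0, U(default outcome) - U(actual outcome)); expected
    harm averages this over scenario probabilities. Hypothetical
    formalisation for illustration, not the paper's definition."""
    return sum(p * max(0.0, utility[y_default] - utility[y_action])
               for p, y_action, y_default in scenarios)

# Hypothetical example: a treatment cures 90% of patients, while 50%
# would have recovered without it. Harm only accrues in the scenario
# where the patient stays ill but would have recovered untreated.
utility = {"healthy": 1.0, "ill": 0.0}
scenarios = [  # (probability, outcome under treatment, outcome under default)
    (0.45, "healthy", "healthy"),
    (0.45, "healthy", "ill"),
    (0.05, "ill", "healthy"),  # the harmful case
    (0.05, "ill", "ill"),
]
harm = expected_counterfactual_harm(scenarios, utility)  # 0.05
```

The contrast with purely outcome-based measures is that the same factual result (patient stays ill) counts as harmful only when the counterfactual default would have gone better.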
Finally, we’re presenting our new paper, which proposes ways to diagnose and mitigate failures in model fairness caused by distribution shifts, showing how important these issues are for the deployment of safe ML technologies in healthcare settings.
See the full range of our work at NeurIPS 2022 here.