Researchers at Google DeepMind have introduced a new method for accelerating AI training that, they claim, significantly reduces the computational resources and time required.
The technique, called multimodal contrastive learning with joint example selection (JEST), matches or surpasses state-of-the-art models while requiring up to 13 times fewer training iterations and 10 times less computation. Given the enormous energy appetite of large-scale AI systems, this advance is noteworthy. The cost is not only electricity: data centers also consume water for cooling. Microsoft's water consumption rose 34% from 2021 to 2022, largely attributed to heightened AI computing demand, and researchers have estimated that ChatGPT consumes nearly half a liter of water for every 5 to 50 prompts.
The International Energy Agency (IEA) projects that data center electricity consumption could double between 2022 and 2026, and it draws a parallel between AI's energy demands and those of the energy-intensive cryptocurrency mining industry. JEST's efficient data selection may blunt some of that growth: by steering training toward the most useful data, it cuts the number of iterations and the compute required, and with them overall energy consumption. The sketch below illustrates the selection idea.
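To make the core idea concrete, here is a minimal, hypothetical sketch in Python/PyTorch of the "learnability" criterion that JEST builds on: prioritize examples that the model being trained still finds hard but that a pretrained reference model finds easy. The function names (`learnability`, `select_batch`) and the batch sizes are illustrative, not from the paper, and this per-example version is a simplification; the actual method scores entire sub-batches jointly against a multimodal contrastive loss matrix rather than ranking examples independently.

```python
import torch

def learnability(learner_loss: torch.Tensor, reference_loss: torch.Tensor) -> torch.Tensor:
    """Score each candidate: high loss for the learner (still informative to it)
    minus low loss for a pretrained reference model (known to be learnable)."""
    return learner_loss - reference_loss

def select_batch(learner_loss: torch.Tensor, reference_loss: torch.Tensor, k: int) -> torch.Tensor:
    """Return the indices of the k most 'learnable' candidates from a super-batch."""
    scores = learnability(learner_loss, reference_loss)
    return torch.topk(scores, k).indices

# Toy usage: filter a 4096-example super-batch down to the 512 most learnable examples.
learner_losses = torch.rand(4096)    # placeholder per-example losses from the model in training
reference_losses = torch.rand(4096)  # placeholder per-example losses from the reference model
batch_indices = select_batch(learner_losses, reference_losses, k=512)
print(batch_indices.shape)  # torch.Size([512])
```

The payoff is that compute spent scoring candidates is far cheaper than compute spent training on uninformative ones, which is how fewer iterations can translate into lower total energy use.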
AI is power-hungry. If JEST (and JEST-like) training methods substantially reduce power consumption, and if synthetic data can be used for ethical training, and if the AI community figures out a way to align AI output with human expectations, all we’ll have to do is figure out how to find work for the people these tools displace. That’s a lot of ifs.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.