December 23, 2022, by Melissa Heikkilä

In 2022, AI got creative. AI models can now produce remarkably convincing pieces of text, pictures, and even videos, with just a little prompting.

It's only been nine months since OpenAI set off the generative AI explosion with the launch of DALL-E 2, a deep-learning model that can produce images from text instructions. That was followed by a breakthrough from Google and Meta: AIs that can produce videos from text. And it's only been a few weeks since OpenAI released ChatGPT, the latest large language model to set the internet ablaze with its surprising eloquence and coherence.

The pace of innovation this year has been remarkable, and at times overwhelming. Who could have seen it coming? And how can we predict what's next?

Luckily, here at MIT Technology Review we're blessed with not just one but two journalists who spend all day, every day obsessively following all the latest developments in AI, so we're going to give it a go. Here, Will Douglas Heaven and Melissa Heikkilä tell us the four biggest trends they expect to shape the AI landscape in 2023.

Over to you, Will and Melissa.

Get ready for multipurpose chatbots

We saw a glimpse of what such multipurpose models could look like with DeepMind's Flamingo, a "visual language model" revealed in April, which can answer queries about images using natural language. Then, in May, DeepMind announced Gato, a "generalist" model trained using the same techniques behind large language models to perform different types of tasks, from describing images to playing video games to controlling a robot arm.

But of course, there's a downside. Next-generation language models will inherit most of this generation's problems, such as an inability to tell fact from fiction and a penchant for prejudice. Better language models will make it harder than ever to trust different types of media. And because nobody has fully figured out how to train models on data scraped from the internet without absorbing the worst of what the internet contains, they will still be filled with filth.

Regulators, meanwhile, are catching up. In Europe, the use of facial recognition in public places will be restricted for law enforcement, and there is even momentum to forbid it altogether for both law enforcement and private companies, although a total ban will face stiff resistance from countries that want to use these technologies to fight crime. The EU is also working on a new law to hold AI companies accountable when their products cause harm, such as privacy infringements or unfair decisions made by algorithms.

All these regulations could shape how technology companies build, use, and sell AI technologies. However, regulators have to strike a tricky balance between protecting consumers and not hindering innovation, something tech lobbyists are not shy about reminding them of. AI is a field that is developing lightning fast, and the challenge will be to keep the rules precise enough to be effective but not so specific that they quickly become outdated. As with EU efforts to regulate data protection, if new laws are implemented correctly, the next year could usher in a long-overdue era of AI development with more respect for privacy and fairness.

Meanwhile, the big companies that have historically dominated AI research are implementing massive layoffs and hiring freezes as the global economic outlook darkens.
AI research is expensive, and as purse strings are tightened, companies will have to be very careful about picking which projects they invest in. They are likely to choose whichever have the potential to make them the most money, rather than the most innovative, interesting, or experimental ones, says Oren Etzioni, the CEO of the Allen Institute for AI, a research organization.

Next year could be a boon for AI startups, Etzioni says. There is a lot of talent floating around, and in recessions people often rethink their lives, going back into academia or leaving a big corporation for a startup, for example.

Startups and academia could become the centers of gravity for fundamental research, says Mark Surman, the executive director of the Mozilla Foundation. "We're entering an era where [the AI research agenda] will be less defined by big companies," he says. "That's an opportunity."

AI is also reshaping drug discovery. Between them, DeepMind and Meta have produced structures for hundreds of millions of proteins, including all those known to science, and shared them in vast public databases. Biologists and drug makers are already benefiting from these resources, which make looking up new protein structures almost as easy as searching the web (a minimal lookup sketch appears at the end of this piece). But 2023 could be the year that this groundwork really bears fruit.

DeepMind has spun off its biotech work into a separate company, Isomorphic Labs, which has been tight-lipped for more than a year now. There's a good chance it will come out with something big this year.

But clinical trials can take years, so don't hold your breath. Even so, the age of pharmatech is here, and there's no going back. "If done right, I think that we will see some unbelievable and quite amazing things happening in this space," says Lovisa Afzelius at Flagship Pioneering, a venture capital firm that invests in biotech.

—Will Douglas Heaven
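To give a feel for the protein "lookup" described above, here is a minimal sketch of fetching one predicted structure from the public AlphaFold Protein Structure Database. It assumes the database's current file-naming scheme (the AF-<accession>-F1-model_v4.pdb URL pattern, whose version suffix may change in later releases) and uses human hemoglobin subunit alpha (UniProt accession P69905) as the example query.

```python
import requests

# Fetch a predicted protein structure from the public AlphaFold database.
# The URL pattern (AF-<UniProt accession>-F1-model_v4.pdb) is an assumption
# based on the database's file naming at the time of writing; the version
# suffix may change in later releases.
ACCESSION = "P69905"  # human hemoglobin subunit alpha, used as an example
URL = f"https://alphafold.ebi.ac.uk/files/AF-{ACCESSION}-F1-model_v4.pdb"

response = requests.get(URL, timeout=30)
response.raise_for_status()  # fail loudly if the entry or URL pattern is wrong

# Save the prediction as a standard PDB file, ready for any
# molecular-visualization or docking tool.
out_path = f"AF-{ACCESSION}.pdb"
with open(out_path, "w") as f:
    f.write(response.text)

print(f"Saved predicted structure for {ACCESSION} to {out_path}")
```

Because the predictions are served as ordinary structure files over HTTPS, they drop straight into existing structural-biology tooling, which is part of what makes these databases so easy for biologists and drug makers to adopt.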
This article was originally published by MIT Technology Review.