Much recent research in Artificial Intelligence (AI) and Machine Learning (ML) has focused on supervised learning, and more recently on building massive models and scaling them up (e.g. GPT-3, Gopher, AlphaCode). These aren't the only two lines of work, but is it perhaps a mistake for ML research to concentrate on them? Is something like Reinforcement Learning (RL) more promising? I'm not sure, but these are some opinions I've heard from others.
Physics simulation paper: [ Link ]
Face image generation paper: [ Link ]
Source of other AI-gen images used (DALL-E): [ Link ]