Bengio flipped:

Reasoning through arguments against taking AI safety seriously
https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/

"I worry that with the current trajectory of public and political engagement with AI risk, we could collectively sleepwalk - even race - into a fog behind which could lie a catastrophe that many knew was possible, but whose prevention wasn't prioritized enough."

Hinton flipped:

Why the Godfather of A.I. Fears What He's Built
https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinton-profile-ai

"People say, It's just glorified autocomplete," he told me, standing in his kitchen. (He has suffered from back pain for most of his life; it eventually grew so severe that he gave up sitting. He has not sat down for more than an hour since 2005.) "Now, let's analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what's being said. That's the only way. So by training something to be really good at predicting the next word, you're actually forcing it to understand. Yes, it's 'autocomplete'-but you didn't think through what it means to have a really good autocomplete." Hinton thinks that "large language models," such as GPT, which powers OpenAI's chatbots, can comprehend the meanings of words and ideas.

LeCun half flipped:

Meta AI Head: ChatGPT Will Never Reach Human Intelligence
https://www.pymnts.com/artificial-intelligence-2/2024/meta-ai-head-chatgpt-will-never-reach-human-intelligence/

These models, LeCun told the FT, "do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan...hierarchically."

Bostrom was shut down:

Oxford shuts down institute run by Elon Musk-backed philosopher
https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

Nick Bostrom's Future of Humanity Institute closed this week in what the Swedish-born philosopher says was "death by bureaucracy"

Metzinger had a clash with political reality:

Eine Frage der Ethik
https://www.zeit.de/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht
A question of ethics (via Google Translate)
https://www-zeit-de.translate.goog/digital/internet/2019-04/kuenstliche-intelligenz-eu-kommission-richtlinien-moral-kodex-maschinen-ethik/komplettansicht?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp

What is artificial intelligence allowed to do? Experts commissioned by the EU have looked into this question and developed ethical guidelines. Not everyone thinks they go far enough.