I intentionally did not read the papers on Transformers++ — I prefer to keep this technology as a black box, since my side project is a reasoning expert system, the opposite of neural networks. Nevertheless, my 2 cents:

With Transformers we had:

- mixture of experts
- multimodal
- chain of thought vs. tree of thoughts
- supervised learning vs. reinforcement learning


What is still open?

As mentioned, I am not into the papers, but it seems to me that the separation of model and method, of data and algorithms, of object and subject is yet to come; model and method do not appear to be clearly separated.

And Zuse's "Devil's Wire" is yet to come: the machine establishing a feedback loop onto itself, both data and algorithms. With reinforcement learning we already have the first step of the Devil's Wire, where the machine feeds itself with data.
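The "machine feeds itself with data" step can be sketched as a minimal self-feedback loop. This is only an illustration under my own assumptions (all names and numbers are hypothetical, not from any Transformer paper): the model generates its own data, scores it with a reward, and updates itself on what it generated.

```python
import random

def self_feedback_loop(steps=50, seed=0):
    """Toy sketch of a self-feedback ("Devil's Wire") loop: the model's
    only parameter is updated exclusively on data the model itself
    generates. Names and the target value are hypothetical."""
    rng = random.Random(seed)
    weight = 0.0   # a single "model parameter"
    target = 3.0   # the behaviour the environment rewards
    for _ in range(steps):
        # 1. The model generates its own data point (exploration noise).
        x = weight + rng.uniform(-1.0, 1.0)
        # 2. A reward signal scores the self-generated data.
        reward = -(x - target) ** 2
        # 3. The model updates itself on its own output: the feedback wire.
        if reward > -(weight - target) ** 2:
            weight = x  # keep the better self-generated behaviour
    return weight

print(self_feedback_loop(200))
```

Run long enough, the loop climbs toward the rewarded behaviour using nothing but its own outputs as training data — which is the structural point: data and algorithm are wired back into each other.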

AGI/ASI ETA?

I mentioned already: in 2025 we have the hardware for an AGI and in 2026 an AGI present; in 2027 the hardware for an ASI and in 2028 an ASI present; in 2029/2030 then possibly a collective of ASIs, ZeroOne, or the like.

Maybe the planet will blow up first, I don't know.

Progress remains exciting.