[Diffusion is spectral autoregression – Sander Dieleman](https://sander.ai/2024/09/02/spectral-autoregression.html)
[ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models](https://arxiv.org/html/2505.24864v1)
Self-organizers: self-organizing AI, like neural cellular automata
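A minimal sketch of the neural-cellular-automata idea: every cell holds a small state vector and applies the same local update rule to its 3x3 neighborhood, so global patterns self-organize from purely local computation. All weights here are random stand-ins for a trained rule, and the grid size, channel count, and update mask are illustrative assumptions, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each cell holds a CHANNELS-dim state; one seed cell is set "alive".
SIZE, CHANNELS = 16, 4
grid = np.zeros((SIZE, SIZE, CHANNELS))
grid[SIZE // 2, SIZE // 2] = 1.0

# Random per-offset weights standing in for a trained update rule.
w = rng.normal(0, 0.1, size=(3, 3, CHANNELS, CHANNELS))

def perceive(g):
    """Mix each cell's 3x3 neighborhood (toroidal wraparound) through w."""
    out = np.zeros_like(g)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
            out += shifted @ w[dy + 1, dx + 1]
    return out

def step(g):
    update = np.tanh(perceive(g))
    # Stochastic update mask: only ~half the cells fire each step,
    # keeping the dynamics asynchronous and self-organizing.
    mask = rng.random(g.shape[:2])[..., None] < 0.5
    return g + mask * update

for _ in range(20):
    grid = step(grid)

print(grid.shape)  # (16, 16, 4) -- activity spreads outward from the seed
```

In a trained NCA the weights would be optimized by gradient descent through the rollout (as in the growing-NCA work this note alludes to); the point here is just the architecture: one shared local rule, iterated.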
I've been thinking a lot lately about whether it's possible to somehow hybridize all these approaches, or whether that would be too much of an amalgamation and just wouldn't work. Time to test it.
The idea is probably some combination of:
- neuro for flexibility (LLM stuff)
- symbolic for better generalization and more rigid circuits where needed (Francois Chollet ideas, like DreamCoder, MCTS, symbolic math/physics engines, python execution environment)
- evolutionary/novelty search for more creative, open-ended discovery (Kenneth Stanley ideas)
- better RL algorithms for better generalization and other stuff (Rich Sutton ideas)
- more biologically inspired parts of the architecture for better data efficiency and maybe adaptability and some other stuff (LiquidAI/neuromorphic ideas, maybe self-organizing ideas like neural cellular automata, the forward-forward algorithm, or Hebbian learning, but also in conjunction with gradient descent)
- maybe some physics bias (like Hamiltonian neural networks have)
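The physics-bias idea in the last bullet can be sketched concretely: instead of a network predicting state derivatives directly, a small network outputs a scalar energy H(q, p), and the dynamics are forced through Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q, baking (approximate) energy conservation into the architecture. The MLP weights below are random stand-ins for a trained energy function, and the finite-difference gradient is an illustrative shortcut for autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP mapping a (q, p) state to a scalar energy H.
W1 = rng.normal(0, 0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1))

def H(state):
    """Randomly initialized energy function (would be trained in practice)."""
    h = np.tanh(state @ W1 + b1)
    return (h @ W2).item()

def hamiltonian_field(state, eps=1e-5):
    """Finite-difference grad of H, rearranged by Hamilton's equations."""
    grad = np.zeros(2)
    for i in range(2):
        d = np.zeros(2)
        d[i] = eps
        grad[i] = (H(state + d) - H(state - d)) / (2 * eps)
    # dq/dt = dH/dp, dp/dt = -dH/dq: the vector field is divergence-free
    # by construction, which is the inductive bias HNNs exploit.
    return np.array([grad[1], -grad[0]])

# Roll out the dynamics with a small Euler step.
state = np.array([1.0, 0.0])
e0 = H(state)
for _ in range(1000):
    state = state + 0.01 * hamiltonian_field(state)
print(state, H(state) - e0)  # energy typically drifts only slowly
```

Training such a model means regressing the induced vector field onto observed trajectories; the conservation structure then comes for free rather than having to be learned from data.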
Omnimodality (NVIDIA)
[\[2205.10343\] Towards Understanding Grokking: An Effective Theory of Representation Learning](https://arxiv.org/abs/2205.10343)