" So according to Pedro Domingos The Master Algorithm book, in the AI field you have to first approximation these camps: - Connectionists like to mimic the brain's interconnected neurons (neuroscience): artificial neural networks, deep learning, spiking neural networks, liquid neural networks, neuromorphic computing, hodgkin-huxley model,... - Symbolists like symbol manipulation: decision trees, random decision forests, production rule systems, inductive logic programming,... - Bayesians like uncertainity reduction based on probability theory (staticians): bayes classifier, probabilistic graphical models, hidden markov chains, active inference,... Frequentists exist too, defining probability as a limit of number of experiments instead of a subjective prior probability that is being updated with new data. - Evolutionaries like evolution (biologists): genetic algorithms, evolutionary programming - Analogizers like identifying similarities between situations or things (psychologists): k-nearest neighbors, support vector machines,... Then there are various hybrids: neurosymbolic architectures (AlphaZero for chess, general program synthesis with DreamCoder), neuroevolution, etc. And technically you can also have: - Reinforcement Learners like learning from reinforcement signals: reinforcement learning (most game AIs use it like AlphaZero for chess uses it, LLMs like ChatGPT start to use it more,...) - Causal Inferencers like to build a causal model and can thereby make inferences using causality rather than just correlation: causal AI - Compressionists who see cognition as a form of compression: autoencoders, huffman encoding, Hutter prize - Divergent Novelty Searchers love divergent search for novelty without objectives: novelty search And you can hybridize these too with deep reinforcement learning, novelty search with other objectives etc. I love them all and want to merge them, or find completely novel approaches that we haven't found yet. :D Would you add any camps? What is your idea of the ideal AI architecture? I think no AI approach is fully universally steamrolling all others and each is better for different usescases. My dream for more fully general AI would be to see some system that uses a lot of these approaches in hybrid way and uses which approach is the most optimal for the task at hand on the fly. 
The core could maybe, for example, have:

- a more biologically based neural engine for more adaptability: like liquid neural networks and ideas from Liquid AI and Joscha Bach, but maybe still somehow using the idea of attention that is now so relatively successful in transformers in deep learning
  - operating neurosymbolically and building (possibly also Bayesian) neurosymbolic world models in which you abstract and plan, for more interpretability, reliability, and generalization power across different types of tasks, while losing as little of the flexibility of the neural substrate as possible: like DreamCoder and other program-synthesis ideas from Francois Chollet, which could also synthesize symbolic search or simple statistical programs to explain data
- trained via a combination of:
  - convergent gradient descent, since that works so relatively well: like almost all of deep learning currently
  - more biologically plausible algorithms: like maybe the forward-forward algorithm or Hebbian learning (a minimal Hebbian update is sketched after this section)
  - reinforcement learning, to incentivize more generalization from verifier signals: like AlphaZero and o3
  - and some evolution and objectiveless divergent novelty search, to get the creativity of evolution, for an open-endedness that never stops accumulating new knowledge and that incentivizes exploration into the unknown and out-of-the-box breakthroughs: like evolutionary algorithms and novelty search and ideas from Kenneth Stanley (a toy novelty-search loop is also sketched after this section)

Will something similar work? I have no clue. I'm thinking about how to hybridize various systems that already work well in various contexts. I should try it more. :D Would you add any camps? What is your idea of the ideal AI architecture? "

" I want a visualization of the mutations of all words across languages from their various common ancestors over time, with connections showing how they influenced each other over time, and with similarity visualized via color gradients or the shapes of nodes and connections.

But what I'd like most is to build a similar evolutionary tree over time for all of science, math, technology, and philosophy: how they influence each other over time, mutate, deepen, broaden, merge into interdisciplinary fields and various unifications, how new fields with new concepts emerge, how convergent evolution appears, etc. 😄

Or just some specific fields with their concepts, like AI, intelligence, physics, cognitive science.

Or a similar visualized evolutionary tree could be great for stories, literature, shows, games, art, including the properties of the various characters and universes 😄

But a visualization of the evolutionary tree of biology over time could be nice too.

Or of all physical systems in the universe over time 😄

Or completely fictional universes, like Pokémon.

Or the evolution of completely alien creatures, or of alien structures and concepts 😄

Or sci-fi technologies of the future 😄

Or human cultures 😄

Or all of these possible evolutionary graphs connected in one place.

Or an evolutionary graph of the possible evolutionary graphs of all possible things.

I think a lot of that information could be mined from Wikipedia. Parts of that could probably be automated somehow. "

3Blue1Brown-like videos for the mathematics of intelligence

Self-organizing AI: neural cellular automata, graph neural cellular automata, hypergraph neural cellular automata, neural cellular automata for text (a minimal NCA update step is sketched below)

[[2211.01233] Attention-based Neural Cellular Automata](https://arxiv.org/abs/2211.01233)

Create a physics simulation with extremely complex nonlinear dynamical structures emerging, dissolving, emerging, dissolving through time, in a loop, in a cycle: order to chaos to order to chaos, etc.
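Back on the hybrid-architecture list above: a minimal sketch of a plain Hebbian update ("neurons that fire together wire together") on one tiny linear layer. The layer sizes, learning rate, and normalization are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(8)             # pre-synaptic activations
W = rng.random((4, 8)) * 0.1  # weights of a tiny linear layer
eta = 0.01                    # learning rate

y = W @ x                     # post-synaptic activations
W += eta * np.outer(y, x)     # Hebbian rule: dW = eta * y * x^T
W /= np.linalg.norm(W)        # crude normalization so weights don't blow up
```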
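And for the Kenneth-Stanley-style divergent search mentioned in that list: a toy novelty-search loop that selects for behavioral distance from an archive instead of for any objective. The 2-D "behavior" mapping and all parameters are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def behavior(genome: np.ndarray) -> np.ndarray:
    # Toy behavior characterization: where the genome "lands" in 2-D.
    return np.tanh(genome[:2] + 0.1 * genome[2:])

def novelty(b: np.ndarray, archive: list, k: int = 5) -> float:
    # Novelty = mean distance to the k nearest behaviors seen so far.
    if not archive:
        return float("inf")
    dists = sorted(float(np.linalg.norm(b - a)) for a in archive)
    return float(np.mean(dists[:k]))

population = [rng.normal(size=4) for _ in range(20)]
archive: list = []

for generation in range(50):
    # Rank purely by novelty -- no task objective anywhere.
    scored = sorted(population, key=lambda g: novelty(behavior(g), archive),
                    reverse=True)
    archive.extend(behavior(g) for g in scored[:3])   # most novel -> archive
    parents = scored[: len(population) // 2]          # select for novelty only
    population = [p + 0.1 * rng.normal(size=4) for p in parents for _ in (0, 1)]
```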
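On the neural-cellular-automata notes above: a minimal NCA update step in the spirit of the growing-NCA setup (perception via fixed Sobel filters, a small update rule applied identically at every cell). The weights here are untrained and random, just to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 32, 32, 8                    # grid size and channels per cell
state = rng.random((H, W, C)) * 0.1

# Perception: each cell sees itself plus Sobel-estimated x/y gradients of
# each channel (identity + dx + dy -> 3*C features per cell).
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
sobel_y = sobel_x.T

def conv2d_same(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    # 3x3 cross-correlation on a toroidal (wrapped) grid.
    out = np.zeros_like(img)
    padded = np.pad(img, 1, mode="wrap")
    for i in range(3):
        for j in range(3):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Tiny "network": one random linear layer mapping 3*C features -> C update.
W_update = rng.normal(scale=0.1, size=(3 * C, C))

def nca_step(state: np.ndarray) -> np.ndarray:
    feats = [state]
    feats.append(np.stack([conv2d_same(state[..., c], sobel_x) for c in range(C)], -1))
    feats.append(np.stack([conv2d_same(state[..., c], sobel_y) for c in range(C)], -1))
    perception = np.concatenate(feats, axis=-1)  # (H, W, 3*C)
    update = np.tanh(perception @ W_update)      # (H, W, C)
    return state + 0.1 * update                  # residual local update

state = nca_step(state)
```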
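A bare-bones numpy sketch of the oscillating-forces particle simulation prompted just above (and elaborated in the next paragraph): a few time-varying force fields whose strengths oscillate at different frequencies, so structures form and dissolve. All constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
pos = rng.uniform(-1, 1, (N, 2))
vel = np.zeros((N, 2))
dt = 0.02

def forces(pos: np.ndarray, t: float) -> np.ndarray:
    f = np.zeros_like(pos)
    # Force 1: swirling field whose strength oscillates (forms vortices).
    swirl = np.stack([-pos[:, 1], pos[:, 0]], axis=1)
    f += np.sin(0.7 * t) * 0.5 * swirl
    # Force 2: attraction to a wandering point (clusters, then releases).
    center = np.array([np.cos(0.3 * t), np.sin(0.5 * t)]) * 0.5
    f += np.cos(0.4 * t) * (center - pos)
    # Force 3: short-range pairwise repulsion (dissolves clusters).
    diff = pos[:, None, :] - pos[None, :, :]
    d2 = (diff ** 2).sum(-1) + 1e-3
    f += 0.002 * (diff / d2[..., None]).sum(axis=1)
    return f

for step in range(1000):
    t = step * dt
    vel = 0.98 * vel + dt * forces(pos, t)  # damping + integration
    pos = (pos + dt * vel + 1) % 2 - 1      # wrap positions to [-1, 1)
    # (hook a matplotlib scatter plot here to watch the structures)
```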
Make a beautiful, creative, fascinating mathematical physics show where diverse complex structures form and die and form and die all over the place! Nothing boring allowed. Add a lot of diverse cycling forces influencing particles to make it very unpredictable, and make their strengths oscillate, including forces that form structures and forces that dissolve structures. Cycle between countless unique, creative, nonlinear, chaotic alien forces creating infinitely complex dynamics! Make the forces very alien, novel, and creative, with oscillating frequencies. Make sure they form highly nonlinear cool structures that change all the time. Make sure the particles never cluster, that the dynamics never stay the same, that they are never uniformly distributed, etc. Make sure it's not only clustering and redistribution, but also the formation of highly complex, nonlinear, unpredictable structures with structure across all scales. Cycle between these different forces. Make sure clustering in the middle or at the corners doesn't happen. (A bare-bones version of this is sketched above.)

Train multimodal reasoning LLMs in embodied simulations.

LLM self-play without human data: https://fxtwitter.com/AndrewZ45732491/status/1919920459748909288

[[2505.03335] Absolute Zero: Reinforced Self-play Reasoning with Zero Data](https://arxiv.org/abs/2505.03335)

New Sutton interview about his new paper on the superiority of RL: [https://www.youtube.com/watch?v=dhfJfQ5NueM](https://www.youtube.com/watch?v=dhfJfQ5NueM)

[https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf](https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf)

Maybe more open-ended evolutionary / divergent novelty search in the space of reward functions?

Simulate the principle of chaotic maximum nonstationary action.

Personally, I want to see more physics in mech interp: what is the physics of the formation, self-organization, and activation (the dynamics) of all these features and circuits during learning and inference 😄

I want to create a big big big big big big big big big big map of all of knowledge, including all its structure and relationships:

- Create a big visualization that is a map of all of knowledge as a graph where you can recursively open different nodes
- Generate subtopic nodes in real time using web search and an LLM call, and cache them (a minimal node-expansion sketch follows this list)
- Add "search on Google", "open on wiki", "explain", "generate Wikipedia page", "extract/explain equations" buttons
- Removable buttons
- Automatic hypergraph categorization of existing nodes
- If you generate a node that already exists, connect to the already existing node

Someone should make a meta-tool that calls all the vibecoding programming scaffolding systems and then runs an agentic mega-discussion about the result.

AI for fundamental physics.

Data are another factor besides algorithms and compute. So maybe one could create a kind of consensual spyware that collects as much about you as you allow it to, is fully open source, and sends everything, as decentralized as possible, into decentralized training infrastructure, so that no single person owns it.
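A minimal sketch of the recursive knowledge-map idea above: expand a node into subtopic nodes with an LLM call, cache the result, and connect to nodes that already exist. The `llm_complete` function is a hypothetical stand-in for whatever LLM API you'd actually use.

```python
import json
from functools import lru_cache

import networkx as nx

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call (hosted or local model)."""
    raise NotImplementedError

graph = nx.Graph()

@lru_cache(maxsize=None)  # cache: never pay for the same expansion twice
def subtopics(topic: str) -> tuple:
    raw = llm_complete(
        f'List 5-10 direct subtopics of "{topic}" as a JSON array of strings.'
    )
    return tuple(json.loads(raw))

def expand(topic: str) -> None:
    """Open a node: generate its subtopics and wire them into the graph.

    If a generated node already exists, add_edge just connects to it,
    which is what slowly turns the tree into a shared hypergraph-ish map.
    """
    graph.add_node(topic)
    for sub in subtopics(topic):
        graph.add_edge(topic, sub)  # creates the node if it's missing

# Usage: expand("mathematics"); then expand any child the user clicks on.
```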
It's true that most data is simply low quality. Right now, as far as data goes, the biggest moat is data from the smartest people on the planet. But beyond data, the biggest moat now is algorithms for reasoning/intelligence that don't need data, which in my view is a bigger moat than any data. And it looks like Google is starting to solidly pull ahead of everyone else there. So their models need to be reverse engineered and distilled into open-source models.

Hmm, there were papers doing process supervision (real-time verification of intermediate steps) during reinforcement learning training. At first that was thought to be the path to SoTA, but then it turned out it's better to primarily verify the final results. Technically, it might be possible to add a verification signal for each intermediate step during inference, checking whether the step is correct via an implicit function call, e.g. just a WolframAlpha call where that's possible; that could also help (a sketch of this loop follows at the end of this section).

You could turn this into an AI architecture: "Art is an algorithm falling in love with the shape of the loss function itself." - Joscha Bach

[https://www.youtube.com/watch?v=U6tQf7a3Ndo](https://www.youtube.com/watch?v=U6tQf7a3Ndo)

[https://www.youtube.com/watch?v=iyhJ9BEjink](https://www.youtube.com/watch?v=iyhJ9BEjink)

The field of RL itself is pretty big. I'm expecting more of it to get integrated with LLMs.

[On the Biology of a Large Language Model](https://transformer-circuits.pub/2025/attribution-graphs/biology.html)

Could you use it to add self-awareness of the LLM's state by doing a bit of autoregression, reverse engineering circuits using attribution graphs etc., encoding these graphs into tokens that you append, and then continuing autoregression?

KoboldCpp

Continuous Thought Machines

This makes me think: could you get some form of the model's "awareness" of its own circuits if you gave it the information about the imperfectly reverse-engineered circuits as an implicit function call result? Since when we introspect, we do it by starting/"calling" the "introspecting process". Or I wonder if it's possible to somehow hardcode some form of this idea at a more architectural level. But I guess researchers trying to implement some form of metacognition have already been attempting similar stuff for years.

Not sure if this makes sense, but my idea was something along the lines of: do a bit of autoregression, then automatically reverse engineer the circuits using attribution graphs etc., then encode these graphs into tokens that you append, then continue autoregression; and you could also maybe train on it (a skeleton of this loop is sketched below). There are for sure tons of engineering problems in that idea, if it's possible to make it work at all. Another issue is that the feature graphs in that Biology of LLMs paper were labeled manually IIRC, so you would have to automate that; maybe using an LLM could work to at least some degree. And it's costly and slow.
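A sketch of the per-step verification idea above: after each reasoning step, fire a checker call and inject the verdict back into the context. `generate_step` and `wolframalpha_check` are hypothetical stand-ins; the real WolframAlpha API and your actual decoding loop would replace them.

```python
def generate_step(context: str) -> str:
    """Hypothetical: decode one reasoning step from the model."""
    raise NotImplementedError

def wolframalpha_check(step: str):
    """Hypothetical: return a verdict string for checkable steps, None otherwise.

    In a real system this would call the WolframAlpha API for the
    arithmetic/algebraic steps it can parse, and skip the rest.
    """
    raise NotImplementedError

def verified_reasoning(question: str, max_steps: int = 20) -> str:
    context = question
    for _ in range(max_steps):
        step = generate_step(context)
        context += "\n" + step
        verdict = wolframalpha_check(step)  # the implicit function call
        if verdict is not None:
            # Inject the verification signal so the model can self-correct.
            context += f"\n[verifier: {verdict}]"
        if step.strip().startswith("Final answer"):
            break
    return context
```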
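And a skeleton of the "introspection via attribution graphs" loop just described: autoregress a little, reverse engineer (and auto-label) the active circuits, serialize them into tokens, append, continue. Every function here is a hypothetical placeholder; as noted above, the circuit extraction and labeling are the genuinely hard, costly parts.

```python
def autoregress(tokens: list, n: int) -> list:
    """Hypothetical: extend the token sequence by n model-generated tokens."""
    raise NotImplementedError

def extract_attribution_graph(tokens: list) -> dict:
    """Hypothetical: reverse engineer the circuits active on this prefix,
    e.g. via attribution graphs, with features auto-labeled by an LLM."""
    raise NotImplementedError

def encode_graph_as_tokens(graph: dict) -> list:
    """Hypothetical: serialize the (imperfect) circuit graph into tokens."""
    raise NotImplementedError

def introspective_generation(prompt_tokens: list,
                             chunk: int = 32, rounds: int = 4) -> list:
    tokens = list(prompt_tokens)
    for _ in range(rounds):
        tokens = autoregress(tokens, chunk)        # 1. generate a bit
        graph = extract_attribution_graph(tokens)  # 2. inspect own circuits
        tokens += encode_graph_as_tokens(graph)    # 3. feed them back in
    # One could also train on such traces, as suggested above.
    return autoregress(tokens, chunk)              # finish generation
```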