" a lot of very evolutionary old behaviors are hardwired in us really hard and would most likely develop in isolation as well thanks to genetics, but we also learn many of behaviors throughout our lifes, while genes also seem to predispose for a lot of more high level behaviors imitation learning is big part of how we learn, but there's also other kinds of learning that don't involve imitation, otherwise no novel and generalizing behaviors would emerge there's also reinforcement learning, and major form of it is learning and adapting from feedback in the form of a reward signal that labels behavior as correct or incorrect, without showing any examples of correct behavior that can be imitated that's scientifically pretty established to work relatively well for biological organisms and big factor is also probably something along the lines of evolutionary divergent search optimizing for novelty, combined with convergently optimizing some evolutionary objectives approximately encoded as basic needs in our motivation engines the more i try to look for what all kinds of learning algorithms the brain and biology in general might be using, the more i'm fascinated i am by their complexity and openendedness " [https://www.youtube.com/watch?v=_2vx4Mfmw-w](https://www.youtube.com/watch?v=_2vx4Mfmw-w) https://www.researchgate.net/publication/46424802_Abandoning_Objectives_Evolution_Through_the_Search_for_Novelty_Alone I really wonder to what degree might humans do some kind of evolutionary divergent search optimizing for novelty AIs inventing science beyond limited human intuition constrained by our evolutionary programming is the dream existuje kreativita co je typu in distribution ale pak existuje out of distributon kreativita a zároveň je to na spektru minimálně takhle to vidí nějaká literatura pokoušející se kreativitu formalizovat pro vytvoření kreativnějších AI systémů 😄 out of distribution kreativita pracuje s něčím co není jenom replikace a rekombinace existujících 
struktur, ale s něčím novel, co ale není zároveň úplně noise jinak bychom byli stuck ve vědě a neinovovali např ve fyzice obecná relativita a kvantová mechanika je dle mě velkej shift od klasický mechaniky, co je mnohem víc založená na novel strukturách co před tím nebyly nebo vynález různé matematiky co před tím nebyla I don't think AI will replace scientists. I think human intelligence will always have a place in science, and adding more diverse intelligences into the mix acts more as a multiplier of our capabilities and as an upgrade in places where our brain's architecture made by evolution is too limited and constrained. That seems to have been the case so far, each type of intelligence excelling in different ways, that are even stronger together. And if it will lead to for example breakthroughs in physics or curing diseases faster, then I think that's amazing. You're maybe right that I might be feeling the AGI/ASI less lately. I just feel like that so far we aren't creating replicas of human intelligence, but all sorts of diverse alien intelligences that share some similarities with human intelligences, but they're always deeply different. But in the future there might be some AI system that is a replica of human intelligence, or more general than that, such that it can shapeshift into human intelligence as subset of its capabilities, and do everything that humans can do, or do it better than humans, or do more things than humans. 
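The reward-only learning mentioned above (a scalar signal saying correct/incorrect, with no examples of correct behavior to imitate) can be illustrated with a minimal multi-armed bandit sketch. The arm reward probabilities and all parameters below are made up purely for illustration:

```python
import random

def epsilon_greedy_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Learn which arm is best purely from a scalar reward signal.

    No demonstrations are ever shown; the agent only sees 0/1 feedback
    after each choice, and still converges on the best behavior.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    counts = [0] * n    # pulls per arm
    values = [0.0] * n  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:              # explore a random arm
            arm = rng.randrange(n)
        else:                                   # exploit current best estimate
            arm = max(range(n), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

# Hypothetical arm probabilities: arm 2 is truly the best.
estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
print(estimates)  # the agent should rank arm 2 highest
```

The point is just that a bare correct/incorrect signal, plus exploration, is enough for a behavior to be learned without any example to copy.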
Being able to solve previously unseen tasks that require generalization/abstraction (mathematical or not) from previously seen or solved tasks, and/or using hardwired biases in architecture, is my favorite simplest definition of general intelligence. It is also on a spectrum, and both humans and all sorts of current AI systems are somewhere on that generality spectrum, where humans are probably the most general systems that exist overall today, but there can be much more general systems in the future.

Human creativity or AI creativity? Why not both? I love both human and machine creativity for overlapping and also different reasons! There is overlap, but some stuff is possible only through human creativity, and some other stuff only through machine creativity! And both are absolutely golden and lovely for all sorts of reasons! The same goes for many other domains where something can be created. And human-machine cooperation can produce even more mindblowing mixes in many domains!

I feel that for total replacement you would need more human-like intelligence, but we aren't building fully human intelligence in machines, so there will still be many domains where human intelligence excels in some aspects, and machine intelligence in other aspects, and the ideal is merging them into collective intelligence, which to some degree has already been happening for a long time. But there is definitely overlap, which is why it can supercharge or replace certain domains.

[https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8) kenneth stanley. A lot of his arguments can be summarized as: Greatness cannot be only planned. Rage against only maximizing predefined objectives, embrace more divergent search full of discovery, novelty and accidental epiphany with serendipity.
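Kenneth Stanley's divergent search idea has a concrete algorithmic form in novelty search (the "Abandoning Objectives" paper linked earlier): selection rewards being behaviorally different, never being good at anything. A toy sketch, where genomes, mutation scale, and archive policy are all arbitrary illustrative choices:

```python
import math
import random

def novelty(b, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    dists = sorted(math.dist(b, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20, seed=0):
    """Tiny novelty search: there is no objective, only pressure to differ.

    Genomes are 2D points and the behavior is the point itself -- a toy
    stand-in for a real behavior characterization.
    """
    rng = random.Random(seed)
    population = [(0.0, 0.0)] * pop_size
    archive = [(0.0, 0.0)]
    for _ in range(generations):
        offspring = [(x + rng.gauss(0, 0.3), y + rng.gauss(0, 0.3))
                     for x, y in population for _ in range(2)]
        offspring.sort(key=lambda b: novelty(b, archive), reverse=True)
        population = offspring[:pop_size]  # keep the most novel individuals
        archive.extend(offspring[:3])      # archive remembers where search has been
    return archive

archive = novelty_search()
spread = max(math.dist(a, b) for a in archive for b in archive)
print(f"behaviors archived: {len(archive)}, max spread: {spread:.2f}")
```

Despite having no objective at all, the archive steadily spreads out across the behavior space, which is the whole point of the divergent-search argument.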
I think in practice any predictive machine, biological or not, is constrained by its architectural biases, finite data, finite computational resources for modelling, finite limited sense modalities, finite limited perspectives as an agent in a bigger complex system, etc. So every biological and nonbiological information processing system lives in its evolutionary niche, never fully universal. But generality is a spectrum, and it can be evaluated in a lot of possible ways. The space of all possible intelligences is so fascinating to me in general :D

1. Solve intelligence 2. Use that to solve everything else [https://www.youtube.com/watch?v=hHooQmmzG4k&feature=youtu.be](https://www.youtube.com/watch?v=hHooQmmzG4k&feature=youtu.be)

The problem with way too alien patterns would be that the human brain has no way to recognize them, because there is no grounding in the human patterns that the brain is used to recognizing.

Does evolution have an objective? Or is it primarily more open-ended? How does all that diversity of life and beyond emerge from evolution? [https://youtu.be/GdthPZwU1Co?si=yNLJPbdRoS55P9FB](https://youtu.be/GdthPZwU1Co?si=yNLJPbdRoS55P9FB)

" So according to Pedro Domingos' The Master Algorithm book, in the AI field you have these camps:
- Connectionists like to mimic the brain (neuroscience): artificial neural networks, deep learning, spiking neural networks, liquid neural networks, neuromorphic computing, Hodgkin-Huxley model,...
- Symbolists like symbol manipulation: decision trees, random decision forests, production rule systems, inductive logic programming,...
- Bayesians like uncertainty reduction based on probability theory (statisticians): Bayes classifier, probabilistic graphical models, hidden Markov models, active inference,...
- Evolutionaries like evolution (biologists): genetic algorithms, evolutionary programming
- Analogizers like identifying similarities between situations or things (psychologists): k-nearest neighbors, support vector machines,...

Then there are various hybrids: neurosymbolic architectures (AlphaZero for chess, general program synthesis with DreamCoder), neuroevolution, etc. And technically you can also have:
- Reinforcement Learners like learning from reinforcement signals: reinforcement learning (most game AIs use it, like AlphaZero for chess; LLMs like ChatGPT are starting to use it more,...)
- Causal inferencers like to build a causal model and can thereby make inferences using causality rather than just correlation: causal AI
- DivergentSearchNoveltyMaximizers love divergent search for novelty without objectives: novelty search

And you can hybridize these too, with deep reinforcement learning, novelty search combined with other objectives, etc. I love them all and want to merge them, or find completely novel approaches that we haven't found yet. :D Would you add any camps? "

" Do you think consciousness has any special computational properties? Depends on the definition and model of consciousness, but I like QRI's holistic field computation ideas. IIT argues with integrated information; maybe you truly need consciousness for the information binding problem [[2012.05208] On the Binding Problem in Artificial Neural Networks](https://arxiv.org/abs/2012.05208). Global workspace theory argues with some form of global integration of information into some workspace. Self-awareness isn't good in LLMs, as the emergent circuits are different from what the LLMs actually say (from the last Anthropic paper on the biology of LLMs), so some recursive connections might be needed (strange loop model of consciousness?).
Joscha Bach argues with consciousness being a coherence-inducing operator; maybe that's needed for reliability. Neurosymbolic people need added symbolic components for strong generalization, like in DreamCoder program synthesis, and Chollet argues that's part of the definition of consciousness. Evolutionaries need evolution, like evolutionary algorithms; maybe you could argue you can get consciousness only this way. Physicists/computational neuroscientists need differential equations, like liquid neural networks, and some might argue consciousness only arises from this. Some people need divergent novelty search without an objective, like Kenneth Stanley, and you could also connect this with consciousness. "

Evolution is the ultimate master algorithm because it led to the emergence of all other existing learning algorithms. What is the most general computational substrate and architecture? What are we missing from the equations when modelling evolution in AI?

The key to AGI isn't just AI that can generalize, but also AI that knows when it is useful to generalize and what kind of generalization to use. You can do a lot of different generalizations, but only some are useful.

Artificial general intelligence. Most of the mainstream sees it as AI that has cognitive abilities like a human 😄 But a lot of AI researchers see it simply as AI that is able to generalize better, regardless of how well a human is able to generalize and what other cognitive abilities a human has, which I think makes more sense given the name. The first one I would rather call artificial human intelligence. And instead of "artificial" I would use machine/digital/silicon, because in my opinion it is not intelligence that is "artificial", but intelligence on a different substrate with different and variously similar mechanisms.

What are we missing when modelling intelligence? Is the standard model of particle physics (ideally with general relativity somehow) the true master algorithm, since evolution emerges from it, and all the intelligence we see in biology emerges from evolution?
But it's impossible to put that into code the way we code approximations of evolution and still have enough computational resources.

What is the brain doing to process and integrate all the information from all the diverse modalities into a unified world model and then abstract over it in latent space reasoning? The brain eats so little energy. Think of how much better you could make it.

Intelligence is curious generalization power leading to adaptivity. This general qualia computer can fit so many Bayesian beliefs in it.

What is the next modality in AI after text, image, sound, video, latent thoughts? Will AI ever think in modalities completely ineffable to human modalities?

Is evolution intelligence? And I don't think this is a creationist argument. Creationism, on the contrary, often denies the existence of evolution altogether, and when it doesn't deny it completely, it tends to degrade it instead. I think evolution is a law in the natural sciences that has its own equation, just like in physics and other natural sciences we have other equations. I think evolution is currently the most intelligent algorithm in existence, because it emergently created human general intelligence: us. And we are also physical systems that can be described by equations, including, I think, our intelligence. And I think evolution, like all other laws in the natural sciences, is emergent from the laws of fundamental physics, such as the Standard Model of particle physics, into which our models of the universe still need to cram relativity. [https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197](https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197) I strongly agree with this section on evolutionary algorithms, 36:47. Kenneth Stanley, with whom I agree a lot and who worked at OpenAI, argues a lot that the algorithm behind open-ended divergent evolution created all that beautiful, creative, interesting diversity of novel organisms. Through that, evolution also creates all collective intelligences, such as ants and humans, and essentially, indirectly through us, even the technologies we see now.
Technically, one could also argue that humans with AIs are also a form of collective intelligence. Nothing fundamentally more creative exists yet. There probably isn't a single objective in evolution, as much of the AI field sees it; instead, evolution learns lots of different emergent objectives in a gigantic space of all possible objectives, through probably something like divergent search that heavily uses mutation and selection. And in practice, systems like AlphaEvolve show that hybridly combining gradient-based methods with evolutionary algorithms is currently one of the best methodologies for novel discoveries that we have. I think that, for example, more symbolic methods should also be hybridly crammed into it.

What is curiosity? An intrinsic reward mechanism that drives agents to maximize information gain, typically by seeking out situations with high but learnable entropy that can later be compressed or learned. https://fxtwitter.com/XPhyxer1/status/1924178488766124346 Which is Schmidhuberian: [Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes](https://arxiv.org/abs/0812.4360)

I see the ability to generalize and to build stable but flexible specialized representations and circuits as part of intelligence. Chollet's formal definition of intelligence [[1911.01547] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547) includes generalization difficulty, and he has an equation for it (the GD part): for him, intelligence is skill-acquisition efficiency, the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, highlighting the concepts of scope, generalization difficulty, priors, and experience.
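The Schmidhuberian compression-progress view of curiosity can be sketched as an intrinsic reward equal to a predictor's learning progress. This toy is only a loose illustration of that principle, not Schmidhuber's actual formulation: the predictor, learning rate, and stream are all made-up assumptions. It shows the "boredom" dynamic: once a stream is mastered, the curiosity reward dries up.

```python
def curiosity_rewards(stream, lr=0.05):
    """Intrinsic reward = how much the predictor's error on fresh data improved.

    A mastered (fully compressed) stream yields ~zero reward, so a curious
    agent would move on to something else -- boredom as vanished progress.
    """
    pred = 0.0       # trivial predictor: a running estimate of the next value
    prev_err = None
    rewards = []
    for x in stream:
        err = (x - pred) ** 2            # error on a not-yet-seen observation
        if prev_err is not None:
            rewards.append(prev_err - err)   # learning progress = curiosity reward
        prev_err = err
        pred += lr * (x - pred)          # the predictor "compresses" the stream
    return rewards

# Hypothetical perfectly learnable stream: a constant signal.
r = curiosity_rewards([0.9] * 400)
early = sum(r[:50]) / 50   # lots of progress while the pattern is being learned
late = sum(r[-50:]) / 50   # essentially no progress once it is mastered
print(f"early reward: {early:.4f}, late reward: {late:.2e}")
```

The early rewards are large and the late ones vanish, matching the idea that interestingness is the first derivative of compression, not compression itself.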
General theory of generalization: the study of how different systems generalize in different ways. Different AI systems generalize in different ways. Different biological systems generalize in different ways. Different humans generalize in different ways.

“ Is evolution intelligence? I think evolution is a law in the natural sciences that has its own equation, just like in physics and other natural sciences we have other equations. I think evolution is currently the most intelligent algorithm that exists, because it has emergently created human general intelligence: us. And we are also physical systems that can be described by equations, including, I think, our intelligence. And I think evolution, like all other laws in the natural sciences, is emergent from the laws of fundamental physics, such as the standard model of particle physics, into which general relativity is still not integrated in our model of the universe. https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197 I agree a lot with this section on evolutionary algorithms, 36:47. Kenneth Stanley, with whom I agree a lot, who was at OpenAI, argues a lot that the algorithm behind open-ended divergent evolution created all this beautiful, creative, interesting diversity of novel organisms that we see everywhere. Thus, evolution also creates all collective intelligences such as ants and humans, and essentially, indirectly through us, the AI technologies that we see everywhere now. Technically, one could also argue that people with AIs are also a form of collective intelligence together. There is nothing more fundamentally creative yet. There probably isn't a single objective in evolution as many AI people see it, but instead evolution learns many different emergent objectives in a gigantic space of all possible objectives through something like guided divergent search that uses mutation and selection a lot.
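The mutation-and-selection mechanics just mentioned can be sketched with a deliberately simple, single-objective toy GA on the classic OneMax problem (maximize the number of 1-bits). Note the contrast with the argument above: real open-ended evolution has no such fixed objective, so this only illustrates the inner loop, not the open-endedness.

```python
import random

def evolve_onemax(length=40, pop_size=20, generations=100, seed=0):
    """Minimal mutate-and-select loop (a (1+lambda)-style toy GA).

    Fitness is just the count of 1-bits; selection keeps the best
    individual, mutation flips each bit with probability 1/length.
    """
    rng = random.Random(seed)
    fitness = sum  # number of 1-bits in a genome
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(pop, key=fitness)                   # selection
        pop = [[bit ^ (rng.random() < 1 / length) for bit in parent]
               for _ in range(pop_size)]                 # mutation
        pop.append(parent)                               # elitism: never lose the best
    return max(fitness(individual) for individual in pop)

best = evolve_onemax()
print(best)  # typically reaches, or gets very close to, the optimum of 40
```

Swapping the fitness function for a novelty score (as in the novelty search sketch earlier) is exactly the move from objective-driven to divergent search.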
And in practice, systems like AlphaEvolve show that hybridly combining gradient-based methods with evolutionary algorithms is currently one of the best methodologies for novel discoveries that we have. I think that even more symbolic methods should be hybridly stuffed into it, on a more fundamental level. ”

Artificial general intelligence, AGI. Most of the mainstream sees it as AI that has human-like cognitive abilities. I prefer to see it as AI that is able to generalize better, regardless of how well a human is able to generalize and what other cognitive abilities a human has, which I think makes more sense given the name. I would rather call the first one artificial human intelligence. And instead of "artificial" I would use machine/digital/silicon, because in my opinion it is not intelligence that is "artificial", but intelligence that is on a different substrate with different and variously similar mechanisms.
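The gradient+evolution hybrid mentioned above can be sketched as a toy Lamarckian scheme; this is nothing like AlphaEvolve's actual LLM-driven code evolution, just a made-up minimal illustration on a multimodal 1-D function, with all parameters chosen arbitrarily. Evolution hops between basins of attraction; gradient descent polishes within them.

```python
import math
import random

def f(x):
    """1-D Rastrigin-style landscape: many local minima, global minimum at 0."""
    return x * x + 3 * (1 - math.cos(2 * math.pi * x))

def df(x):
    """Derivative of f, used by the gradient-based inner loop."""
    return 2 * x + 6 * math.pi * math.sin(2 * math.pi * x)

def refine(x, lr=0.01, steps=50):
    """Gradient descent: great at polishing within a basin, stuck across basins."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

def hybrid_evolve(pop_size=12, generations=25, seed=0):
    """Evolution proposes jumps between basins; gradients refine each candidate."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [refine(x) for x in pop]        # gradient refinement (Lamarckian step)
        pop.sort(key=f)
        survivors = pop[: pop_size // 3]      # selection on the refined candidates
        mutants = [x + rng.gauss(0, 1.0) for x in survivors * 2]  # basin-hopping mutation
        pop = survivors + mutants
    return min(pop, key=f)

best = hybrid_evolve()
print(f"best x: {best:.4f}, f(best): {f(best):.4f}")
```

Neither component alone does well here: pure gradient descent gets trapped in whichever local basin it starts in, while mutation alone never settles precisely into a minimum; together they reliably land near the global optimum.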