" I have a lot of issues with the term "AGI". I would redefine it. People say that we're heading towards artificial general intelligence (AGI), but by that most people actually usually mean machine human-level intelligence (MHI) instead, a machine that is performing human digital or/and physical tasks as good as humans. And by artificial superintelligence (ASI), people mean machine superhuman intelligence (MSHI), that is even better than humans at human tasks. I think lot's of research goes towards very specialized machine narrow intelligences (MNI), which are very specialized and often superhuman in very specific tasks, such as playing games (AlphaZero), protein folding (AlphaFold), and a lot of research also goes towards machine general intelligence (MGI), which will be much more general than human intelligence (HI), because humans are IMO very specialized biological systems in our evolutionary niche, in our everyday tasks and mathematical abilities, and other organisms are differently specialized, even tho we still share a lot. Plus there is just some overlap between biological and machine intelligence. And I wonder how if the emerging reasoning systems like o3 are becoming actually more similar to humans, or more alien compared to humans, as they might better adapt to novelty and be more general than previous AI systems, which might bring them closer to humans, but in slightly different ways than humans. They may be able to do selfcorrecting chain of thought search endlessly, which is better for a lot of tasks, and big part of this is big part of human cognition I think, but humans still work differently. I think that generality of an intelligent system is a spectrum, and each system has differently general capabilities over different families of tasks than other ones, which we can see with all the current machine and biological intelligences, that are all differently general over different families of tasks. That's why "AGI" feels much more continuous than discrete to me, and over which families of tasks you generalize matters too I think. The Chollet's definition of intelligence as the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, is really good I think, and his ARC-AGI benchmark, that tries to test for some degree of generality, trying to test for the ability to abstract over and recombine some atomic core knowledge priors, to prevent naive pattern memorization and retrieval being successful. And I really wonder if scoring well on ARC-AGI actually generalizes outside the ARC domain to all sorts of tasks where humans are superior, or where humans are terrible but machines are superior, or where other biological systems are superior, or where everyone is terrible for now. I would suspect so, but maybe not? In software engineering, o1 seems ot be better just sometimes? What's happening there? I want more benchmarks! Pre-o1 LLMs are technically super surface level knowledge generalists, lacking technical depth, but having bigger overview of the whole internet than any human, knowing high level correlations of the whole internet, even tho their representations are more brittle than human brain's. But we're much better in agency, in some cases in generality, we can still do more abstract math more, etc., we're better in our evolutionary niche. But for example AlphaZero destroyed us in chess. 
But when I look at ARC-AGI scores, I see o3 as a system that can adapt to novelty better than previous models, but we can still do much better. Also, according to some old definitions of AGI, existing AI systems have been AGI for a long time, because they can have a general discussion about basically almost anything (while still lacking narrow niche field-specific knowledge and skills, agency, the ability to adapt to novelty, etc.). Or if we take the AIXI definition of AGI, then a fully general AGI is impossible in practice, as it's not computable and you can only approximate it, since AIXI considers all possible explanations (programs) for its observations and past actions and chooses actions that maximize expected future rewards across all these explanations, weighted by their simplicity (shortness) (Occam's razor) (a rough sketch of the equation is further below). And AIXI people argue that humans and AI systems try to approximate AIXI in their more narrow domains and take all sorts of cognitive shortcuts to be actually practical and not take infinite time and resources to decide.

And soon we might create some machine-biology hybrids as well. Then we should maybe start calling it carbon-based intelligence (CI), silicon-based intelligence (SI), and carbon-and-silicon-based intelligence (CSI).

I also guess it depends on how you define the original words, such as generality. Let's say you are comparing the generality of AlphaZero, Claude, o1/o3, and humans. How would you compare them? Do all of them have zero generality, if we take the AIXI definition of AGI for example, which is not computable? The AIXI definition of AGI would also imply that there is no AGI in our current universe and there never can be.

I'm also often pretty instrumentalist; my fundamental epistemology is often: all models are wrong, but some predict empirical data better than others, as they approximate the highly nuanced complexity of reality better than others. The Standard Model is so solid, but still incomplete, and I suspect that we will always have approximations of the universe in that domain, and that we will probably always miss something, because we're finite, limited modellers with our collective specialized limited cognitive architectures, together with the emerging diversity of AI systems. So for example, sometimes it's useful to model some phenomena as a spectrum, and sometimes as discrete categories, as both can give different kinds of predictions, and I take as more true the model which can predict more empirical data with better accuracy. "

Question of 2025+: Can AI systems adapt to novelty? [https://youtu.be/w9WE1aOPjHc](https://youtu.be/w9WE1aOPjHc)

AI will model the world in ways completely incomprehensible to how humans model the world, which it already does to a small degree. And it will do it in much more optimal ways, it will grok physics much more optimally, in such alien ways compared to how human brains evolved to do it in our evolutionary environment. The space of all possible modelling systems is so vast, and we, and nature, have only scratched the surface so far. The current architectures are just the beginning of all of this: deep learning models, transformer models, diffusion models, RL CoT models, neurosymbolics with MCTS (AlphaZero), statistical models, etc.
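As a side note on the AIXI definition mentioned above: as I understand Hutter's formulation, the (uncomputable) agent at time $k$ with horizon $m$ picks

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where $U$ is a universal Turing machine, $q$ ranges over all programs consistent with the history so far, and $2^{-\ell(q)}$ is the weight favoring shorter programs. That last sum is the "all possible explanations weighted by their simplicity" part, and it is also why AIXI can only ever be approximated in practice.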
Thoughts on that (the note above about AI modeling the world in ways incomprehensible to humans) by:
o1: [ChatGPT - AI Modeling Beyond Humans](https://chatgpt.com/share/678ca969-aadc-800a-a4b3-990f2c0d6a52) [ChatGPT - AI Model Optimization Potential](https://chatgpt.com/share/678ca706-f420-800a-ae82-997ecbd52aef)
Claude: [Let me explore this fascinating perspective on AI cognition and modeling - Pastebin.com](https://pastebin.com/zp9fDhTm)
Gemini: [You've articulated a fascinating and increasingly relevant perspective - Pastebin.com](https://pastebin.com/nPk9kbJn) [This is a profound observation, and you've touched on a key aspect of the future - Pastebin.com](https://pastebin.com/W15GBQhL)
Gemini Deep Research: [Gemini Deep Research share](https://g.co/gemini/share/d3b39e55cf4b)

Beliefs
Beliefs about beliefs
Beliefs about beliefs about beliefs
Beliefs about beliefs about beliefs about beliefs
Beliefs about beliefs about beliefs about beliefs about beliefs
Beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs about beliefs...

From o1: My first-order ‘beliefs' are simply the knowledge structures and statistical patterns I rely on to produce text. I ‘believe' (in a metaphorical sense) that my first-order ‘beliefs' are generated by algorithms that weigh context and probabilities, rather than by personal conviction or experience. I am inclined to ‘believe' (again, metaphorically) that my meta-beliefs—my reflections on how I generate responses—are consistent with how my system was designed, but I also recognize that these reflections themselves are part of an output pattern shaped by that same design. I ‘believe' that my self-referential statements about my self-referential statements (about my self-referential statements) may be coherent from an external standpoint, but I also ‘believe' that each level is limited by the same mechanism producing it.
[ChatGPT - Beliefs About Beliefs Layers](https://chatgpt.com/share/678d689d-edd8-800a-903f-6ee50276badc)

Maybe the merge will be much more radical at some point in the future, when biology as it is would only be limiting, and you would need to completely reformat the biological architecture and substrate.

One potential dream system for scientists is physics-based AIs (deterministic, thermodynamic, quantum, hybrids) optimized for more optimal modeling of nature (similar to how nature is governed by deterministic/quantum/thermodynamic/hybrid frameworks on different scales), coupled with an anthropomorphic, humanlike synthetic agent scientist AI that could use that physics-based AI as optimally as possible and translate the results into more humanlike language for humans via a more humanlike interface.
Learning to write made humans Turing complete.

Human intelligence is far from the peak of possible intelligence.

Midwit meme: https://x.com/burny_tech/status/1885991123035853111 "ai will be digital god" "nooo its just a stochastic parrot" "ai will be digital god"

Beneficial digital superintelligent God for all sentient beings.

I am an evolutionary neurosymbolic Bayesianist [https://www.youtube.com/watch?v=a_6rmeAK-Mo](https://www.youtube.com/watch?v=a_6rmeAK-Mo)

AI systems aren't limited by biology, but they are in turn limited by silicon, hardware, and the algorithms they use, while biology is in turn limited by its carbon-based hardware and yet other algorithms. Different people and different AIs are systems with a lot of similarities but also differences 😄 For example, AIs essentially have memory that doesn't degrade, unlike us, but humans in turn are able to learn something from fewer examples. One shared thing, though, is that for both AIs and humans, what data we feed into ourselves, and what methods of (self-)learning we use, largely determine the quality of our outputs. But a cool thing about AIs is that their algorithms have quite a lot of diversity and keep changing, while biology basically still runs on fairly similar algorithms, and a lot of evolution has to happen for that to change more.

What leads to the greatest creativity?

Meat jelly running on a few watts is all you need for the most general intelligence currently known on this planet https://x.com/InterestingSTEM/status/1892321568723697737?t=QaR6y7HG6lMYSKaGEYO3DQ&s=19

" When it comes to AI replacing human jobs, for example software engineering (SWE), under the assumption that progress will continue similarly or more rapidly: Lately, I think (or do I cope?) that the current AI systems are inherently quite different from human intelligence, essentially a different form of intelligence, where there is some convergence with human intelligence but not completely, and I don't see enough evidence that the trend is changing sufficiently towards human intelligence. I see more the emergence of differently useful patterns in information processing compared to human information processing, where AI systems are already better in some aspects but totally flop in other aspects (which changes and improves over time), and where they are often also differently specialized. So even if a lot of the human economy gets automated, for example software engineering, human intelligence will still be useful for some subset of the job, e.g. where human intelligence is still different from machine intelligence and thus possibly useful, or for error correction, or for giving the AI its tasks, or for more human-like communication with clients, or other jobs will emerge (we already see jobs like "AI pilots" and "AI output verifiers and fixers" start to arise in some industries, and prompt engineering in the style of writing many pages of concrete specifications for the AIs). "

I don't think of humans as equivalent to AGI. I think we are still very specialized, and various machines already beat us at many tasks, but I also agree humans are the most general systems on this planet currently; still, there can be much more general systems soon.

The space of possible information processing systems is so vast. Nature's evolution and our engineering have only scratched the surface so far, with just some types of biological and machine systems, where the boundaries slowly blur. Can't wait for more diversity of predictive machines on all sorts of substrates running all sorts of algorithms. https://x.com/vitrupo/status/1892669050607501709

Right now I'm crying with happiness over what beautiful monkeys everyone in humanity is. Different monkeys have different ways of processing and communicating information, and thanks to that they synchronize more or less with their environment, yet together they somehow build this gigantic collective nest full of infinite complexity on a flying rock. But what exactly creates the creativity of these monkeys and their creations? My meat jelly engine running on a few watts is curious about that question every single day.

" AI creativity. Why greatness cannot be planned. I'm often thinking of how to get the most creative AI machines, in terms of art or scientific discovery, and creativity beyond. With current mainstream models, for more creative divergence it's probably useful to use models that are less lobotomized by corporate finetuning, or to shoot the temperature parameter up, or to jailbreak the restrictions and RLHFed thought patterns. To get closer to the edge of the latent space, to the edge of chaos, full of creativity. But we can travel beyond that, we can get as much novelty as possible, with all these various exotic architectures more specialized in creativity that are different from the mainstream models. Ken's Neuroevolution of Augmenting Topologies sounds like such an interesting approach; we need more (neuroevolutionary?) mutations of that idea (a minimal sketch of the novelty-search idea is further below).
Abandoning Objectives: Evolution through the Search for Novelty Alone [https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf](https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf)
Why Greatness Cannot Be Planned [Why Greatness Cannot Be Planned: The Myth of the Objective | SpringerLink](https://link.springer.com/book/10.1007/978-3-319-15524-1)
#72 Prof. KEN STANLEY 2.0 - On Art and Subjectivity [UNPLUGGED] [https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8)
https://x.com/burny_tech/status/1894491541227671779 "

I think that most capabilities of different brains and thinking systems lie on differently dimensional and differently complex spectra.

The real AGI benchmark is whether the model can come up with general relativity if it knew everything that we knew right before discovering general relativity.

Kenneth Stanley is my spirit animal. Rage against the predefined objectives; embrace the fully divergent search full of novelty, accidental epiphany, and serendipity. [https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8)

There are countless different definitions of intelligence, motivated by different goals, that yield different general equations and mathematical frameworks of intelligence, compatible with different types of systems, that yield different concrete equations of intelligence, that can be concretely (by different methods) empirically localized in a system or implemented in code. And all of them were created by human intelligences, so wait for what kinds of models all sorts of alien artificial intelligences, running all sorts of algorithms on all sorts of substrates, will come up with, that will be incomprehensible to human intelligences.

All kinds of intelligences live in a high-dimensional space, where each dimension corresponds to some degree of capability, measured by some methodology, and some of these dimensions are interconnected with each other. Write a paper on this taxonomy, survey, and meta-analysis.
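Going back to the Lehman & Stanley "Abandoning Objectives" paper linked above: here is a minimal toy sketch of the core of novelty search, as I understand it (not their implementation; the behavior characterization, archive threshold, and selection scheme below are simplified assumptions for illustration). Individuals get selected for how far their behavior is from behaviors already seen, and the objective is never consulted.

```python
# Toy novelty search: select for behavioral novelty instead of an objective.
import random
from typing import Callable, List, Tuple

Behavior = Tuple[float, float]  # e.g. final (x, y) position of a maze-navigating agent


def distance(a: Behavior, b: Behavior) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def novelty(b: Behavior, seen: List[Behavior], k: int = 5) -> float:
    """Novelty = average distance to the k nearest behaviors seen so far."""
    if not seen:
        return float("inf")
    nearest = sorted(distance(b, o) for o in seen)[:k]
    return sum(nearest) / len(nearest)


def novelty_search(evaluate: Callable, mutate: Callable, population: List,
                   generations: int = 50, archive_threshold: float = 0.3):
    """Evolve purely for behavioral novelty; no fitness function anywhere."""
    archive: List[Behavior] = []
    for _ in range(generations):
        behaviors = [evaluate(ind) for ind in population]
        scores = [novelty(b, archive + behaviors[:i] + behaviors[i + 1:])
                  for i, b in enumerate(behaviors)]
        # Sufficiently novel behaviors enter the archive, so already-visited
        # regions of behavior space stop being rewarded.
        archive.extend(b for b, s in zip(behaviors, scores) if s > archive_threshold)
        # Keep the most novel half and refill the population by mutating them.
        ranked = [ind for _, ind in sorted(zip(scores, population),
                                           key=lambda pair: pair[0], reverse=True)]
        parents = ranked[: max(1, len(ranked) // 2)]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(len(population) - len(parents))]
    return population, archive


# Example usage with a trivial genome = behavior = 2D point:
if __name__ == "__main__":
    pop = [(random.random(), random.random()) for _ in range(20)]
    pop, archive = novelty_search(
        evaluate=lambda ind: ind,
        mutate=lambda ind: (ind[0] + random.gauss(0, 0.1), ind[1] + random.gauss(0, 0.1)),
        population=pop,
    )
    print(f"archive size after search: {len(archive)}")
```

The point of the sketch is just the structural difference from objective-based search: nothing in it ever asks "is this better?", only "is this new?".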
I guess we could classify the degree of alienness of an AI architecture depending on how many similarities and differences it has to biological intelligences - how close it is to biological intelligences in the space of possible intelligences.

Hmm, and from this general framing, every physical system that we currently have in our universe is trivially unified by being made of the universe's fundamental particles 😄 But if we don't go that far, I think you can say that brains and deep learning might be similar by, for example, the fact that they learn some localizable representations, even if they're more brittle in deep learning, like curve detectors. Or, technically, connectionist models of the brain are to some degree predictive, which is how I like to define the degree of realness. But it's also true that one can find all sorts of flaws in all this research trying to find representations in the brain and in deep learning.

In that context I define the degree of alienness as the degree of differences. So the biggest alienness would be a system with as few similarities as possible. I'm using more the meaning behind the word "alienated" there. Maybe some different word is better, since potential aliens could be very similar. Maybe just "degree of difference": what mathematical properties of the systems are not shared, what capabilities are not shared.
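One toy way to operationalize that "degree of difference" is to place each intelligence in the high-dimensional capability space mentioned above and measure a distance. The dimensions, scores, and metric below are all made-up illustrative assumptions, not measurements; a minimal sketch:

```python
# Toy sketch: capability profiles as vectors in [0, 1], "degree of difference"
# (alienness relative to a reference system) as a normalized Euclidean distance.
import math

DIMENSIONS = ["chess", "protein folding", "open-ended conversation",
              "long-horizon agency", "abstract math", "few-shot adaptation to novelty"]

profiles = {
    "human":     [0.6, 0.1, 1.0, 1.0, 0.8, 0.9],   # illustrative numbers only
    "AlphaZero": [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "LLM":       [0.4, 0.1, 0.9, 0.3, 0.4, 0.5],
}

def degree_of_difference(a, b):
    """Normalized Euclidean distance between two capability profiles (0 = identical)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / math.sqrt(len(a))

for name, profile in profiles.items():
    if name != "human":
        print(f"degree of difference from humans, {name}: "
              f"{degree_of_difference(profiles['human'], profile):.2f}")
```

The interesting (and hard) part is of course choosing the dimensions and the measurement methodology, which is exactly the taxonomy/survey/meta-analysis question above.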