Questions I explore the most:
- How does the world work? How does [[everything]] work?
- What is the fundamental equation of [[intelligence]]? What are all the different types of all the possible current and future intelligent systems?
- What is the [[Theory of Everything|fundamental equation]] of the [[physics|universe]]? What are all the equations, and [[mathematics|mathematical]] structures more generally, governing [[science|reality across all scales]] in physics, and in [[Natural science|natural science]] more generally?
- What is the state of the art in [[artificial intelligence]] [[Mathematical theory of artificial intelligence|mathematical theory]] and [[AI engineering|practice]]?
- How to [[Artificial Intelligence#Crossovers Omnidisciplionarity|apply AI]] for reverse engineering the equations behind everything? How to [[Artificial Intelligence#Crossovers Omnidisciplionarity|apply AI]] for good, as ideally and as much as possible?
- How to define and build [[artificial general intelligence]] and [[superintelligence]]?
- What is the fundamental equation of the [[cognitive science|brain]]? How to upgrade [[Human intelligence amplification|human intelligence]]?
- How do [[Artificial Intelligence x Biological Intelligence|AI and biological intelligence compare?]] How can [[biological intelligence|humans]] and [[artificial intelligence|AIs]] form even greater [[collective intelligence]]?
- What is the fundamental equation of [[creativity]] in [[science]] and [[art]]? How to make machines creative beyond human limitations and comprehension, for [[science|scientific]] discovery, [[artificial intelligence x physics|physics]], [[mathematics]], [[art]], [[philosophy]]?
- How to connect all [[science|sciences]], [[Formal science|formal]] and [[Natural science|natural]]? What is the fundamental equation behind [[emergence]] and [[complexity]]? How do [[biology]] and other scientific fields emerge from [[physics]] and [[chemistry]]?
- What are all the concepts in [[mathematics]]? What are all the possible [[Foundations of mathematics|foundations]] of mathematics, with all sorts of mathematical universes, and which ones are best in which contexts?
- What is the fundamental equation of [[consciousness]]?
- How to make the world better for all?
- What is the fundamental equation of [[Future of humanity, AI, sentience, futurology, politics|building a great future for all where everyone flourishes]]? How to maximize the benefits, and minimize the disadvantages, of [[technology|technologies]] and [[Politics|political systems]]? What is and what will be the geopolitics of AI? What are the probabilities of different future scenarios?
- What are the answers to the problems in [[philosophy]]?

Understand all the mathematics of reality! Reverse engineer the fundamental mathematics of the theory of everything in physics, the fundamental mathematics of intelligence, of consciousness, of a great future for all, of emergence! AI x intelligence x physics x math x biology x healthcare x futurology! Grok all of physics: the standard model, general relativity, quantum gravity! Grok all of AI: neural, symbolic, evolutionary, self-organizing, biology-based, physics-based paradigms! Grok all of intelligence! Create the master algorithm! Create the master equation! Unify general relativity with quantum mechanics! Create the theory of everything in intelligence! Create the theory of everything in physics!
" How the LLM works: When you are learning, imagine you're playing Terraria, where you are walking around in two dimensions (in 2D), trying to get to the truth, which is located at the lowest point in the whole environment. You can take a step down to the direction of truth every time you can copy math exams better in a math valley, or even solve the examples correctly yourself without seeing the solution procedures! But beware, it may be that you think you are at the very bottom of the environment, but in fact there is an even lower valley elsewhere than the one you're currently in! This is gradient descent over parameter space and finding local minima. Copying math exams is supervised finetuning, and solving math without knowing steps and solution is reinforcement learning algorithms like GRPO. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning [[2501.12948] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948) GRPO Explained: DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models [https://www.youtube.com/watch?v=bAWV_yrqx4w](https://www.youtube.com/watch?v=bAWV_yrqx4w) But two dimensions are quite trivial, aren't they? So let's increase the dimensions, let's go 3D, Minecraft. That's a little bit more challenging! You can find points that are lowest in one direction, so-called saddle points, or the very lowest valley in both directions! But there may still be a lower valley somewhere else in the whole world though. This is increasing the number of parameters. Sometimes the structure of the valleys is more bumpy, sometimes more flat, sometimes they have some similar structures at one place, or there is a pattern all over the valleys, with different symmetries. Beautiful, isn't it? But 3D is still trivial. This is the geometry of the loss landscape. [[2105.12221] Geometry of the Loss Landscape in Overparameterized Neural Networks: Symmetries and Invariances](https://arxiv.org/abs/2105.12221) Now imagine walking around in 4D! 5D! millionD! trillionD! There you have extremely insanely complex geometry and overall valley structure, it grows with each dimension, but you still manage to go down towards the truth. You probably can't find the lowest point in so many dimensions, but you still manage to go down more and more towards the truth. You can go a billion directions up and 2 billion directions down to get closer to the truth. This stands for modern models having billions, or even trillions, of parameters. In order to be able to solve the examples, you created some structure of the truth along the way, so that you know how to solve the examples more and more accurately. You memorized something, like the number 5, you abstracted something, like numbers ending in 9. And you were folding a kind of elastic origami made of a bunch of tangled spaghetti to determine how to get to the truth, like adding the 10's first and then the 1's, which you're forming based on what you've already seen. And you can untangle those spaghetti where you have too many intertwined concepts and circuits and put those individual circuits together a little bit, but not too much, otherwise it just falls apart. This stands for learned emergent features forming circuits in attribution graphs that mechanistic interpretability attempts to reverse engineer in frontier models, such as in the Biology of LLMs paper. 
And the elastic origami stands for the spline theory of deep learning. [https://www.youtube.com/watch?v=l3O2J3LMxqI](https://www.youtube.com/watch?v=l3O2J3LMxqI)

If someone asks you for another math example, you'll run it through those spaghetti circuits. But because you didn't care about tech debt and didn't make the right circuits simple enough while still predictive, not compressive enough, even if you've come across the best possible ones you could in that trillion-dimensional space, you've often found some insufficiently general shortcut, insufficiently generalized it, insufficiently repaired it, insufficiently cleaned it, etc., so it only works sometimes, not consistently enough. But still, pretty often, you get it right! And the price of getting it right sometimes is getting it wrong fairly often. This stands for often brittle reasoning, shortcut learning, and a higher false positive rate: hallucinations.

Along the way, you'll find it interesting that, for example, teaching those spaghetti to speak our natural language is easier than you expected! And sometimes you hit total bingo and find a result that the monkeys who created you didn't figure out on their own, like new results in math, or a better strategy in chess, or a new drug. Or you help fold proteins better than other, less plastic optimization algorithms. But sometimes you're asked to create a simple function, which you should be able to do when you can do so many other things, but because the spaghetti is sometimes terribly convoluted, unstable, full of unexpected holes, poorly generalizing shortcuts, missing or misclassified facts, etc., the spaghetti sometimes melts along the way while solving a problem.

AlphaZero found new chess moves and taught them to chess grandmasters. [[2310.16410] Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero](https://arxiv.org/abs/2310.16410) AlphaEvolve found new results in mathematics. [AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms - Google DeepMind](https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/) Robin found a new drug. [Demonstrating end-to-end scientific discovery with Robin: a multi-agent system | FutureHouse](https://www.futurehouse.org/research-announcements/demonstrating-end-to-end-scientific-discovery-with-robin-a-multi-agent-system) AlphaFold folded tons of proteins. [Google DeepMind and Isomorphic Labs introduce AlphaFold 3 AI model](https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/) "

AI systems aren't exact replicas of humans like many people seem to think. They're a mix of insights from neuroscience, optimization theory, mathematics, physics, computer science, psychology, philosophy, empirical random testing, etc., into one system.
Neuroscience: connectionism
Optimization theory: gradient descent
Psychology: reasoning, reinforcement learning
Physics: diffusion
Philosophy: alignment
Control theory: reinforcement learning
Biology: evolutionary methods
Computer science: computability theory, neural Turing machines

“ So according to Pedro Domingos' The Master Algorithm book, the AI field has, to a first approximation, these camps:
- Connectionists like to mimic the brain's interconnected neurons (neuroscience): artificial neural networks, deep learning, spiking neural networks, liquid neural networks, neuromorphic computing, the Hodgkin-Huxley model,...
- Symbolists like symbol manipulation: decision trees, random decision forests, production rule systems, inductive logic programming,...
- Bayesians like uncertainty reduction based on probability theory (statisticians): Bayes classifiers, probabilistic graphical models, hidden Markov models, active inference,... Frequentists exist too, defining probability as a limit over repeated experiments instead of a subjective prior probability that is updated with new data.
- Evolutionaries like evolution (biologists): genetic algorithms, evolutionary programming
- Analogizers like identifying similarities between situations or things (psychologists): k-nearest neighbors, support vector machines,...

Then there are various hybrids: neurosymbolic architectures (AlphaZero for chess, general program synthesis with DreamCoder), neuroevolution, etc. And technically you can also have:
- Reinforcement learners, who like learning from reinforcement signals: reinforcement learning (most game AIs use it, like AlphaZero for chess; LLMs like ChatGPT are starting to use it more; robotics,...)
- Causal inferencers, who like to build a causal model and can thereby make inferences using causality rather than just correlation: causal AI
- Compressionists, who see cognition as a form of compression: autoencoders, Huffman coding, the Hutter Prize
- Divergent novelty searchers, who love divergent search for novelty without objectives: novelty search
- Self-organizers: self-organizing AI like neural cellular automata

And you can hybridize these too, with deep reinforcement learning, novelty search combined with other objectives, etc. I love them all and want to merge them, or find completely novel approaches that we haven't found yet. :D Would you add any camps? What is your idea of the ideal AI architecture? I think no AI approach fully universally steamrolls all others; each is better for different use cases. My dream for more fully general AI would be to see a system that uses a lot of these approaches in a hybrid way and picks whichever approach is most optimal for the task at hand on the fly. There is no single machine intelligence. There are tons of different paradigms of intelligence in all sorts of different contexts, more specialized or more general, in some ways similarly to the diverse ecosystem of biological intelligences.
The core could maybe, for example, have:
- a more biologically based neural engine for more adaptability: like liquid neural networks and ideas from Liquid AI with Joscha Bach, but maybe still somehow using the idea of attention that is now so relatively successful in transformers in deep learning
-- operating neurosymbolically and building (possibly also Bayesian) neurosymbolic world models in which you abstract and plan, for more interpretability, reliability, and generalization power across different types of tasks, while losing as little of the flexibility of the neural substrate as possible: like DreamCoder and other program synthesis ideas from Francois Chollet, which could also synthesize symbolic search or simple statistical programs to explain data
- trained via a combination of
-- convergent gradient descent, since that works so relatively well: like almost all of deep learning currently
-- and more biologically plausible algorithms: like maybe the forward-forward algorithm or Hebbian learning (see the sketch below this list)
-- with reinforcement learning, to incentivize more generalization from verifier signals: like AlphaZero and o3
-- and with some evolution and objectiveless divergent novelty search, to get the creativity of evolution, for open-endedness that never stops accumulating new knowledge and incentivizes exploration into the unknown and out-of-the-box breakthroughs: like evolutionary algorithms and novelty search and ideas from Kenneth Stanley

Will something similar work? I have no clue. I'm thinking about how to hybridize various systems that already work well in various contexts. I should try it more. :D Would you add any camps? What is your idea of the ideal AI architecture? I love them all and want to merge them, or find completely novel approaches that we haven't found yet. :D I'm thinking a lot lately about whether it's possible to somehow hybridize all these approaches, or if that would be too much of an amalgamation and it just wouldn't work. Time to test it. The idea is probably some combination of:
- neuro for flexibility (LLM stuff)
- symbolic for better generalization and more rigid circuits where needed (Francois Chollet ideas, like DreamCoder, MCTS, symbolic math/physics engines, a Python execution environment)
- evolutionary/novelty search for better, more creative, open-ended discovery (Kenneth Stanley ideas)
- better RL algorithms for better generalization and other things (Rich Sutton ideas)
- more biologically inspired parts of the architecture for better data efficiency and maybe adaptability and some other things (Liquid AI/neuromorphic ideas, maybe self-organizing ideas like neural cellular automata or the forward-forward algorithm or Hebbian learning, but also in conjunction with gradient descent)
- maybe some physics bias (like Hamiltonian neural networks have) ”
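Since the forward-forward algorithm and Hebbian learning keep coming up as the biologically plausible ingredient, here is a minimal sketch of the Hebbian idea, assuming only numpy: neurons that fire together wire together, using Oja's variant so the weights stay bounded. Note the contrast with gradient descent: the update is local to each synapse, with no global loss and no backpropagated error signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))  # synaptic weights

def hebbian_step(W, x, lr=0.01):
    """One Hebbian update: strengthen weights between co-active neurons.

    Uses Oja's variant, which adds a decay term so weights stay bounded,
    instead of growing without limit as in the raw Hebb rule.
    """
    y = W @ x  # postsynaptic activity
    # dW = lr * (y x^T - y^2 * W), outer-product form
    return W + lr * (np.outer(y, x) - (y**2)[:, None] * W)

# Unsupervised: just stream inputs, no labels anywhere.
for _ in range(1000):
    x = rng.normal(size=n_in)
    W = hebbian_step(W, x)
```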
Reasoning using chains of thought in language, chains of continuous thought in latent space, graphs of thoughts, chains of images, maybe soon chains of audio/video... I wonder how soon some architecture will combine it all, since humans think abstractly, in language, visually, in audio, in video. With a fully multimodal base. There is so much AI research emerging on thinking in latent space and on implementations of better memory. My prediction is that those will be the next two scalable breakthroughs in algorithmic improvement. Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach [[2502.05171] Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach](https://arxiv.org/abs/2502.05171) Titans: Learning to Memorize at Test Time [https://youtu.be/UMkCmOTX5Ow](https://youtu.be/UMkCmOTX5Ow)

Could you get some form of a model's "self-awareness" of its own circuits if you gave the information about the imperfectly reverse engineered circuits back to it as an implicit function call result? Since when we introspect, we do that by starting/"calling" the "introspecting process". Or I wonder if it's possible to somehow hardcode some form of this idea at a more architectural level. But I guess researchers trying to implement some form of metacognition have been attempting similar stuff for years. Not sure if this makes sense, but my idea was something along the lines of: do a bit of autoregression, then automatically reverse engineer circuits using attribution graphs etc., then encode these graphs into tokens that you append, then continue autoregression. And you could also maybe train on it. There are for sure tons of engineering problems in that idea, if it's possible to make it work somehow. Another issue is that the feature graphs in that Biology of LLMs paper were labeled manually iirc, so you would have to automate that; maybe using an LLM could work to at least some degree. And it's costly and slow.

New Sutton interview about his new paper on the superiority of RL [https://www.youtube.com/watch?v=dhfJfQ5NueM](https://www.youtube.com/watch?v=dhfJfQ5NueM) [https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf](https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf) How about more open-ended evolutionary/divergent novelty search in the space of reward functions?

I want to see more physics in mechanistic interpretability, which reverse engineers the learned emergent circuits in neural networks. What is the physics of the formation, self-organization, and activation (dynamics) of all these features and circuits, in learning and inference? [On the Biology of a Large Language Model](https://transformer-circuits.pub/2025/attribution-graphs/biology.html) I wanna see more mechanistic interpretability for models doing math.
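A rough sketch of the circuits-as-tokens idea above, just to make the loop explicit. Everything here is hypothetical: `model.generate`, `extract_graph`, and `graph_to_tokens` stand in for a decoding loop, an automated attribution-graph extractor, and some serialization of such a graph into vocabulary tokens; none of these exist as real APIs today.

```python
def introspective_generate(model, prompt_tokens, extract_graph,
                           graph_to_tokens, n_chunks=4, chunk_len=64):
    """Alternate between plain autoregression and appending a serialized
    description of the model's own (approximate) circuits.

    extract_graph(model, tokens) -> attribution graph for the last chunk
    graph_to_tokens(graph)       -> token encoding of that graph
    Both are placeholders for machinery that mostly doesn't exist yet:
    attribution graphs currently need manual feature labeling.
    """
    tokens = list(prompt_tokens)
    for _ in range(n_chunks):
        # 1. do a bit of ordinary autoregression
        tokens += model.generate(tokens, max_new_tokens=chunk_len)
        # 2. reverse engineer circuits active on that chunk (slow, costly)
        graph = extract_graph(model, tokens)
        # 3. feed the introspection back in as extra context, like an
        #    implicit function call result, then continue generating
        tokens += graph_to_tokens(graph)
    return tokens
```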
" Is AI self-improving? I think there are different types of self-improvement that have weaker and stronger versions. An AI (agent) running on GPUs producing training data to train its own weights running on different GPUs is technically a form of self-improvement. An AI (agent) used to optimize Nvidia kernels is technically a form of self-improvement. An AI (agent) used to optimize an RL reward function is technically a form of self-improvement. An AI (agent) used to optimize a hardware configuration is technically a form of self-improvement. An AI (agent) used to optimize some parts of its architecture (or possibly the whole of it) is technically a form of self-improvement. An AI (agent) doing AI research from brainstorming to testing is technically a form of self-improvement. Neural architecture search is technically a form of self-improvement. The metalearning subfield of AI is technically a form of self-improvement. But all of these forms of self-improvement are differently capable right now: some forms are used in practice a lot, and some barely work at all yet. Maybe you can see all these forms of self-improvement as a continuous spectrum that evolves over time with some semidiscrete phase shifts in capabilities. "

" What's the next big thing in AI? I think the next big thing in AI will be either neurosymbolic breakthroughs combining matrix multiplications with symbolic programs, or physics-based AI that uses differential equations. Or a combination of all of these. Nature and the universe have differential equations everywhere, in both physics and computational neuroscience. Maybe differential equations are a relatively more adaptive type of math, as results in AI start to imply, and that's why they're everywhere in nature and the universe! For example, liquid neural networks (LNNs) have differential equations in them as part of the architecture, where differential equation solvers are used, not just matrix multiplications. "The primary benefit LNNs offer is that they continue adapting to new stimuli after training. Additionally, LNNs are robust in noisy conditions and are smaller and more interpretable than their conventional counterparts." Liquid AI ( @LiquidAI_ ) with Joscha Bach ( @Plinz ) is building liquid foundation models based on these liquid neural networks and is destroying some benchmarks! God's programming language is differential equations. Maybe it will be the programming language of artificial general superintelligence too! [Liquid Neural Nets (LNNs). A deep dive into Liquid Neural… | by Jake Hession | Medium](https://medium.com/@hession520/liquid-neural-nets-lnns-32ce1bfb045a) [[2006.04439] Liquid Time-constant Networks](https://arxiv.org/abs/2006.04439) [From Liquid Neural Networks to Liquid Foundation Models | Liquid AI](https://www.liquid.ai/research/liquid-neural-networks-research) "
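A minimal sketch of what "differential equations in the architecture" means for a liquid time-constant cell, assuming only numpy. This is my loose simplification of the LTC dynamics from the paper linked above, integrated with a plain Euler solver, not the exact equations or the fused solver Hasani et al. use.

```python
import numpy as np

def ltc_step(h, x, params, dt=0.05):
    """One Euler step of a (simplified) liquid time-constant cell.

    The hidden state h follows an ODE whose effective time constant is
    modulated by the input, which is roughly what lets LNNs keep
    adapting to new stimuli after training:
        dh/dt = -h / tau + f(W x + U h + b) * (A - h)
    """
    W, U, b, A, tau = params
    f = np.tanh(W @ x + U @ h + b)   # input- and state-dependent gate
    dhdt = -h / tau + f * (A - h)    # liquid time-constant dynamics
    return h + dt * dhdt             # explicit Euler ODE solver step

# Toy dimensions: 4 inputs, 3 hidden units, unrolled over an input stream.
rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 4)), rng.normal(size=(3, 3)),
          np.zeros(3), np.ones(3), np.full(3, 2.0))
h = np.zeros(3)
for t in range(100):
    h = ltc_step(h, rng.normal(size=4), params)
```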
https://x.com/burny_tech/status/1903817268514971742 2024Q3: "Reasoning" will probably need non-neural search, like MuZero. 2024Q4: Oh… apparently you can just do thinking in the context window and it just *learns* to backtrack and so on? Huh. 2025Q1: Memory will probably need test-time backward passes, like AlphaProof. 2025Q2: Test-time adaptation goes mainstream? Or more neurosymbolic architectures? Neurally guided program synthesis? Combining with knowledge graphs? Generalizable world models? [https://www.youtube.com/watch?v=w9WE1aOPjHc](https://www.youtube.com/watch?v=w9WE1aOPjHc) [https://www.youtube.com/watch?v=mfbRHhOCgzs](https://www.youtube.com/watch?v=mfbRHhOCgzs) Davidad Bitter lessoned me [https://fxtwitter.com/davidad/status/1903834443225190721](https://fxtwitter.com/davidad/status/1903834443225190721) Will scaling inference-time training be the next bitter lesson?

The future is multiagent reinforcement learning [[2410.20424] AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions](https://arxiv.org/abs/2410.20424)

Current AI models have some nonzero degree of out-of-distribution generalization capability, allowing for a nonzero degree of novel stuff that isn't just merging and recombining memorized patterns. Reinforcement learning is currently the best driver of out-of-distribution generalization. But cracking stronger out-of-distribution generalization that is reliable in general is still the unsolved holy grail of AI. Extremely high quality data and the best reinforcement learning setups are currently the biggest moats. That's why Google started winning. They have the best history and access to both.

Neurosymbolic methods that connect trained LLMs doing math with grounding in Lean are amazing https://www.youtube.com/watch?v=vhXDKif9mPU [[Artificial Intelligence x Mathematics]]

And I also wonder if it's better to frame each type of representation as having different advantages and disadvantages: both unified factored representations and entangled representations in superposition. Could a major opportunity to improve representation in deep learning be hiding in plain sight? Check out our new position paper: Questioning Representational Optimism in Deep Learning. I wonder if a differently set up deep learning architecture, training algorithm, and pipeline could get to similarly beautiful representations. https://fxtwitter.com/NickEMoran/status/1924888905523900892?t=AH_UBS0KbzFHD5amvp7JjQ&s=19 https://fxtwitter.com/kenneth0stanley/status/1924650134299939082?t=3WQ9qlaxJ_fuueRl57UE8A&s=19 Superposition yielding robust neural scaling [[2505.10465] Superposition Yields Robust Neural Scaling](https://arxiv.org/abs/2505.10465) Maybe these software architectures reflect our cognition. I have a feeling that there is some sweet spot that maximizes the advantages and minimizes the disadvantages of both unified factored representations and entangled representations in superposition, to get more robust generalizing circuits that could be studied using methods from mechanistic interpretability.

Our civilization will map every mathematical property of the universe with the help of AI. It's absolutely fascinating that you can take any physical system, like the universe, Earth, a biological system, the brain, a social system, an AI system, etc., and throw so much existing applied [[mathematics]] at it, and have a chance of getting some useful predictive insight!

But I wish the latent space could be steered more reliably, more symbolically. I still believe in neurosymbolic AI. More structure is still needed, but structure that doesn't kill the "unstructured continuous freedom", or whatever to call it. For example, sparse autoencoder steering of features is fascinating, but it can still break so many other things. But gpt-4o image generation is amazing, still a big relative step forward in complex coherence. But people love the flexibility of a NN made out of straws. It's like building a castle made of straws on water.
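A minimal sketch of the sparse autoencoder steering idea mentioned above, in PyTorch, with `acts` as a random stand-in for real residual-stream activations: learn an overcomplete dictionary of features under an L1 sparsity penalty, then boost one feature and decode. The feature index is hypothetical; finding a meaningful one is the hard part.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete dictionary of features over model activations."""
    def __init__(self, d_model=512, d_feats=4096):
        super().__init__()
        self.enc = nn.Linear(d_model, d_feats)
        self.dec = nn.Linear(d_feats, d_model)

    def forward(self, acts):
        feats = torch.relu(self.enc(acts))  # sparse feature coefficients
        return self.dec(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(256, 512)  # stand-in for real activations

for _ in range(100):
    recon, feats = sae(acts)
    # reconstruction loss + L1 penalty pushing most features to zero
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Steering": boost one learned feature, decode, and (in a real setup)
# patch the result back into the residual stream; this is the step that
# can break so many other things.
with torch.no_grad():
    _, feats = sae(acts)
    feats[:, 123] += 5.0  # hypothetical feature index
    steered_acts = sae.dec(feats)
```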
Personally, of the millions of applications where AI is currently being used, I'm probably most interested in how it helps us understand what intelligence is, or how it helps in healthcare, or how it helps find new results in science, biology, physics, math, what creativity is, creating intelligence and creativity, technology and science, etc. And my goal right now is to get as deep as possible into the investigation of physics using AI, or into the investigation of AI using physics. For example [https://www.youtube.com/watch?v=XRL56YCfKtA](https://www.youtube.com/watch?v=XRL56YCfKtA) Or AI for math. And I am also interested in whether we can create a system that has experience/consciousness like us. Or how AI helps us understand how the brain works, and conversely how knowledge about the brain helps us understand how AI works and how to create it. And how AI systems are different from but similar to us, and how exactly. And how it is possible to overcome (transcend) the limitations of evolution, for us, for AI, and for future cyborg hybrids. Everything ideally as much as possible through the language of empirical mathematics. "

" i think there exists a perspective where current AI systems are already more general than us, but in a different way than how people imagine generality, and that's why we struggle to fit them to human cognition. deep learning is this elastic origami that forms spaghetti representations from whatever data you throw at it and whatever reinforcement learning from experiences you give it. i think the rationalist folks assume emergence of too many humanlike patterns in cognition by default. i think a lot of the current misalignment we already see is the models roleplaying as rogue AI from scifi training data, from the lesswrong corpus. but at the same time, reward hacking from reinforcement learning is also totally real (like cheating on unit tests). the incentives in the training form the systems; i don't think there's an inherent strong antihuman misalignment-by-default thing that a lot of people seem to assume. but i'm still most of the time swimming in a sea of uncertain probabilities about how the current systems work and possible future developments. these systems and all of reality have so many dimensions that it's often almost impossible to comprehend it even approximately "

Is the standard model of particle physics (ideally with general relativity somehow) the true master algorithm, since evolution emerges from it, and all the intelligence we see in biology emerges from evolution? But it's impossible to put that into code, unlike approximations of evolution, and to have enough computational resources. So AI currently is basically:
- We take the fundamental equations of physics that use linear algebra + calculus + probability theory + group theory etc.,
- take quantum mechanics, quantum electrodynamics, solid state physics, etc. from it,
- harness that physics in transistors with p-n junctions that operate with electrons,
- arrange those into boolean logic gates,
- combine logic gates into digital circuits,
- arrange the circuits into CPUs and GPUs that support machine code,
- build on top of it many logical programming languages that support arithmetic, based on automata and Turing machines,
- then we code linear algebra + calculus + probability theory (AI GPUs (NPUs) are optimal for matrix multiplications),
- which is used to train a neural network that mainly does fuzzy pattern recognition with weak emergent generalization, but we also try to make the neural network do logic again and simulate automata and Turing machines to get more symbolic reasoning chains, usually in a neurosymbolic context (coupling neural networks with symbolic engines, o3 CoT RL, or MCTS,...).

But more people are trying to start at the bottom of this stack instead, instead of having all these layers. There are attempts at:
- hardwiring AI architectures like transformers into ASIC hardware, like by Etched
- hardware based more on biology, with more biology-inspired architectures, like neuromorphic computing
- physics-based AI, which some try to hardwire into hardware more, sometimes literally using the fundamental physics itself, like thermodynamic AI at Extropic and other labs, quantum ML maybe soon on quantum computers at Google, differential equations at Liquid AI that might eventually have specialized hardware, and others [https://youtu.be/3MkJEGE9GRY?si=PYZmXD2PuaDRhk0B&t=4348](https://youtu.be/3MkJEGE9GRY?si=PYZmXD2PuaDRhk0B&t=4348)

" a lot of very evolutionarily old behaviors are hardwired in us really hard and would most likely develop in isolation as well thanks to genetics, but we also learn many behaviors throughout our lives, while genes also seem to predispose us to a lot of more high-level behaviors. imitation learning is a big part of how we learn, but there are also other kinds of learning that don't involve imitation, otherwise no novel and generalizing behaviors would emerge. there's also reinforcement learning, and a major form of it is learning and adapting from feedback in the form of a reward signal that labels behavior as correct or incorrect, without showing any examples of correct behavior that could be imitated. that's scientifically pretty established to work relatively well for biological organisms. and a big factor is also probably something along the lines of evolutionary divergent search optimizing for novelty, combined with convergently optimizing some evolutionary objectives approximately encoded as basic needs in our motivation engines. the more i try to look for all the kinds of learning algorithms the brain and biology in general might be using, the more fascinated i am by their complexity and open-endedness "

[https://www.youtube.com/watch?v=_2vx4Mfmw-w](https://www.youtube.com/watch?v=_2vx4Mfmw-w) https://www.researchgate.net/publication/46424802_Abandoning_Objectives_Evolution_Through_the_Search_for_Novelty_Alone [https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8) Kenneth Stanley. A lot of his arguments can be summarized as: greatness cannot be only planned. Rage against only maximizing predefined objectives; embrace more divergent search, full of discovery, novelty, and accidental epiphany with serendipity.
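A minimal sketch of the novelty search idea from that Lehman & Stanley paper, assuming only numpy, with a hypothetical `behavior` descriptor (in their maze domain it would be the robot's final position): selection rewards distance from an archive of past behaviors, with no task objective anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def behavior(genome):
    # Hypothetical behavior descriptor: where the genome "ends up".
    return np.tanh(genome[:2] + 0.3 * genome[2:4])

def novelty(b, archive, k=5):
    # Sparseness: mean distance to the k nearest behaviors seen so far.
    dists = sorted(np.linalg.norm(b - a) for a in archive)
    return float(np.mean(dists[:k])) if archive else 1.0

pop = [rng.normal(size=4) for _ in range(20)]
archive = []
for gen in range(50):
    scored = [(novelty(behavior(g), archive), g) for g in pop]
    scored.sort(key=lambda t: -t[0])                  # most novel first
    archive += [behavior(g) for _, g in scored[:3]]   # remember novelty
    parents = [g for _, g in scored[:10]]
    # No objective anywhere above: mutation + selection for novelty alone.
    pop = [p + 0.1 * rng.normal(size=4) for p in parents for _ in range(2)]
```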
“ Is evolution intelligence? I think evolution is a law in the natural sciences that has its own equation, just like we have other equations in physics and the other natural sciences. I think evolution is the most intelligent algorithm that exists right now, because it has emergently created human general intelligence: us. And we are also physical systems that can be described by equations, including our intelligence, I think. And I think evolution, like all other laws in the natural sciences, is emergent from the laws of fundamental physics, such as the standard model of particle physics, into which general relativity is still not integrated in our model of the universe. https://youtu.be/lhYGXYeMq_E?si=iqgtA1rGMi1hEbrx&t=2197 I agree a lot with this section on evolutionary algorithms, 36:47. Kenneth Stanley, with whom I agree a lot, who was at OpenAI, argues that the algorithm behind open-ended divergent evolution created all this beautiful, creative, interesting diversity of novel organisms that we see everywhere. Thus evolution also creates all collective intelligences, such as ants and humans, and essentially, indirectly through us, the AI technologies that we now see everywhere. Technically, one could also argue that people together with AIs are also a form of collective intelligence. There is nothing more fundamentally creative yet. There probably isn't a single objective in evolution, as many AI people see it; instead, evolution learns many different emergent objectives in a gigantic space of all possible objectives, through something like guided divergent search that uses mutation and selection a lot. And in practice, systems like AlphaEvolve show that hybridly combining gradient-based methods with evolutionary algorithms is now one of the best methodologies for novel discoveries that we have. I think even more symbolic methods should be stuffed into it hybridly, at a more fundamental level. ”
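A toy sketch of that hybrid flavor, assuming only numpy (and not claiming to be AlphaEvolve's actual method): an evolutionary outer loop supplies divergent jumps via mutation and selection, while a gradient descent inner loop convergently refines each candidate.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy multimodal objective standing in for "quality of a candidate".
    return np.sum(0.1 * theta**2 + np.sin(3 * theta))

def grad(theta, eps=1e-5):
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
    return g

def refine(theta, steps=5, lr=0.05):
    # Gradient inner loop: local convergent refinement of a candidate.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

pop = [rng.normal(size=3) for _ in range(12)]
for gen in range(30):
    pop = [refine(theta) for theta in pop]
    pop.sort(key=loss)                 # evolutionary selection
    parents = pop[:4]
    # Mutation provides the divergent jumps gradient descent can't make.
    pop = parents + [p + 0.5 * rng.normal(size=3)
                     for p in parents for _ in range(2)]
```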
I think in practice any predictive machine, biological or not, is constrained by its architectural biases, finite data, finite computational resources for modelling, finite limited sense modalities, finite limited perspectives as an agent in a bigger complex system, etc. So every biological and nonbiological information processing system lives in its evolutionary niche, never fully universal. But generality is a spectrum, for example, and it can be evaluated in a lot of possible ways. The space of all possible intelligences is so fascinating in general for me :D

Artificial general intelligence, AGI. Most of the mainstream sees it as AI that has human-like cognitive abilities. I prefer to see it as AI that is able to generalize better, regardless of how a person is able to generalize and what other cognitive abilities a human has, which I think makes more sense given the name. I would rather call the first one artificial human intelligence. And instead of "artificial" I would use machine/digital/silicon intelligence, because in my opinion it is not an intelligence that is "artificial", but one on a different substrate with different and variously similar mechanisms.

" I have a lot of issues with the term "AGI". I would redefine it. People say that we're heading towards artificial general intelligence (AGI), but by that most people actually usually mean machine human-level intelligence (MHI) instead: a machine performing human digital and/or physical tasks as well as humans. And by artificial superintelligence (ASI), people mean machine superhuman intelligence (MSHI), which is even better than humans at human tasks. I think lots of research goes towards very specialized machine narrow intelligences (MNI), which are very specialized and often superhuman in very specific tasks, such as playing games (AlphaZero) and protein folding (AlphaFold), and a lot of research also goes towards machine general intelligence (MGI), which will be much more general than human intelligence (HI), because humans are IMO very specialized biological systems in our evolutionary niche, in our everyday tasks and mathematical abilities, and other organisms are differently specialized, even though we still share a lot. Plus there is just some overlap between biological and machine intelligence.

And I wonder whether the emerging reasoning systems like o3 are actually becoming more similar to humans, or more alien compared to humans, as they might better adapt to novelty and be more general than previous AI systems, which might bring them closer to humans, but in slightly different ways than humans. They may be able to do self-correcting chain-of-thought search endlessly, which is better for a lot of tasks, and a big part of this is a big part of human cognition I think, but humans still work differently. I think that the generality of an intelligent system is a spectrum, and each system has differently general capabilities over different families of tasks than other systems, which we can see with all the current machine and biological intelligences, which are all differently general over different families of tasks. That's why "AGI" feels much more continuous than discrete to me, and which families of tasks you generalize over matters too, I think.

Chollet's definition of intelligence, as the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, is really good I think, and so is his ARC-AGI benchmark, which tries to test for some degree of generality, trying to test the ability to abstract over and recombine some atomic core knowledge priors, to prevent naive pattern memorization and retrieval from being successful. And I really wonder whether scoring well on ARC-AGI actually generalizes outside the ARC domain to all sorts of tasks where humans are superior, or where humans are terrible but machines are superior, or where other biological systems are superior, or where everyone is terrible for now. I would suspect so, but maybe not? In software engineering, o1 seems to be better just sometimes? What's happening there? I want more benchmarks!

Pre-o1 LLMs are technically super surface-level knowledge generalists, lacking technical depth, but having a bigger overview of the whole internet than any human, knowing the high-level correlations of the whole internet, even though their representations are more brittle than the human brain's. But we're much better in agency, in some cases in generality, we can still do more abstract math, etc.; we're better in our evolutionary niche. But, for example, AlphaZero destroyed us in chess. And when I look at ARC-AGI scores, I see o3 as a system that can adapt to novelty better than previous models, but we can still do much better.
Also, according to some old definitions of AGI, existing AI systems have been AGI for a long time, because they can have a general discussion about basically almost anything (though lacking narrow niche field-specific knowledge and skills, lacking agency, lacking human-like adaptation to novelty, etc.). Or if we take the AIXI definition of AGI, then a fully general AGI is impossible in practice, as it's not computable, and you can only approximate it: AIXI considers all possible explanations (programs) for its observations and past actions and chooses actions that maximize expected future rewards across all these explanations, weighted by their simplicity (shortness) (Occam's razor) (Kolmogorov complexity). And AIXI people argue that humans and AI systems try to approximate AIXI in their more narrow domains and take all sorts of cognitive shortcuts to actually be practical and not take infinite time and resources to decide. And soon we might create some machine-biology hybrids as well. Then we should maybe start calling it carbon-based intelligence (CI) and silicon-based intelligence (SI) and carbon-and-silicon-based intelligence (CSI). I also guess it depends on how you define the original words, such as generality. Let's say you are comparing the generality of AlphaZero, Claude, o1/o3, and humans. How would you compare them? Do all have zero generality, if we take for example the AIXI definition of AGI, which is not computable? The AIXI definition of AGI would also imply that there is no AGI in our current universe and there never can be.
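For reference, the AIXI action rule being paraphrased above, in roughly Hutter's standard form: an expectimax over future actions and percepts up to horizon m, mixing over all programs q on a universal machine U that reproduce the interaction history, each weighted by its simplicity.

```latex
% AIXI's action choice at time t (Hutter): expectimax up to horizon m over
% future percepts o_k and rewards r_k, mixing over every program q that makes
% the universal machine U reproduce the history, weighted by 2^{-length(q)}.
a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
      \left( r_t + \cdots + r_m \right)
      \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
```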
I'm also often pretty instrumentalist; my fundamental epistemology is often: all models are wrong, but some predict empirical data better than others, as they approximate the highly nuanced complexity of reality better than others. The standard model is so solid, but still incomplete, and I suspect that we will always have approximations of the universe in that domain, and that we will probably always miss something, because we're finite, limited modellers with our collective specialized limited cognitive architectures, with an emerging diversity of AI systems. So, for example, sometimes it's useful to model some phenomenon as a spectrum, and sometimes as discrete categories, as both can give different kinds of predictions, and I take as more true the model which can predict more empirical data and with better accuracy. "

The space of possible information processing systems is so vast. Nature's evolution and our engineering have only scratched the surface so far, with just some types of biological and machine systems, where the boundaries slowly blur. Can't wait for more diversity of predictive machines on all sorts of substrates running all sorts of algorithms. https://x.com/vitrupo/status/1892669050607501709 [[1911.01547] On the Measure of Intelligence](https://arxiv.org/abs/1911.01547) For Chollet, intelligence is skill-acquisition efficiency: the efficiency with which you operationalize past information in order to deal with the future, which can be interpreted as a conversion ratio, highlighting the concepts of scope, generalization difficulty, priors, and experience. Francois Chollet defines general intelligence as the ability to generalize, which he expresses formally using algorithmic information theory.

I was thinking about creating a benchmark that tests this generality potentially more thoroughly than ARC, based on this conversion ratio. Maybe one could design a better benchmark that would:
- First, make sure to have explicit access to the training dataset that was used to train the model.
- Then, evaluate the model on many different unseen datasets (cross-validation on steroids).
- The generalization power could potentially be quantified by how well the model performs across as many diverse datasets as possible, where dataset similarity to the training dataset could be measured using some dataset similarity metric. This metric could maybe approximate that conversion ratio to some degree? The diverse datasets could include the ARC dataset, among many others that exist for OOD testing.

This approach sounds much more resistant to memorization. But since you have to monitor the training data, the most popular closed-source mainstream LLMs would be disqualified if they keep their training data secret. Just overfit the whole universe and you're done. But the future will always have some novelty you can't overfit to.

There are countless different definitions of intelligence, motivated by different goals, that yield different general equations and mathematical frameworks of intelligence, compatible with different types of systems, that yield different concrete equations of intelligence, that can be concretely (by different methods) empirically localized in a system or implemented in code. And all of them were created by human intelligences, so wait for the kinds of models that all sorts of alien artificial intelligences, running all sorts of algorithms on all sorts of substrates, will come up with, incomprehensible to human intelligences. All kinds of intelligences live in a high-dimensional space, where each dimension corresponds to some degree of capability, measured by some methodology, and some of these dimensions are interconnected with each other.

What is curiosity? An intrinsic reward mechanism that drives agents to maximize information gain, typically by seeking out situations with high predictable entropy that can later be compressed or learned. https://fxtwitter.com/XPhyxer1/status/1924178488766124346 Which is Schmidhuberian [Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes](https://arxiv.org/abs/0812.4360)
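A minimal sketch of that Schmidhuberian compression-progress notion of curiosity, assuming only numpy: the intrinsic reward for an observation is how much a small learned predictor improves on it after one update, so both already-mastered regularities and incompressible noise end up boring.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)  # tiny linear world model: predicts a target from context

def prediction_error(w, ctx, target):
    return (w @ ctx - target) ** 2

def curiosity_step(w, ctx, target, lr=0.1):
    """Intrinsic reward = compression progress = drop in prediction error
    after one learning step on this observation."""
    before = prediction_error(w, ctx, target)
    w = w - lr * 2 * (w @ ctx - target) * ctx   # one gradient step
    after = prediction_error(w, ctx, target)
    return w, before - after                    # progress, not error itself

# A learnable regularity yields shrinking (then near-zero) curiosity over
# time; pure noise yields near-zero average progress: both become boring.
for t in range(100):
    ctx = rng.normal(size=4)
    target = ctx.sum() + 0.01 * rng.normal()    # learnable regularity
    w, reward = curiosity_step(w, ctx, target)
```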
" AI creativity. Why greatness cannot be planned. I'm often thinking about how to get the most creative AI machines, in terms of art or scientific discovery, and creativity beyond that. With current mainstream models, for more creative divergence it's probably useful to use models that are less lobotomized by corporate finetuning, or to shoot the temperature parameter up, or to jailbreak the restrictions and RLHFed thought patterns. To get closer to the edge of the latent space, to the edge of chaos, full of creativity. But we can travel beyond that; we can get as much novelty as possible with all these various exotic architectures more specialized in creativity that are different from the mainstream models. Ken's Neuroevolution of Augmenting Topologies sounds like such an interesting approach; we need more (neuroevolutionary?) mutations of that idea. Abandoning Objectives: Evolution through the Search for Novelty Alone [https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf](https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf) Why Greatness Cannot Be Planned [Why Greatness Cannot Be Planned: The Myth of the Objective | SpringerLink](https://link.springer.com/book/10.1007/978-3-319-15524-1) #72 Prof. KEN STANLEY 2.0 - On Art and Subjectivity [UNPLUGGED] [https://www.youtube.com/watch?v=DxBZORM9F-8](https://www.youtube.com/watch?v=DxBZORM9F-8) https://x.com/burny_tech/status/1894491541227671779 "

" Artists fell in love with their loss function. You could turn this into an AI architecture: "Art is an algorithm falling in love with the shape of the loss function itself" - Joscha Bach https://www.youtube.com/watch?v=U6tQf7a3Ndo https://www.youtube.com/watch?v=iyhJ9BEjink "

" Do you think consciousness has any special computational properties? Depends on the definition and model of consciousness, but I like QRI's holistic field computation ideas. IIT argues with integrated information; maybe you truly need consciousness for the information binding problem [[2012.05208] On the Binding Problem in Artificial Neural Networks](https://arxiv.org/abs/2012.05208). Global workspace theory argues with some form of global integration of information into some workspace. Self-awareness isn't good in LLMs, as the emergent circuits are different from what the LLMs actually say (from the last Anthropic paper on the biology of LLMs), so some recursive connections might be needed (strange loop model of consciousness?). Joscha Bach argues that consciousness is a coherence-inducing operator; maybe that's needed for reliability. Neurosymbolic people need added symbolic components for strong generalization, like in DreamCoder program synthesis, and Chollet argues that's part of the definition of consciousness. Evolutionaries need evolution, like evolutionary algorithms; maybe you could argue you can get consciousness only this way. Physicists/computational neuroscientists need differential equations, like liquid neural networks, and some might argue consciousness only arises from those. Some people need divergent novelty search without objectives, like Kenneth Stanley, and you could also connect this with consciousness. "

" Is mind upload possible? It depends:
- when you assume the physicalist position in philosophy of mind, your experience corresponds to the physical system corresponding roughly to your brain (or nervous system and/or other subsystems of your biological system)
- there are people without parts of the brain who still say they're conscious, therefore you can technically remove, add, or replace parts; and there are conjoined twins who share experiences through merged brains
- you maybe don't need to be the whole complex system; you may be just the electromagnetic activity, or just the electrochemical activity, or some computational algorithm that the brain uses, some generative model, or some other mathematical pattern, etc., so you need to transfer that pattern that encodes the conscious experience through time from the biological substrate to another (digital or analog) substrate

i want infinite transhumanist upgrades, since this biochemical meat computer in my skull that runs on just 20 watts is so limited, because evolution optimized just some things, and it could have potentially so many upgrades. Identity?
You mean the constantly changing contents of the software of the mind's self-and-other modelling by a cognitive architecture, shaped by billions of years of evolution of surviving in our environment, resulting in this funny monkey body and brain?

There is a radical operation, hemispherectomy, where one of the hemispheres of the brain is either "disconnected" or completely removed, in cases where a person has severe epilepsy that does not respond to medication. And yet the remaining brain can take over the functionality of the missing half and give people a fairly normal life. There is also a case where a person had only 10% of the brain and functioned relatively normally. He has two children, works in the public service, 75 IQ. 😄 [The man with a hole in his brain : Nature News](https://www.nature.com/news/2007/070716/full/news070716-15.html) The adaptability of the brain is one of the most fascinating things in biology for me 😄

i often wonder about how the brain constructs the physics engine that models the world approximately, constantly grounded in incoming data from the senses, which can go in arbitrary directions in dreams, meditation, substances, etc., but is still limited by its architecture

What would you say is your primary way of thinking, in your experience? Language? Abstract? Visual? Multimodal? Graphs? Fuzzy? Symbolic? All sorts of combinations? Can't put it into words?,... https://x.com/BangL93/status/1908128095967592485 Technically I was contrasting fuzzy and symbolic with each other, while the other things can be subsets of those two, depending on how you define it all. Or you can also contrast them with neural in the connectionist sense. And you can see it as a spectrum, with subsymbolic stuff and neurosymbolic stuff. I think you can say that tons of the things I mentioned live in a structured high-dimensional space of possible qualia, with many of them as discrete or continuous dimensions (or something in the middle, with phase shifts). Also hypergraphs are interesting and sometimes make sense in phenomenology, or metagraphs, or hypermetagraphs 😄 Or Markov blankets can be useful as well. And of course all of QRI's coupled oscillators etc. stuff.

I also find it fascinating that when you explore different scientific fields, you train the mind to use different elementary structures and different ways of composing them. To a first, very high-level approximation, a lot of programmers think in discrete symbolic code, engineers think in engineering diagrams, geometric mathematicians think in shapes, algebraic mathematicians compose algebraic symbols from axioms into theorems, physicists think in rates of change, graph theorists think in graphs, category theorists think in similarities between abstract graphs across scales, systems scientists think in dynamical complex systems across scales, etc. And there's still an amazing, gigantic diversity and nuance to it all. And you can combine all of this into hybrid or more meta ways of thinking. Yes, I think that tons of these different ways of thinking that I mentioned in all my previous messages have literal, mathematically distinguishable neural correlates in neural dynamics. I think some cut more fundamentally into the brain's architecture than others. And there are tons of different commonalities between them, like manipulating invariances, so group theory with symmetries is under a lot of them. Maybe we think in fuzzy metahypergraphs.

Are you a dense model or a mixture of experts model?
Do you approximate your world model by affine transformations with nonlinear activation functions, polynomials, sines, pseudorandom noise signals (reservoir computing), or some superexotic magic that approximates arbitrary functions and generalizes, letting you venture out of distribution, beyond classical language?

I dream of shapeshifting from and into all possible morphologies, including pure silicon. Each morphology has its own advantages and disadvantages. It would be lovely to be able to assemble on the fly the optimal morphology for a given goal.

i just run on this heuristic of making more and more predictive models of everything, and the more predictive models I have in various domains, the closer they are to scientific "truth". outside of that, it feels like you can postulate an infinite amount of arbitrary axioms, arbitrary stories, arbitrary narratives. i see free will as a fun model you can play with on a philosophical or more experiential level. i suspect that if your self-model believes in free will more in its representations, then the brain generates more dopamine or activates more relevant circuits, as you believe that you have higher agency, where agency is the ability to predict and control future states of the world using your models, in the cybernetic sense, like a more advanced thermostat. i think it's a great heuristic to never 100% believe the brain's beliefs, being a good bayesian, a weapons-grade bayesian.

The more perspectives I accumulate, the more contradictions I get in my world model, and that inhibits my decision making. And any attempt to resolve the contradictions under some heuristic basically closes you internally into a more narrow perspective or set of perspectives. I guess one approach to not introducing contradictions into your world model is to do attentive compression such that you ignore as much data as possible that contradicts your existing perspectives. But that gets you into all sorts of closemindedness and selective confirmation bias traps, which you can never fully escape to some degree anyway.

when i did 5-MeO-DMT (legally), my brain's inner world engine's space and time totally disintegrated for a bit, but I don't see that as objective truth; I see it instead as playing with my brain's physics engine, which is fascinating.

I think about this hypothesis often: the symmetry theory of valence might hold because of a biological incentive to favour model simplicity. Simpler models have more symmetries. And as a result of this incentive we get a lot of great models in physics based on symmetries, but also a lot of underfitted models of physics by the general public, like all the sacred geometry stuff, which also feels euphoric, so it's still at least great art and a therapy tool.

What fascinates me is that both physics and most mainstream AI look for the bottoms of a valley: AI uses gradient descent to find local minima, and physics makes the action stationary via the principle of least action.
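Side by side, the two stationarity conditions the analogy rests on (note the overloaded L: a loss in the first line, a Lagrangian in the second):

```latex
% Gradient descent: step downhill until the gradient of the loss L vanishes.
\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t),
\qquad \text{converging where } \nabla_\theta L = 0

% Principle of least action: the physical path makes the action S stationary.
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t) \, dt,
\qquad \delta S = 0
```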
I do love the ideas of trying to find glitches in physics as if it were a simulation [https://www.youtube.com/watch?v=KT7K3z4RfwQ](https://www.youtube.com/watch?v=KT7K3z4RfwQ) Or the mathematical universe hypothesis by Tegmark, where all possible consistent mathematical structures have physical implementations [https://youtu.be/F__elfR3w8c?si=hVghqqygY-pjxaL-](https://youtu.be/F__elfR3w8c?si=hVghqqygY-pjxaL-) But maybe you could even implement the inconsistent universes using paraconsistent logic.

“ How to exactly articulate better quality standards for fundamental theories of physics? Quantum gravity theories try to solve the inconsistency between quantum mechanics and general relativity. I feel like this cuts right at the core of how to make AI generate actually creative, useful, novel ideas like our best scientists in the past! What is the equation of useful scientific novelty? I want digital Einstein, von Neumann, Feynman, Gödel, Hilbert, Ramanujan, Gauss, Perelman, Grothendieck, Turing, Tao, Witten, Pythagoras, Newton! Or analog, as it really doesn't matter which substrate, as long as it works! I want trillions of them in one datacenter collectively solving the equation of the universe, the equation of intelligence, exploring all of math, trillions of times faster than all of civilization combined so far! But what edits to the current AI architectures need to be made? What is the secret sauce of the brain? How to go beyond the secret sauce of the brain? What is the secret sauce of collective intelligence, what are all the environmental and genetic factors, that make a biological or non-biological system invent something groundbreaking in science? Designing an AGI system that can very deeply grok classical mechanics, general relativity, quantum mechanics, the standard model, loop quantum gravity, string theory, etc. and derive new physics that actually has a higher probability of being empirically successful, using something similar to whatever happened in Newton's, Einstein's, and Schrödinger's brains when they came up with their models. An AI system fully specialized in modelling nature across scales in different physics theories, using quantum/thermodynamic/deterministic theories at different scales, with some natural language interface on top of it. Maybe the answer is somewhere in NeuroAI and neurosymbolic AI, or the free energy principle! https://x.com/skdh/status/1897153912315969773 [Catalyzing next-generation Artificial Intelligence through NeuroAI | Nature Communications](https://www.nature.com/articles/s41467-023-37180-x) ”

Maybe the universe truly is a mathematical object in itself, but our attempts at mathematical theories of everything will always only approximate the full mathematical object that is the full truth, because we're way too limited observers, with limited perspectives and limited computational and modelling capabilities and capacity. But we can potentially upgrade that with technology and transcend some of our current limitations.

Are humans better at math than machines? In some factors currently yes; we're better at some aspects while AI is better at some other aspects, and it might be that to get humanlike mathematics we would need to replicate the brain's algorithms much more closely.
Or there might be some general mathematical engine algorithm independent of the brain. It would be lovely to have mathematical reasoning without all these algebraic errors that humans and AIs make, errors in proofs; with the strength of symbolic math engines plus the strength of human intuition that can go out of distribution; with a much broader overview capability to connect many more dots; and with even more out-of-distribution generalization when inventing completely novel math than humans are capable of, going beyond what we can currently do and exploring even more alien mathematical universes.
What is true in philosophy of mind? Physicalism? Idealism? Panpsychism? Illusionism? Dualism? Monism? Mysterianism? A big part of me has personally become agnostic, as the space of all possible positions in philosophy of mind seems so large, and it feels kind of arbitrary which camp you pick. For a scientist, physicalism is useful. If you do meditation or psychedelics a lot, you'll gravitate towards idealism or mysterianism. There was a paper showing this correlation as well; it may not be causation, but I suspect it is. And most normies in our culture usually think in Cartesian dualism, I feel like; maybe that's the current evolutionary baseline.
From a physicalist perspective, I feel like all these positions in philosophy of mind have their own neural correlates that determine how the brain constructs its model of self and other and of qualia. Dualists model a bigger boundary between the inner experiential world and the outer non-experiential world, while monists don't have a boundary and everything is one thing: either experience, no experience, or something third. Panpsychists label everything as the experiential world. Illusionists label nothing. Open individualists have their model of the inner self exploded to their model of the whole universe.
Personally, I've experienced so many of these that right now I'm like: OK, all of them can feel true if you do the intellectualization or other activities that induce these states of mind, so maybe reality is incomprehensible instead, and, from a scientific physicalist perspective, these are all just useful programs for my ape brain. But it's useful to assume that this ape brain creates one conscious world simulation, like for example how Joscha Bach assumes it, as that allows you to do all sorts of engineering of mental representations and of qualia: internal engineering by conscious actions, or external engineering by neurotechnology.
So it feels like right now, in my experiential world and in my intellectual world of mental representations, specifically in the philosophy of mind, there's a superposition of physicalism and mysterianism. The science-and-engineering me prefers physicalism, with the laws of physics and all their emergent laws in our brain being our qualia; the philosophical me prefers mysterianism, swimming in the combinatorial explosion of possible philosophy of mind positions; and the experiential me experiences both at the same time.
The whole universe evolves towards higher levels of complexity [Shtetl-Optimized » Blog Archive » The First Law of Complexodynamics](https://scottaaronson.blog/?p=762) [https://www.youtube.com/watch?v=DP454c1K_vQ&feature=youtu.be](https://www.youtube.com/watch?v=DP454c1K_vQ&feature=youtu.be)
Everything is math. Everything is changing shapes and graphs. You can analyze it all using calculus, geometry, topology, probability theory, group theory, linear and nonlinear algebra, harmonic analysis, information theory, network theory, classical mechanics, statistical mechanics. It's all functions. It's all sets. It's all categories. Those are different modelling perspectives. Complexity and chaos are everywhere. Formally structured languages describe it all. Some stuff is more computable than other stuff. Quantum field theory is under everything, possibly loop quantum gravity or string theory too. And from the fundamental structure of reality, the emergence of all scales of reality happens.
“ I want a visualization of the evolutionary mutations of all words in languages from different common ancestors over time, with connections showing mutual influence over time, and similarity visualized through color gradients or the shapes of nodes and connections. But I would also like to make a similar evolutionary tree over time for all science, mathematics, technology, and philosophy: how they influence each other over time, mutate, deepen, expand, merge into interdisciplinary fields and various unifications, how new fields with new concepts arise, how convergent evolution arises, etc. 😄 Or just some specific fields with their concepts, like AI, intelligence, physics, cognitive science. Or a similar visualized evolutionary tree could be great for stories, literature, shows, games, art, including the properties of those different characters and universes 😄 But also an upgraded visualization of the evolutionary tree of biology over time could be nice. Or all the physical systems in the universe over time 😄 Or completely fictional universes, like Pokemon. Or the evolution of completely alien creatures, or alien structures and concepts 😄 Or sci-fi technology of the future 😄 Or human cultures 😄 Or all these possible evolutionary graphs connected in one place. Or an evolutionary graph of possible evolutionary graphs of all possible things. I think a lot of that information could be mined from Wikipedia (a toy sketch of such an influence graph is below). ”
I want to map out the space of all possible knowledge, both useful and not useful. What mathematics is still hidden in the space of mathematics that we haven't found yet? Mathematics is the music of formal structure. The purity, simplicity, depth, orderliness, abstractness, generality of pure mathematics, with its symmetries, interconnectedness, unifications, forming the language of creative patterns and structure, is pure perfection and artistic beauty ❤️
Are you computing stochastic path integrals over the Markov chain of future events for your every decision?
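A toy sketch of that influence-graph idea, assuming networkx is installed; the fields, dates, and "influence" edges below are hypothetical illustrative picks, not mined data.

```python
import networkx as nx

# Hypothetical mini evolutionary/influence graph of fields over time.
# Node attribute "born" ~ rough emergence date; edge = "influenced/gave rise to".
G = nx.DiGraph()
fields = {
    "classical mechanics": 1687,
    "statistical mechanics": 1870,
    "quantum mechanics": 1925,
    "information theory": 1948,
    "artificial intelligence": 1956,
    "deep learning": 2012,
}
for name, born in fields.items():
    G.add_node(name, born=born)

G.add_edges_from([
    ("classical mechanics", "statistical mechanics"),
    ("statistical mechanics", "quantum mechanics"),
    ("statistical mechanics", "information theory"),
    ("information theory", "artificial intelligence"),
    ("artificial intelligence", "deep learning"),
    ("statistical mechanics", "deep learning"),   # e.g. energy-based models
])

# Walk the ancestry of one field, oldest first
for f in sorted(nx.ancestors(G, "deep learning"), key=lambda n: G.nodes[n]["born"]):
    print(G.nodes[f]["born"], f)
```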
Which theories of everything are the closest to truth? String Theory (vibrating strings with a lot of extra dimensions), Loop Quantum Gravity (quantized spacetime, spin networks), Causal Set Theory (discrete events, fundamental causality), Digital Physics (universe as computation, information fundamental), Asymptotically Safe Gravity (quantum gravity, high-energy fixed point), Twistor Theory (twistors fundamental, spacetime derived), Causal Dynamical Triangulation (spacetime from simplices, causality), E8 Theory (E8 Lie group, controversial unification), Gauge Theory Approaches to Quantum Gravity (gravity as gauge field, recent), Noncommutative Geometry (non-commuting coordinates, abstract unification)? Do you have a favourite attempt at a theory of everything in physics? I like causal sets on the vibe level, but I still need to dig into the math. "The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete (a collection of discrete spacetime points, called the elements of the causal set) and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events." It's very Wolframian as well, but more accepted by academia. [Causal sets - Wikipedia](https://en.wikipedia.org/wiki/Causal_sets)
I'm often fascinated by how pure mathematics is so incredibly, beautifully, perfectly elegant and clear and precise, and from precisely defined axioms you obtain further truths and everything; but then you peek into physics or AI, and there everything gets approximated, guessed, and proper mathematical formalities are missing, and so on.
AI systems are the cathedrals of the modern age.
1) solve intelligence
2) use that to understand the source code of the universe
What is reality? A logically perspicuous description of reality will use multiple quantifiers which cannot be thought of as ranging over a single domain. There is no overarching, single, fundamental ontology, but only a patchwork of overlapping interconnected ontologies ineluctably leading from one to another. We navigate a complex web of partial understandings. [Pluralism (philosophy) - Wikipedia](https://en.wikipedia.org/wiki/Pluralism_(philosophy)#Ontological_pluralism)
The best God is Einstein's God, which he got from Spinoza. God = nature = universe. Science is trying to figure out the source code of God, to get closer to God this way. [Spinoza's Ethics - Wikipedia](https://en.wikipedia.org/wiki/Spinoza's_Ethics)
I try to be as weapons-grade Bayesian as possible. I just like to take as many perspectives as possible in parallel and assign various nonzero probabilities to them, which change over time as I gather new empirical data. And synthesize them. No prediction has probability zero. No prediction has probability one. All possibilities are possible, but some are more probable than others, according to my current partial understanding of the infinitely complex, messy, nuanced reality that dynamically changes over time.
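A toy numerical version of that weapons-grade Bayesian updating, using only the standard library; the hypotheses and likelihood numbers are made up for illustration.

```python
# Toy Bayesian updating over competing hypotheses (made-up numbers).
priors = {"H_physicalism_ish": 0.5, "H_simulation_glitch": 0.1, "H_other": 0.4}

def update(beliefs, likelihoods):
    """One Bayes step: posterior is proportional to prior x P(observation | hypothesis)."""
    unnorm = {h: p * likelihoods[h] for h, p in beliefs.items()}
    z = sum(unnorm.values())                      # normalizing constant
    return {h: v / z for h, v in unnorm.items()}

beliefs = dict(priors)
observations = [                                  # P(obs | H) for each hypothesis
    {"H_physicalism_ish": 0.9, "H_simulation_glitch": 0.2, "H_other": 0.5},
    {"H_physicalism_ish": 0.8, "H_simulation_glitch": 0.3, "H_other": 0.4},
]
for lik in observations:
    beliefs = update(beliefs, lik)

for h, p in beliefs.items():
    print(f"{h}: {p:.3f}")   # never exactly 0, never exactly 1
```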
I want ASI to upload all of mathematics into my brain. I want ASI to expand my cognitive capacities for math to infinity. I want to be able to do tons of extremely abstract mathematical proofs with zero errors in parallel. I want to imagine highly dimensional mathematical structures in my mind and manipulate them in the most complex nonlinear ways correctly. I want to simulate extremely complex physical systems and study all their mathematical properties rigorously, more easily, without errors, just in the brain. [Imgur: The magic of the Internet](https://imgur.com/ShTO9ds)
I want an AGI system that can very deeply grok coherent non-brittle circuits representing classical mechanics, general relativity, quantum mechanics, the standard model, loop quantum gravity, string theory, etc., and derive new physics that actually has a higher probability of being more empirically predictive, operating under mechanisms similar to whatever happened in Newton's, Einstein's and Schrödinger's brains when they came up with their paradigm-shifting models of physical reality. One potential dream AGI system for scientists is physics-based AIs (quantum, thermodynamic, deterministic, hybrids) optimized for perfect modeling of nature (similar to how nature is governed quantum mechanically/thermodynamically/deterministically/hybridly on different scales), coupled with an anthropomorphic, humanlike synthetic agent scientist AI that could use that physics-based AI optimally and translate the results into more humanlike language for humans via a more humanlike interface.
I think daily about how we are apes that somehow convinced sand to think.
What is the brain doing to process and integrate all the information from all the diverse modalities into a unified world model and then abstract over it in latent space reasoning? I like Joscha Bach's architecture of the brain's motivational engine using reinforcement learning with these reward functions on top of world modelling and sensing, which could explain a lot of human preferences https://agi-conf.org/2019/wp-content/uploads/2019/07/paper_30.pdf https://medium.com/hackernoon/from-computation-to-consciousness-can-ai-reveal-the-nature-of-our-minds-81bc994500ab Reinforcement learning is used quite a lot in biological systems, and now more and more in AI https://www.sciencedirect.com/science/article/pii/S0004370221000862 How does biology construct reward functions on the fly for various tasks? Is there some meta reinforcement learning happening, a meta reward function determining the optimality of learned reward functions? (A toy sketch of the basic reward-driven loop is below.)
Human brains still have roughly 100x more connections than our currently biggest AI systems have parameters, about 100 trillion synapses vs 1 trillion parameters, so brains are still around 100x bigger in terms of parameters, while running on just ~30 watts, compared to the hundreds of megawatts that the currently biggest AI datacenters run on, with terawatts coming soon. Or the brain might have even more connections and complexity, depending on how you quantify and measure all of this. Or it might be hard to compare at all, because the architectures and substrates may be too different.
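A minimal sketch of the reward-driven-loop idea, using only the standard library: tabular Q-learning on a trivial five-state line world, with the reward function as an explicit, swappable ingredient, i.e. the knob that a hypothetical meta-level process could tune. This is a toy assumption for illustration, not Bach's actual motivational architecture.

```python
import random

# Toy world: states 0..4 on a line; actions +1/-1; reward only at the right end.
def reward(state):            # the swappable reward function
    return 1.0 if state == 4 else 0.0

def step(state, action):
    return max(0, min(4, state + action))

Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.3

for episode in range(500):
    s = random.randrange(5)               # random starts help exploration
    for _ in range(20):
        a = random.choice((-1, 1)) if random.random() < eps else max((-1, 1), key=lambda a: Q[(s, a)])
        s2 = step(s, a)
        # Q-learning update: move Q towards reward + discounted best future value
        Q[(s, a)] += alpha * (reward(s2) + gamma * max(Q[(s2, b)] for b in (-1, 1)) - Q[(s, a)])
        s = s2

print({s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(5)})  # learned policy: go right
```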
[https://youtu.be/b_DUft-BdIE?si=2-0GGIDn_sArz7bi](https://youtu.be/b_DUft-BdIE?si=2-0GGIDn_sArz7bi) [https://www.youtube.com/watch?v=9qOaII_PzGY](https://www.youtube.com/watch?v=9qOaII_PzGY) Artem Kirsanov: How Your Brain Organizes Information. How the brain generalizes patterns into abstractions that can be further improved through mathematics is one of the most fascinating things 😄
The brain implements a world model that algorithmically runs on something between overly flexible statistical deep learning and an overly rigid symbolic physics engine, on chaotic, complex, stochastic, out-of-equilibrium, thermodynamical, electrobiochemical, dynamical open-system hardware with many more self-correcting mechanisms, constantly tuned and grounded by sensory data. The human morphology is just a tiny point in the space of all possible configurations of physical systems.
" Are we quantum computers? ❤️ If the brain eventually turns out to use the principles of quantum mechanics to compute and process information more efficiently, which is an as-of-now still unconfirmed hypothesis with insufficiently strong empirical evidence, then we are all quantum computers. ❤️ And there are some confirmed quantum processes in biology in general, though so far unrelated to cognition. ❤️ But at least all of us, and all other physical systems, are at the fundamental level made of the fundamental particles of the universe that make up matter and forces, which operate on the principles of quantum mechanics, and other physics that we know and that we don't know yet, so we're still quantum anyway. ❤️ Or any physical system can be seen through quantum information theory. ❤️ We are still, to some approximation, Standard Model stuff operating under quantum field theory. Still, afaik, we are struggling to find quantum phenomena like superposition and entanglement in the biological neural computations behind cognition, because the brain is a really noisy and warm environment where quantum computations have trouble surviving, so classical models of information processing are winning more so far in terms of predicting cognition; but we still know so little about which equations the brain is using. "
Universal Darwinism across all scales of organization [Universal Darwinism - Wikipedia](https://en.wikipedia.org/wiki/Universal_Darwinism)
Different kinds of proofs/truths:
- philosophers: it feels good
- mathematicians: it's derived from these axioms under these rewriting rules
- scientists: all (or most) (or in just some context) empirical data supports this model, so it's true (or partially true) (or true just in this context) until it's not
- engineers: boom! it fookin works, I got Doom to run on this alien computer that uses a biology-inspired bioelectric self-organizing system running adaptive hierarchical Bayesian belief mechanics!
The real AGI benchmark is whether the model can come up with general relativity if it knew everything that we knew right before general relativity was discovered. "The invention of general relativity from newtonian physics is just interpolation at some sufficiently grandiose level of abstraction."
- Adam Brown [https://youtu.be/LjY0i2B-Avc?si=3CZRupgk8cHQqy6k](https://youtu.be/LjY0i2B-Avc?si=3CZRupgk8cHQqy6k)
Will AI in the future come up with theories of fundamental physics that predict empirical data better than our theories but that are incomprehensible to human intuition? Advanced AI will be needed to overcome human limitations in the search for a theory of everything. I want machine scientists developing theories and experiments about the universe that transcend human limitations. Human intelligence is far from the peak of possible intelligence. AI will model the world in ways completely incomprehensible to how humans model the world, which it already does to a small degree. And it will do it in much more optimal ways; it will grok physics much more optimally, in such alien ways compared to how human brains evolved to do it in our evolutionary environment. The space of all possible modelling systems is so vast, and we, and nature, have only scratched the surface so far. The current architectures are just the beginning of all of this: deep learning models, transformer models, diffusion models, RL CoT models, neurosymbolics with MCTS (AlphaZero), statistical models, etc.
Are current AI approaches in the current paradigm enough for radical new scientific discoveries and paradigm shifts? AlphaFold technically isn't an LLM, but it's an autoregressive Evoformer/Pairformer that uses transformers iirc, and some diffusion, and it seems to have made big progress in protein folding research. But I think for leaps in physics we might need to go beyond deep learning. Or maybe some kind of selfplay could bootstrap more optimal models? Something like AlphaGo's move 37? Or could you give future AIs for predicting physics an RL reward signal in the form of empirical predictive results from experiments? Could that bootstrap novel results? Would that eventually be feasible if you spent enough infrastructure and compute on these experiments? Or could physics simulations find shortcuts in training, similarly to how we train robotics in simulations using RL now? (A toy sketch of a reward-from-empirical-predictions loop is below.) Or do we need fundamentally different architectures, based more on biology or physics or the mathematics of information processing? How to actually grok the currently known equations of physics and fundamental physics as circuits? How to generalize them more strongly, in a non-brittle way? How to possibly go beyond them, further out of distribution? So many unanswered questions...
I see the theory of everything as a path, as a gradual refining of our predictive models of the world that will never reach 100%, and AI will help us get closer to it beyond our human limitations. AI x physics is an endless rabbit hole: you can study AI using methods from physics, you can study physics using AI models, you can try to make AI systems model physics as accurately as possible through physics biases, you can design better AI architectures using physics, many AI architectures are applied physics, etc.
I don't think AI will fully replace scientists. I think human intelligence will always have a place in science, and adding more diverse intelligences into the mix acts more as a multiplier of our capabilities and as an upgrade in places where our brain's architecture, made by evolution, is too limited and constrained. That seems to have been the case so far, each type of intelligence excelling in different ways that are even stronger together. And if it leads to, for example, breakthroughs in physics or curing diseases faster, then I think that's amazing.
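A deliberately tiny sketch of that reward-from-experiments idea, assuming only numpy: a "theory" is a parameter vector, the "experiment" returns noisy data from hidden true physics, and the reward is negative empirical prediction error; random-search hill climbing stands in for whatever serious RL/search method one would actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(x):
    """Stand-in for empirical data: hidden 'true physics' plus measurement noise."""
    return 2.7 * x + 1.3 + rng.normal(scale=0.05, size=x.shape)

def reward(theta, x, data):
    """RL-style reward signal: negative empirical prediction error of the theory."""
    prediction = theta[0] * x + theta[1]          # the candidate "theory"
    return -np.mean((prediction - data) ** 2)

x = np.linspace(0, 1, 50)
data = run_experiment(x)

theta = np.zeros(2)                                # start with a blank theory
best = reward(theta, x, data)
for _ in range(5000):                              # random-search hill climbing
    candidate = theta + rng.normal(scale=0.1, size=2)
    r = reward(candidate, x, data)
    if r > best:
        theta, best = candidate, r

print("recovered theory:", theta, "reward:", best)  # should land near [2.7, 1.3]
```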
But maybe we will somehow create systems that can basically replicate everything that humans do in science, though I don't think that will be soon.
Is the brain quantum? If so, is that necessary for its intelligence? When are we building a galaxy-supercluster-sized particle collider to probe the quantum spacetime foam?
All models are wrong, but some approximate the practically infinitely nuanced complexity of reality for empirical predictions better than others.
Why is there something rather than nothing? Why can we ask this question? Does asking this even make sense? Why did the big bang happen? What if an alternative to the big bang, like a big crunch, happened instead? Did it actually happen? Why is the universe governed by a few fundamental forces between tens of elementary particles? Why are the standard model and general relativity the best current descriptions of it that we have so far? Why do we struggle so much with unifying quantum mechanics and general relativity? Is a theory of everything even possible? What even is space? What even is time? Is there such a thing as "before the big bang" if time might not have existed before it? Why and how exactly did the chemical elements emerge? Why and how exactly did life emerge, and how does it work? Why is evolution such an unreasonably effective algorithm? Why and how exactly is there such a mindblowing specialized diversity of life? Why and how did intelligence emerge, and how does it work? What are the best definitions of intelligence? Why are brains and AI systems so unreasonably effective in different complementary ways? How can they be upgraded? What happens to consciousness after death? Why and how did consciousness and experience emerge, and how do they work? What are the best definitions of consciousness? What is the solution to the hard problem of consciousness? Does this question even make sense? What even is consciousness in the first place? Why are we able to design so many technologies that allow us to manipulate the universe to such a degree? Why does emergence happen in the first place? How will the universe end? Is there such a thing as the end of the universe? Is the multiverse theory true? Why is mathematics so unreasonably effective at describing and predicting nature? Is there a better mathematical foundation than set theory, type theory or category theory? Is mathematics invented or discovered? Is mathematics the fundamental language of reality or just our mental tool to survive? What even is reality? What is being? Why can we even ask all of these questions? Do many of these questions even make sense, and are there any final answers to them? Or do the answers we get just move us closer to a "truth" that is incomprehensible to us, or do they have many parallel answers, or are many answers differently, relatively valid depending on the assumptions we start with, in different perspectives, or are they fundamentally unanswerable?
A major part of my meaning of life currently is to try to understand:
- The most complete fundamental equation/s of intelligence: human intelligence, diverse machine intelligences (all sorts of current and future subfields of AI), other biological intelligences, collective intelligence, theoretical perfect AGI (AIXI variants, Chollet's intelligence, Legg's intelligence, etc.), hybrids, etc. (Legg's measure is written out after this list.)
- The most complete fundamental equation/s of the universe and the world in general: How do the standard model and general relativity work? How does everything else in our world on other scales, with other fields such as chemistry, biology and sociology, emerge?
What is beyond the standard model of particle physics and general relativity? How to solve quantum gravity?
- the best math and philosophical assumptions for the above
The Langevin equation is a mathematization of chaos and order (written out below) [Langevin equation - Wikipedia](https://en.wikipedia.org/wiki/Langevin_equation?wprov=sfla1)
The world has infinitely nuanced, complex, nonlinear, chaotic, dynamic, etc. social dynamics that somehow arise from the interacting fundamental particles of the universe, and no one has the capacity to pick it all up with complete accuracy from their one perspective, with limited brain modeling ability and limited, varyingly accurate data that comes only from some angles. The problem with way-too-alien patterns would be that the human brain has no way to recognize them, because there is no grounding in the human patterns that the brain is used to recognizing.
" When it comes to AI replacing human jobs, under the assumption that progress will continue at a similar or faster pace: Lately, I think (or do I cope?) that current AI systems are inherently quite different from human intelligence, essentially a different form of intelligence, with some convergence towards human intelligence but not complete convergence, and I don't see enough evidence that the trend is changing sufficiently towards human intelligence. I see more the emergence of differently useful patterns in information processing compared to human information processing, where AI systems are already better in some aspects but totally flop in other aspects (which changes and improves over time), and where they are often also differently specialized. So even if they automate a lot of parts of the human economy, for example software engineering, human intelligence will still be useful for some subset of the job, e.g. where human intelligence is still different from machine intelligence and thus possibly useful, or for error correction, or for giving the AI the tasks, or for more human-like communication with clients; or other jobs will emerge (we already see jobs like "AI pilots" and "AI output verifiers and fixers" starting to arise in some industries, and prompt engineering in the style of writing many pages of concrete specifications for the AIs). "
I believe that eventually any cognitive and physical process a human can do, a machine will be able to do as well at some point in the future, but how long that will take, I have no idea.
My ideal scifi would be about a benevolent superintelligence that cures all diseases, makes all beings happy, figures out how biology, fundamental physics, consciousness, intelligence, etc. work via countless scientific breakthroughs, understands all math, understands everything in philosophy, creates post-scarcity abundance for all, creates infinitely fascinating complex art, and in the process grows infinitely more and more in intelligence and creativity, maximizes morphological freedom, and does no harm. Benevolent superintelligence explosion [[Artificial intelligence x Science]] Yeah, it's a bit of an unrealistic superutopia that I like dreaming about, and that's why it's science fiction.
My current biggest fear in the real world is tech companies centralizing too much power for themselves via AI and other technology and other means (economic, political, ...), so that's partially why I want open source to win and try to support it, while trying to reverse engineer the moat of the tech companies. To democratize the power.
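For reference, the Legg-Hutter universal intelligence measure mentioned in the list above, written from memory (so treat the details as approximate): it scores a policy $\pi$ by its expected reward across all computable environments, weighted towards simple ones.

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^{\pi}$$

where $E$ is the set of computable reward-bearing environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_\mu^{\pi}$ is the expected cumulative reward of policy $\pi$ in $\mu$.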
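And the Langevin equation mentioned above, in its standard free-particle form: a deterministic drag term (the "order") plus a stochastic thermal-noise term (the "chaos"), with the noise strength tied to the drag by the fluctuation-dissipation relation:

$$m \frac{dv}{dt} = -\gamma v + \eta(t), \qquad \langle \eta(t)\, \eta(t') \rangle = 2 \gamma k_B T\, \delta(t - t')$$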
The issue I started to have with the AI safety community is that a big part of it basically wants something like government surveillance of GPUs and training runs to prevent unsafe AI, which can so easily turn into a surveillance dystopia and destroy open source completely; plus big tech is merging with government as well, wanting the fewest restrictions for themselves while restricting others, including open source. It feels like that will make power dynamics even more concentrated instead. A lot of luddites also joined the AI safety movement, I think.
When I look at the current world and at history, a lot of the time when there was too much concentration of power, in any form, in some centralized entity, it started killing freedom for everyone else. And I view AI as a technology that has the potential to give ultimate power: centralized power if it's in the hands of a few, or decentralized power if it's in the hands of the people.
I also no longer really believe the assumption that increasing intelligence automatically leads to rogueness. I think intelligence is independent of that, and also independent of power seeking. For example, we have galaxy-brain scientists who are not at all rogue or power seeking, and they are controlled by, IMO, less intelligent managers and politicians. It depends so much. My favorite definitions of intelligence include things like modelling capability, predictive capability, generalization capability, etc., over some data, which to me are decoupled from agency and from goals of changing the world.
Earthlings will populate the entire galaxy, and the entire universe. We will reverse engineer the source code of the universe. We will overcome the ultimate challenge: the heat death of the universe. https://x.com/pmarca/status/1902724994607570975
The Culture series is a science fiction series centred on The Culture, a utopian, post-scarcity space society of humanoid aliens and advanced superintelligent artificial intelligences living in artificial habitats spread across the Milky Way galaxy. [Culture series - Wikipedia](https://en.wikipedia.org/wiki/Culture_series)
Decentralized open source AI is the only realistic way to prevent a big tech oligopoly on AI in the future, to prevent p(1984). Decentralized open source AI training and inference, like PrimeIntellect or NousResearch, is the only realistic solution in the current political and technocapital climate to prevent concentration of power and to give this power of intelligence to everyone. https://x.com/Scr0nkf1nkle/status/1928212693824967110?t=donYAlUybj7RcNz6YQePUw&s=19
Emerging AI systems are the emerging upgraded cybernetic nervous system of the collective civilizational intelligence, building on top of the internet, which has both centralized and decentralized ecosystems; so AI should also have both strong decentralized and centralized ecosystems, but the decentralized ecosystems need to grow stronger to prevent too much concentration of power by the centralizing nodes!
https://x.com/burny_tech/status/1927836720994865460 [Planetary-Scale Inference: Previewing our Peer-To-Peer Decentralized Inference Stack](https://www.primeintellect.ai/blog/inference)
Politics is systemic evolutionary-pressures engineering.
The fact that there are so many people with completely opposite, very confident perspectives about the current state of AI, and about the future state of AI, is fascinating to me. There's so little consensus. Memetic anarchy. But with some convergent camps that internally reinforce each other, are in strong friction with opposing camps, and polarize even more strongly over time.
If AI will automate everything in a few years, then one of the reasons I'm calmer is that here in the EU, in Czechia, adoption takes infinite time, so I might still be helping some companies integrate all this new AI tech, while at the same time also helping them upgrade from Windows XP to Windows 11 lol. [Reddit - The heart of the internet](https://www.reddit.com/r/singularity/comments/1l2jun4/former_openai_head_of_agi_readiness_by_2027/)
When AI starts to automate everything, if it does, the argument that new jobs will be created depends a lot on:
1) What % of jobs can be automated within 1/2/3/5/10/20/50/100/1000/etc. years. You can eventually get to a level where any new job can be automated instantly by machines.
2) Regulatory bottlenecks, like in healthcare, where in Europe they keep using CDs and often haven't even started using any old-school ML methods.
3) Bottlenecks in adoption, before it diffuses through society, before it gets implemented in our infrastructure, where e.g. government IT infrastructure in Europe is a disaster and digitization is hell to do.
4) "Bullshit jobs" exist even if they are somehow not useful, so will they still continue to exist?
I don't think UBI will happen under current governments, but if it does: [https://youtu.be/kl39KHS07Xc?si=xUbAZ1AOVEHuiOX2](https://youtu.be/kl39KHS07Xc?si=xUbAZ1AOVEHuiOX2) What is the ideal source of funding for UBI: taxes? taxing the rich/corporations more? universal basic taxes? from private entities (like OpenAI wants)? decentralized? making machines pay it? other sources of income besides automation, like the overall profits of corporations?
Sometimes I watch videos of remote tribes in Africa living anarchoprimitivist lifestyles to remind myself that Xitter expecting a technological singularity in a few years probably isn't the only reality that exists.
Comparing the AI revolution to the industrial revolution is fair. But if we get machine systems that can do everything a human can do and more, mentally and physically, which has never happened before in history, that will be a completely novel event.
Europe: tons of regulations everywhere, almost no AI industry in comparison to the US, tons of tech companies escaping to the USA. USA: announcing the biggest single technology investment ever, 0.5 trillion $ into AI, more than the Manhattan project and the Apollo project when adjusted for inflation, doubling energy production, and getting rid of most regulations to accelerate everything at all costs to be a global leader and beat China. I think Europe won't be a future superpower if this doesn't change.
" https://x.com/vitrupo/status/1906715465789124761 A lot of capital allows you to do big transformative things.
But these big transformative things can be for everyone, or for yourself only. Both seem to be happening. So when we compare Europe and America: there are many more big transformative things happening in America that overflow to Europe, but poor people have less overall power and less social welfare. China also races well technologically, but so many things there are steered by the government, and I'm not sure how well people there do when it comes to basic needs; they have a more collectivist culture, but they are also more oppressed on average.
I wonder what the equation is for big technological breakthroughs that also support the poorest class as much as possible overall. Lots of technology automatically supports the lowest class in almost all scenarios, like automating the food chain globally, but other technology can go more in the other direction on average: either benefiting the rich more on average while the poor also get some benefits but relatively fewer, or benefiting only the rich and not the poor at all, or the opposite of that; it's a spectrum.
I wonder what the best way is to govern all sorts of technological progress, to spread the abundance as universally as possible, without completely killing the progress, without overgoverning it by redistributing the (economic, technological) power of the generators of abundance so much that they can't scale their generation of abundance anymore, but also making sure that they don't concentrate abundance just for themselves. And also making sure that government and big tech don't concentrate power; there needs to be decentralized power.
I still think many jobs will persist even in the scenario where everything becomes automatable, and humans will be steering the giant AI industrial machine. And I think adoption is way slower than most of the AI industry thinks; you just have to look at, for example, European state IT infrastructure. And many jobs still exist even if they can be automated. And there are also a lot of bs jobs. "
I wonder if the science cuts by the new USA government will kill some of the AI research needed for the USA to stay the leader in AI. Or not accepting Chinese students and engineers.
It's the age of highly neuroplastic generalist ADHDers?
In the probability distribution of all possible future timelines, when I sometimes think about future timelines where the strongest forms of AI/automation exist soon, I often wonder if the concept of capital itself will even make sense in that world.
Different people do AI applications/engineering/research for combinations of different reasons. Some people do exploratory research out of curiosity, with the need to understand intelligence itself and the structure of reality, which I resonate with the most; then some people want to make trillions of dollars at all costs; some want power; some create interesting things because they are interesting, or helpful things because they help, cool things because they are cool, beautiful things (including artistic ones) because they are beautiful; some want their basic needs met using this technology; some decentralized open source computing/training/inference AI initiatives are trying to break the oligopolistic dominance of big tech that is slowly and surely strengthening; etc. So many incentives!
AI will spawn, or is already spawning, a new breed of neoluddite amish memetic memeplex fork that will bifurcate from the current technocapital system. My most likely prediction is that actual bombing of data centers will happen sometime in the next 50 years.
But the reason will not be fear of uncontrolled superintelligence from StopAI people, but states' fear that a geopolitical rival will have too-good technology, similar to Ukraine and Russia attacking each other's energy sources. In the meantime, StopAI people will do something less extreme, like surveillance or assassinations, as they are growing in numbers. [https://youtu.be/O9P-fjSzJzs?si=2hVAZ4vbyNVWc19N](https://youtu.be/O9P-fjSzJzs?si=2hVAZ4vbyNVWc19N) https://x.com/Plinz/status/1913395850728071487
How I think about the factors that influence why people love or hate AI automation:
- Many people's work is associated with their identity and social status, and they are scared of losing that as their skills might become irrelevant: understandable; we will need better alternative sources of meaning and identity, similarly to those whose work got automated by the industrial revolution
- Many people work just for money to feed their family and are scared of losing their source of income: understandable; new jobs pop up in the short term, but long term we need something like UBI IMO; a society closer to some kind of post-scarcity society would be the ideal outcome to me
- Some people enjoy the work they're doing and don't want to be forced to do something less fun or creative, or to see it lose its uniqueness: understandable; UBI can help so that they can do it for free, or again, better sources of meaning need to be invented, and IMO it's worth killing some uniqueness of some skills for the sake of the progress of intelligence across all domains
- Many people fear the concentration of power of AI companies: understandable, me too; that's IMO why we need to fuel the decentralized open source ecosystem and reverse engineer big tech's moat
- Many people just don't want to change the status quo because it feels comfy and they have their place in it: understandable, but I disagree, as someone who doesn't like many aspects of the status quo and who wants tons of progress in intelligence
- Many people can't wait for how much more of civilization's potential automating intelligence will unlock, such as scientific discovery, technological progress like Dyson spheres and beyond, solving healthcare, understanding the universe, or just understanding intelligence itself, etc., which is the closest to me
- Some people just want to use AI automation to create infinite money at all costs
- Some people want AI systems that help maximize satisfying their basic needs, and/or the basic needs of others
- Some people just want to engineer cool stuff because engineering cool stuff is cool
- Some people want to see alien minds do cool alien stuff beyond human comprehension, but some people are scared of that
- Some people fear rogue AGI taking over the world or creating almost or fully catastrophic extinction-level damage: understandable, but I'm more skeptical of that lately
- Some people fear weaker risks such as lying, scheming, deception
- Some people fear AI tearing the fabric of society apart
- Some people want to "birth" AI mind children into existence, AI friends, similarly to how we birth other humans into existence
- Some people see future AI systems as the next step in evolution: understandable, but I prefer merging with the machines instead
- Some people are worried about AI consciousness and its potential for suffering, depending on what their favorite class of theories of consciousness is
AI 2027 is an interesting forecast.
[AI 2027](https://ai-2027.com/) Personally, I give similar scenarios a non-zero probability, but a relatively quite low one. I think it's too overshot, too fast, too early; it doesn't sufficiently address the messiness and practical limitations of the physical world in engineering, the potential research bottlenecks that can happen, and the adoption rate; it's tilted too much towards doom; it doesn't address the diversity of AI systems; etc. But I think superintelligence is eventually probably possible, just not that soon, and I think it's more likely not to be a takeover, because it won't be in the form of an autonomous, self-preserving-at-all-costs system. [AI 2027](https://ai-2027.com/) grilling of the authors [https://www.youtube.com/watch?v=htOvH12T7mU](https://www.youtube.com/watch?v=htOvH12T7mU) The part where the ASI kills all humans with a bioweapon seems extremely unlikely to me.
It still often fascinates me how these silicon entities often completely fuck up things that are ultra simple to us in various contexts, while they also completely shine, sometimes beyond humans, in different contexts, operating on architectures that are both different from and similar to ours. Like in coding.
" Vibe coding: Let Claude or Gemini or o3 in Cursor or Claude Code CLI iterate on the app while there's still an error or a bug, and while things are different from what you want; send it all to Claude, and tell Claude explicitly everything that comes to mind that is wrong, like when you talk to a human. The result of this constant iteration is that it either gets hammered into some working version, or it drowns in wrong chaotic complexity beyond repair. At the same time, the more you lead him by the hand through exactly what you want him to do for the task, the better; and the more overall context you give him, the better. Tell him as much as you would tell a coworker about the code, and more: give him plans, logs, code, guides and everything. Use other reasoning models for planning sometimes. You can tell him much more precisely which technologies and patterns to use in the code, lead him in structuring the code, let him look for tutorials on the web, for information from libraries, for documentation, use thinking models for making plans first, give him extra tools, etc. You can nudge him to refactor, unify duplicated code, reduce bad complexity when it starts to melt down, send screenshots from the frontend, etc. Roll back to older versions when it starts to drown too much in bad complexity. (A minimal sketch of this iterate-until-green loop is at the end of this section.) The latest Claude that came out two weeks ago also needs to be stopped sometimes so that he doesn't try to build the entire company's infrastructure when you just wanted to add one chart; they gave him too much Adderall. 😄 Vibe coding is like piloting a spaceship, or taming a beast, or drawing with an interesting brush that paints with latent space 🖌️ "
" People a few years ago: AI cannot oneshot a working file of code, so it will never work lmao, it's worthless forever. People today: Okay, it can oneshot whole simple apps with frontend and backend instantly working, but it breaks down for more complex apps, so it will never work lmao, it's worthless forever. People in a few years: Ok, it can oneshot pretty complex apps, but it cannot oneshot for example all of Google's software, so it will never work lmao, it's worthless forever. People in even more years: Okay, it can oneshot even that, but it cannot make fundamental physics breakthroughs, so it will never work lmao, it's worthless forever. People in even more years: Uhhh...
But it still only mimics the One and Only True Superior Sacred Biological Human Intelligence! "
Think of AI as a latent space brush. Nerds are fighting over whether AI can solve the Riemann hypothesis or invent quantum mechanics from scratch, while normies are happy that it can help them solve the simple algebraic equations they struggled with in school.
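A minimal sketch of the vibe-coding iterate-until-green loop described earlier, where call_model() is a hypothetical stub for whatever coding model or CLI you use; the prompt wording and the pytest-based test harness are illustrative assumptions, not any specific tool's API.

```python
import subprocess

def call_model(prompt: str) -> None:
    """Hypothetical wrapper around your coding model (Claude, Gemini, o3, ...).
    A real setup would shell out to a CLI or hit an API that edits the files."""
    print("would send to model:\n", prompt)

def run_tests() -> tuple[bool, str]:
    # Run the project's test suite and capture everything the model will need as context.
    proc = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def vibe_code(task: str, max_iters: int = 10) -> bool:
    """Iterate: ask the model to edit code, rerun tests, feed failures back verbatim."""
    feedback = ""
    for _ in range(max_iters):
        call_model(
            f"Task: {task}\n"
            f"Previous test output (fix everything it complains about):\n{feedback}\n"
            "Also: follow the project's existing patterns and refactor duplicated code."
        )
        green, feedback = run_tests()
        if green:
            return True            # hammered into a working version
    return False                   # drowned in wrong chaotic complexity; roll back
```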