[https://youtu.be/g3IZ-aB4pLE](https://youtu.be/g3IZ-aB4pLE)
Philosophy, science, AI, intelligence, math, physics, brain, geopolitics, future of AI and humanity
1. Synthesizing philosophical assumptions and scientific models across all domains and scales to create a comprehensive predictive framework.
2. Adopting an instrumentalist perspective where the productivity of models is prioritized over specific ontological assumptions.
3. Embracing multiple, parallel truths from different fields and scales (e.g., physics, chemistry, biology, neuroscience) as long as they provide predictive power.
4. Exploring isomorphisms and symmetries across mathematical frameworks and domains to create a unified knowledge structure.
5. Considering various definitions and aspects of intelligence, including adaptability, representation compression, generalization, and self-regulation.
6. Discussing the current state and future of AI, including:
- The surprising generalization capabilities of large language models
- The need for integrating various modalities and capabilities (e.g., planning, search, causal reasoning)
- The potential for developing more coherent and stable symbolic manipulations
7. Emphasizing the importance of AI interpretability for setting rules and ensuring steerability.
8. Addressing existential risks such as climate change, misaligned superintelligence, and surveillance dystopias.
9. Expressing an optimistic outlook on humanity's ability to navigate challenges and create a positive future for all using advanced technologies.
10. Envisioning a future with diverse intelligent systems, including biological, artificial, and hybrid forms.
11. Advocating for the use of technology to create abundance and improve lives for all sentient beings.
12. Considering the long-term future of intelligence spreading throughout the universe and beyond the heat death.
13. Balancing the trade-offs between open-sourcing AI systems and potential risks.
14. Discussing the potential for reprogramming genomes and creating novel biological systems.
15. Emphasizing the importance of collective intelligence and the interconnectedness of human and artificial systems.
The first cyborg tech will be hackable: a 24/7 rickroll straight into your brain through Neuralink. [Imgur: The magic of the Internet](https://imgur.com/JtgzkYw)
Better intelligence can make better decisions than humans. But steerability is also important. [Imgur: The magic of the Internet](https://imgur.com/4E8Tll1)
Real-time generated worlds in virtual reality are imminent, soon controlled by thoughts alone.
WonderWorld: a novel framework for interactive 3D scene generation that enables users to interactively specify scene contents and layout and see the created scenes in low latency. https://x.com/burny_tech/status/1836107994355908791
"I didn’t expect there to be much time where there’s two totally different roughly intelligence matched (winning on different dimensions) species but that seems pretty clearly where we’re at?"
The ecosystem of different intelligences will become so much more diverse!
https://x.com/burny_tech/status/1836106678153970111
People criticizing AI often be like:
>someone used a tool badly
>it's the tool that's at fault and inherently useless in all other possible ways of using it in all possible usecases
"God grant me the wave equation to change the things I need to change, the diffusion equation to equanimize that which I cannot change, and the Hamiltonian to know the difference." https://x.com/algekalipso/status/1836111565969592726
And the Schrödinger equation to embrace life's uncertainties! :D
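For reference, the equations the joke names, in their standard forms (wave, diffusion, and time-dependent Schrödinger):

```latex
\underbrace{\partial_t^2 u = c^2 \nabla^2 u}_{\text{wave equation}}
\qquad
\underbrace{\partial_t u = D \nabla^2 u}_{\text{diffusion equation}}
\qquad
\underbrace{i\hbar\,\partial_t \psi = \hat{H}\psi}_{\text{Schr\"odinger equation}}
```

Note the diffusion equation is the wave equation's "equanimizing" cousin: first order in time, so it smooths out differences instead of propagating them.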
Quantum Darwinism is the ultimate unifier of the classical and the quantum world!
Hyperflourishia! Hyperunderstandia! Hyperintelligencia! Hyperwellbeingia! Hyperlongevitia! Hypercuriousia! Hyperomnia! Effective Omni!
I want to see AI solve novel interesting problems in physics, biology and mathematics
I want to understand as many mathematical patterns governing this universe, organisms, and our technology, as possible
If the scaling-hypothesis believers are right, as they have been to a certain degree so far, then, as Leopold Aschenbrenner predicts, superintelligence is coming soon. If they're wrong, the hundreds of billions, and potentially trillions, of dollars invested could be viewed as one of the biggest failed resource bets in human history. Microsoft, for example, reportedly wants to build a $100 billion supercomputer. OpenAI's o1 has now shown new inference-time scaling laws with a step change in performance, with apparently no ceiling reached there so far. So we will see how far this goes. [https://youtu.be/QCcJtTBvSKk](https://youtu.be/QCcJtTBvSKk)
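For intuition, the "straight lines on log graphs" framing of scaling laws can be sketched numerically: a power law loss(C) = a·C^(−b) is a straight line in log-log space, so it can be fit and extrapolated linearly there. All constants and data below are invented for illustration, not real benchmark numbers.

```python
import numpy as np

# Hypothetical training-compute budgets (FLOPs) and losses lying
# exactly on a made-up power law loss(C) = 10 * C**(-0.05).
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss = 10.0 * compute ** -0.05

# Fit log10(loss) = log10(a) - b * log10(C) as a straight line.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)
a, b = 10.0 ** intercept, -slope

# Extrapolate one order of magnitude beyond the observed compute.
predicted_loss = a * 1e23 ** (-b)
```

The bet, in this cartoon, is that the line keeps going; the open empirical question is whether (and where) it bends.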
[https://www.youtube.com/watch?v=kgMFnfB5E_A](https://www.youtube.com/watch?v=kgMFnfB5E_A)
Every neuron, like every cell, is an individual reinforcement learning agent that tries to survive by cooperating with its environment. The main difference between neurons and other cells is this ability to communicate over long distances rapidly. Other cells typically only communicate with adjacent cells. A neuron is essentially a "telegraph cell" that can send messages over very long distances quickly within an organism. The neuron's ability to process information quickly over long distances allows for the development of a model of the world that can be updated at higher rates than the rest of the cellular system.
"
I'm hypercurious about the physics of brain, machines, intelligence, and our reality on the fundamental level!
I want to understand the practical physics governing our biology and brain and the machines we construct (computers, robots), like classical mechanics, quantum mechanics, statistical mechanics, electromagnetism, fluid mechanics, control theory, information theory, dynamical systems theory, systems science, neural network theory, etc.... I want to understand the physics of intelligence using all sorts of methods!
I also want to understand our best models of our universe, like the Standard model gauge quantum field theory and attempts at quantum gravity and other unsolved problems in physics, from empirical experimentalist lens and from theoretical lens. I want to understand the physics equations and attempts at rigorous axiomatic mathematical foundations like axiomatic quantum field theory.
In general I'm curious about physics, math, brain, intelligence, philosophy, AI, futurology and many other special interests!
"
"
I think nobody actually knows right now whether the scaling-hypothesis believers are right, as it's not yet a fully tested, empirically falsifiable question, though recent evidence (both empirical and theoretical results) seems to update towards an increased probability that the scaling hypothesis succeeds.
I started giving higher probability to the AI-bubble-crashing scenario a few months ago. But in the last few months came AlphaProof (which got a silver medal in the International Math Olympiad), o1 (which showed new inference-time compute scaling laws with a step change in benchmarks and capabilities: gold-medal-level performance in the International Olympiad in Informatics, strong math olympiad scores, nontrivial contributions to frontier AGI research and engineering, great PhD-level benchmarks, etc., with results not just from OpenAI, plus lots of anecdotal evidence), an updated AlphaFold (predicting all sorts of life's molecules), AlphaProteo (designing new proteins), an update to FermiNet (progress in quantum chemistry), and, some time ago, FunSearch (discoveries in math), among others. So I updated my beliefs towards the bubble probably not really popping right now, something people have been predicting for over a year that hasn't really materialized yet.
Also, I'm personally wondering more about the actual technical capabilities of AI technology, in terms of how good it is for, say, science, math, engineering, healthcare, automation, etc., and not so much about how much money it can squeeze out of the capitalist system. And I don't mean just LLMs and GenAI, which are a big part of the AI field, but the whole AI field, which is more than that.
I think AI might or might not be overestimated in the short term, but I think it is heavily underestimated in the long term if you extrapolate the overall progress in AI and technology in general.
But if advancements in the AI field don't come fast enough, I think the current short-term AI boom will run into issues, because of way too early, overly inflated expectations. Then I think AI will quickly boom again a few years later, when new systems get released that are scaled by orders of magnitude, algorithmically improved, built with smarter data engineering or better hardware, or all of the above, or something else. Similarly to how it seems to be booming again right now with the recent step-change AI systems I just mentioned, driven by new algorithmic, data, and other advances.
I'm a big optimist on AI timelines when I extrapolate the progress, and I think a lot of the current inflated expectations will turn out true within a few years anyway. But many of the expectations are too early: some things won't happen in a year, only in more years. Also, some exponentials in technological progress are sampled too discretely, as we've seen with many of the recent step changes in AI capabilities.
I think this boom-and-deflation cycle will happen again and again, with booms and deflations closer and closer to each other: faster, more compressed Gartner hype cycles over time. A global exponential made of closer and closer local sigmoids. This is how I see the current technological singularity.
*pulls out Leopold Aschenbrenner's "just look at the fucking scaling laws" line and Ray Kurzweil's straight lines on log graphs of the progress of compute* :D
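The "global exponential made of local sigmoids" picture can be sketched numerically: stack logistic curves whose midpoints get closer together and whose heights grow geometrically, and the sum traces an exponential-looking envelope out of individually saturating hype cycles. All constants below are illustrative, not fit to any data.

```python
import numpy as np

def sigmoid(t, midpoint, height, width=1.0):
    """A single logistic 'hype cycle': slow start, rapid rise, saturation."""
    return height / (1.0 + np.exp(-(t - midpoint) / width))

t = np.linspace(0, 20, 500)

# Successive technology waves: midpoints compress over time,
# each wave contributes geometrically more capability than the last.
midpoints = [4, 8, 11, 13.5, 15.5, 17, 18]
total = sum(sigmoid(t, m, 2.0 ** i) for i, m in enumerate(midpoints))
```

Plotted on a log axis, `total` looks roughly like a straight line punctuated by local plateaus: each individual sigmoid deflates into saturation, while the envelope keeps climbing.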
https://fxtwitter.com/dwarkesh_sp/status/1739654775816462796
[Will scaling work? - by Dwarkesh Patel - Dwarkesh Podcast](https://www.dwarkeshpatel.com/p/will-scaling-work)
"
"
I want to see AI applied much more in science, technology, engineering, math, healthcare, altruistic use cases, etc. I want to see it as a tool that generates abundance for everyone. I want the technology to build a better future for all. I want the technology to fight poverty and other world problems and risks. I want the research to help us understand the nature of intelligence. I want the technology to empower all humans who don't want to see the world burn and who are not dictators. I want its power to be used for good. I want the power to not be concentrated. I want to see it developed safely and ethically, in a steerable way. I want people to be compensated properly. I'm trying to push for that and to help work towards these goals.
I think AI is already technologically disruptive in various industries. AI is everywhere right now, and there's more and more of it, not just GenAI. AI for foundational research and engineering in science and math supercharges all sorts of engineering and technology across the board. More and more programmers are using some sort of coding copilot, which is useful, and most of them are not using SotA systems like Claude, Cursor.sh, Perplexity, Replit, etc., often because they don't know about them or because of the points above. Lots of code-monkey work, unit testing, simple web dev, etc. is being automated. AI is contributing to nontrivial frontier AI research and development. It's used to design better chips and robots. And lots of translators and certain types of writers are rip. Many companies squeeze image/video/text generation for easy profit at all costs, for example in PR or in the entertainment and art industries, but IMO that has recently been giving the technology a bad reputation, as it's often profit over quality and ethics, which sucks; the technology can be used in much better ways there, with more quality and ethics, but the incentives have to be aligned better. Call centers and customer service are being automated (sometimes with better, sometimes worse quality). Autonomous vehicles are now a reality; robot dogs, automated drones, and other machines are already used in surveillance, defence, and wars right now, which I don't want, but some are using them for good and useful things too: all sorts of specialized robotics for automating resource and technology production and for household use cases is in its glory, and humanoid robotics is just emerging. Planning systems are also big in defence and wars (I don't want that). Healthcare is supercharged with, for example, disease classification from images (I love AI for healthcare!).
Financial markets are ML bots fighting each other; recommender systems are everywhere in social media (often useful, but also often a curse); semantic search is everywhere (often useful); visual recognition and editing of photos is widely used (often useful); plus optimization of supply chains, better techniques for agriculture (we need more there), automated threat detection in cybersecurity, and optimizations in the energy sector. AI-powered scams etc. also exist, and I want to regulate those harmful use cases. This comes with a lot of dual-use technologies.
And I think the big factors limiting AI's impact inside industry, outside of academia, and beyond being superhuman in various games like Go, Chess, Dota, Poker, etc., are:
1) The bureaucracy of integrating the technology is slow compared to the progress of the technology
2) People are learning to use the technology very slowly
3) Issues around privacy, copyright, ethics in some contexts, and other legal issues
4) Engineering around adapting the foundational systems to specific use cases is slower than the progress of the foundational systems
...
AI can be used for bad, good, and neutral things. Let's maximize the good use cases!
"
Some people want you to think that seeing possible optimistic trajectories of the future is a crime
"
I agree.
I think, from a practical perspective, you can also be more optimistic about what it can already do in both research and industry across the various applications. I think all the capabilities it already has, the places where it's already used in practice for good and bad, the potential it holds, etc. should all be covered, so I appreciate your perspective.
My perspective comes mostly from collecting a lot of research and SotA reports, playing with the systems daily, and practical work in the industry.
But I also think that the more negative news about its shortcomings and bad use cases is currently much more present in public discourse, so I'm trying to fill the hole with the (IMO realistic) optimistic parts about what it's already doing, so that people are aware of those as well.
For example, in healthcare you probably mean some very specific AI systems, and not the various AI systems I meant, mentioned for example here. These show its practical progress, with concrete examples of results in healthcare, and also its limitations:
[Review of AlphaFold 3: Transformative Advances in Drug Design and Therapeutics - PMC](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11292590/)
[Google DeepMind and Isomorphic Labs introduce AlphaFold 3 AI model](https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/)
https://www.fiercehealthcare.com/ai-and-machine-learning/google-scales-generative-ai-healthcare-advances-assist-clinicians-and-give
https://www.mdpi.com/2076-3417/13/13/7479
[AI in Healthcare: Uses, Examples & Benefits | Built In](https://builtin.com/artificial-intelligence/artificial-intelligence-healthcare)
And when it comes to the future of AI, I assign nonzero probability to your predictions about the technology not really progressing in the various applications. But based on what I've observed in research recently, which I mentioned in my previous messages, and what I see in industry personally, my prediction about the most likely scenario is currently different from yours and much more optimistic, mainly because of the recent step-change breakthroughs in AI for STEM. That's always subject to change with new evidence.
And when it comes to the future, I personally think we should think about the present, the short-term future, and the long-term future.
Later we can deconstruct some implicit assumptions and definitions that we use in our messages and language to see the cruxes of our messages more clearly. :D
"
"
*can't make myself do chores*
Sometimes in that moment, extra stimulating music helps me, e.g. [https://youtu.be/tyWGDf2SFvo?si=SDcjx547j7SmqoMA&t=54](https://youtu.be/tyWGDf2SFvo?si=SDcjx547j7SmqoMA&t=54)
Or I throw my phone, which eats my functioning energy, to the other side of the room and meditate to regenerate, and then it goes better on its own.
Or I start doing some weird half-exercise, which switches on an exercising, motivated mood that I then escalate.
Or I build a narrative in my head that if I don't get up, the world will burn, or I will.
Or a narrative that if I don't get up, I won't get a medal,
and I won't get a point for a checked-off item on my to-do list.
And I usually put on some stimulating podcast with the chores (together with the music) (and then some Wikipedia rabbit hole chops up my attention, rip XD).
Or you can put something stimulating next to your bed: fruit, sugar, caffeine, chocolate, L-tyrosine, ADHD meds, ...
Or I break the chore I have to do into several small tasks and plan out every tiny detail, to reduce overwhelm and uncertainty.
Or I tell myself I'll do it for just 5 minutes a day, and then you think, "well, now that I've started..."
Or I give myself a reward for doing it (a video, math, food, a snack, sweets, ...).
"
Steer the collective memeplex towards realistic cautious optimism
Follow your curiosity
The easiest way to learn mathematics, physics, AI, coding is to do mathematics, physics, AI, coding
Try to read raw mathematics from the original source, instead of popular science
Studying physics so that I can appreciate nature's beauty in its purest form
All of physics can be derived from mathematical axioms and empirical observations and their generalizations
Does the consciousness equation fundamentally reside on a substrate-independent emergent layer, like software, or on a fundamental ontological layer, like the fields of physics? Or are those two different lenses on the same conscious system at different levels?
[Bartoš otočil, končí v čele Pirátů. Rezignovalo i celé vedení — Deník N](https://denikn.cz/1531403/bartos-otocil-konci-v-cele-piratu-rezignovalo-i-cele-vedeni/) I guess I live in pretty solid bubbles, considering how many progressive, liberal, pro-technology, humanitarian, center-left-minded people are around me, and how they almost don't exist in the government.
Does human level AGI have to be Turing complete?
Biological organisms with brains aren't Turing complete, since they don't have unbounded memory. Unless, perhaps, you could keep adding compatible functioning neurons through neurotechnology or something similar. External tools and technologies, like written notes, books, computers, and other external memory, also kind of count a bit in some ways. As a result, we wouldn't need Turing completeness to implement human-like cognitive functions in a machine.
But maybe the nature of (Fristonian) collective intelligence, emergence, and self-organization, or maybe quantum effects (if Penrose or others are actually right), etc., which might or might not be considered to go beyond classical Turing machines, is also relevant for properly replicating humanlike cognition.
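The bounded-memory point above has a simple formal core: a machine with n bits of memory has at most 2^n configurations, so any deterministic update rule on it must eventually revisit a configuration and cycle, which makes it a finite-state machine rather than a Turing machine with an unbounded tape. A minimal sketch (the update rule is a made-up toy, not any real model of a neuron or brain):

```python
def run_until_cycle(step, state, max_bits):
    """Iterate a deterministic update on bounded memory until a
    configuration repeats; return the length of the resulting cycle.
    With max_bits of memory there are at most 2**max_bits states,
    so a repeat (and hence the cycle) is guaranteed."""
    assert state < 2 ** max_bits
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state) % (2 ** max_bits)  # memory bound enforced here
        t += 1
    return t - seen[state]  # cycle length, at most 2**max_bits

# Hypothetical toy update rule on 8 bits of "memory"
cycle_len = run_until_cycle(lambda s: 5 * s + 3, state=1, max_bits=8)
```

For example, the update `s -> s + 1` on 4 bits cycles through all 16 states; any richer rule is still trapped in the same finite configuration space.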