AI risk debate: Eliezer Yudkowsky + Scott Aaronson + Liv Boeree + Joscha Bach [AI and the end of Humanity | Eliezer Yudkowsky, Scott Aaronson, Liv Boeree, Joscha Bach](https://iai.tv/video/ai-and-the-end-of-humanity?_auid=2020)
https://www.reddit.com/r/singularity/comments/1dguabl/what_is_the_end_goal/
You can just go and learn string theory from Stanford and MIT lectures online, so why aren't you doing that?
[The Enduring Mystery of How Water Freezes | Quanta Magazine](https://www.quantamagazine.org/the-enduring-mystery-of-how-water-freezes-20240617/)
[Valeria de Paiva: AI tools for Better Math - YouTube](https://www.youtube.com/watch?v=C7NBGlJb2DQ)
[[2406.11717] Refusal in Language Models Is Mediated by a Single Direction](https://arxiv.org/abs/2406.11717)
[I Edited My DNA On A Secret Island (To Live Forever) - YouTube](https://youtu.be/bax8to_s07Q?si=hjQ4bmKah4OB6u3K)
I think it's still useful to use different labels: you can be a transhumanist (for upgrading humans, for example), a singularitarian (believing the singularity is close), e/acc (in the sense of accelerating the singularity by your actions), and also pro-human (believing the path to the singularity, and the singularity itself, will be great for humans, and believing in low AI risk) by whatever method (anarcho-capitalist assumptions, for example), and optionally label this whole cluster of beliefs as a form of e/acc.
I sometimes tend to define accelerationism very generally as accelerating some metric using some method, which technically makes lots of other people, for example people doing technical AI alignment, accelerationists XD
Politically centrist, risk-aware effective accelerationism.
Technical AI controllability accelerationism.
The fact that current AI systems are in many contexts better at emotional nuance than at logical reasoning is something I think no one would have predicted years ago.
What's very close to me is transhumanist, politically centrist, risk-aware effective accelerationism focused on building steering wheels for AI systems
acceleration, but also on accelerating the diversity of AIs and, in general, on safely accelerating the singularity through science and technology; on effective-altruist giving of the fruits of abundance and technology to everyone; on scaling sentience to the stars and the cosmos; on scaling great sentient experiences; on preventing existential and suffering risks while building grand utopian/protopian futures; and on AI as one of the main tools.
I wanna disappear for 1000 years and learn all the available lectures and books on the mathematics of physics and artificial intelligence.
d/acc
I'm not losing trust just because all these models apparently better than ChatGPT or Claude or Gemini constantly come out; then you use them in practice and... nope. https://x.com/burny_tech/status/1803101379025572321?t=m1ukj-cy2dz6BmlyLXImKQ&s=19
I think about this often when someone says "superintelligence will never be possible". From The Universe of Minds by Roman V. Yampolskiy: [[1410.0369] The Universe of Minds](https://arxiv.org/abs/1410.0369)
[[2406.11630] An approach to non-equilibrium statistical physics using variational Bayesian inference](https://arxiv.org/abs/2406.11630)
https://x.com/mjdramstead/status/1803180987138101493 "I wish introductory mathematics could be taught as systematic first principles of thinking and perception. It should start with identifying the meaning of substrate, object, pattern, interpretation, representation, data, type, transformation, relation, number, model and meaning."
https://x.com/Plinz/status/1802991369012867104
https://www.reuters.com/markets/us/nvidia-becomes-worlds-most-valuable-company-2024-06-18/
[[2402.09090] Software in the natural world: A computational approach to hierarchical emergence](https://arxiv.org/abs/2402.09090)
[China could start building world’s biggest particle collider in 2027](https://www.nature.com/articles/d41586-024-02005-4)
[AI took their jobs.
Now they get paid to make it sound human](https://www.bbc.com/future/article/20240612-the-people-making-ai-sound-more-human)
[[2406.11035] Scaling Synthetic Logical Reasoning Datasets with Context-Sensitive Declarative Grammars](https://arxiv.org/abs/2406.11035)
[Runway Research | Introducing Gen-3 Alpha: A New Frontier for Video Generation](https://runwayml.com/blog/introducing-gen-3-alpha/)
Generalization: [Chollet's ARC Challenge + Current Winners - YouTube](https://www.youtube.com/watch?v=jSAT_RuJ_Cg)
[Deriving 3D Rigid Body Physics and implementing it in C/C++ (with intuitions) - YouTube](https://www.youtube.com/watch?v=4r_EvmPKOvY)
[[1002.2284] Markets are efficient if and only if P = NP](https://arxiv.org/abs/1002.2284)
[What Enlightenment Does to Your Brain - YouTube](https://www.youtube.com/watch?v=qwQrwPhK06I)
https://phys.org/news/2024-06-physicists-optical-analog-krmn-vortex.html
[#13 - Microtubules are Biological Computers: searching for the mind of a cell - YouTube](https://www.youtube.com/watch?v=tHhAx3dWyTA)
Top AI research papers last week (June 10th–17th): https://x.com/TheAITimeline/status/1802842400101900596?t=dj55HzXXhqF_esyWnCZGEQ&s=19
You can't extinguish everything that's burning in parallel, but you can pick something very fundamental.
Effective Altruists, Effective Accelerationists, AI notkilleveryoneists, Effective Whatevers, LLM/deep learning believers, LLM/deep learning sceptics, scaling maximalists, neurosymbolics proponents, Bayesians, Yann LeCun's architecture proponents, self-organizers, Transhumanists, Extropians, Singularitarians, Cosmists, Rationalists, Longtermists, Postrationalists, Team Consciousness, TPOT,... WE ALL WANT THE AI REVOLUTION TO BE THE BEST FOR ALL! LET'S MAKE THAT A REALITY TOGETHER INSTEAD OF POLARIZING AND TRIBALIZING! WE ALL SHARE MORE IN COMMON THAN WE TEND TO THINK!
The space of possible information-processing systems is actually much bigger than we tend to think.
Does the brain compute many different attention heads in parallel?
I kind of feel bad every time an LLM starts apologizing 1000 times after making a mistake.
Fully illustrated toy calculation of 1 transformer layer, transformer diagram: https://x.com/zmkzmkz/status/1798421634883490274
The fact that this AI architecture can achieve so many things so relatively well is constantly extremely mind-boggling to me.
You can't tell me what to do.
Ilya's company [Safe Superintelligence Inc.](https://ssi.inc/): safe superintelligence (SSI), the ultimate goal.
If you mean effective accelerationists and their takes on rogue superintelligent AI risk, actually talk to various people identifying as effective accelerationists and you will see that they are fighting for exactly the opposite, not human extinction! (Though exceptions are ofc everywhere, in all camps.) They will often tell you that they want humanity to grow and populate the universe, like cosmists, and that they assign low probability to AI x-risk scenarios because of different forecasting priors than people who assign a bigger probability to AI x-risk (plus arguments about the technology most likely not being mature enough this soon for a lot of x-risk scenarios, if those are even probable, and about power centralization and regulatory capture being bigger risks, and so on). I see rogue superintelligence risk probability somewhere in the middle because of arguments from both sides!
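The toy transformer-layer calculation linked above is worth doing by hand at least once. As a minimal sketch of just the attention part (toy numpy with assumed sizes; output projection, residuals, and the MLP half of the layer are omitted), the heads really are independent matrix products that can be computed in parallel:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, n_heads):
    """Toy multi-head self-attention: split the model dimension into
    heads, run scaled dot-product attention per head (all heads are
    independent, so this is one batched matmul), then concatenate.
    The usual output projection W_O is omitted for brevity."""
    T, d = X.shape
    dh = d // n_heads
    Q = (X @ Wq).reshape(T, n_heads, dh).transpose(1, 0, 2)   # (H, T, dh)
    K = (X @ Wk).reshape(T, n_heads, dh).transpose(1, 0, 2)
    V = (X @ Wv).reshape(T, n_heads, dh).transpose(1, 0, 2)
    scores = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))  # (H, T, T)
    out = scores @ V                                          # (H, T, dh)
    return out.transpose(1, 0, 2).reshape(T, d)               # concat heads

rng = np.random.default_rng(0)
T, d, H = 4, 8, 2                       # toy sizes: 4 tokens, dim 8, 2 heads
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Y = multi_head_attention(X, Wq, Wk, Wv, H)
print(Y.shape)  # (4, 8)
```

The same parallelism question transfers to the brain analogy above: nothing in the math forces the heads to be computed sequentially.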
[GitHub - google-deepmind/dangerous-capability-evaluations](https://github.com/google-deepmind/dangerous-capability-evaluations)
[[2406.12824] From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries](https://arxiv.org/abs/2406.12824)
https://x.com/omarsar0/status/1803254134289895555?s=46&t=pNiPoM95FfYEFMSA5jsCMA
[[2403.08319] Knowledge Conflicts for LLMs: A Survey](https://arxiv.org/abs/2403.08319)
"I DONT FUCKING UNDERSTAND" is a great prompt for getting LLMs to suddenly give a pretty clear explanation lol
https://x.com/bryan_johnson/status/1803592074572472395?t=v_mR6HIvMovdJ-uTgQC-3A&s=19 "99% of those living can't see the future when it arrives. In any other time in history, it's easy to YOLO your way to justify debauchery, self destruction and indulgent behaviors as the virtuous way to live. On the eve of giving birth to superintelligence, we no longer know how long and how well we can live. A different era is now present whether it can be seen or not."
https://x.com/bryan_johnson/status/1803592074572472395?t=oxRTMtcJRtCy2wIVsxI3Yw&s=19 "The question is not whether health habits of today will punch through the 120 ceiling, rather it's if we can create the new systems and norms to systematically eliminate the current Die culture which creates disease, misery, and impairment. Don't Die will supersede Die simply because we can."
Let's go! Not dying is within "our lifetime" with longevity escape velocity! Don't die!
[[2402.03507] Neural networks for abstraction and reasoning: Towards broad generalization in machines](https://arxiv.org/abs/2402.03507)
[Syllabus — CPSC 330 Applied Machine Learning 2023W1](https://ubc-cs.github.io/cpsc330-2023W1/syllabus.html)
[Jared Kaplan - Human level AI by 2030?
(Technical talk at physics conference Strings 2024) - YouTube](https://www.youtube.com/watch?v=4a5lzYreMME)
AGSMUIHIIE (Artificial General Super Mega Ultra Hyper Infinite Intelligence Enlightenment)
People saying they can't find any use case for LLMs in their life have a massive skill issue.
People thinking current AI systems are as good as they will get in their lifetime have a massive imagination skill issue.
[[2406.12843] Can Go AIs be adversarially robust?](https://arxiv.org/abs/2406.12843) https://x.com/farairesearch/status/1803448946108342286
Implementing reality from scratch.
[[0903.0340] Physics, Topology, Logic and Computation: A Rosetta Stone](https://arxiv.org/abs/0903.0340)
LLM tree search: https://x.com/kohjingyu/status/1803604487216701653?t=irpH5hIc4nB212k7KInvNA&s=19
Claude is again and again smarter than ChatGPT for many engineering and explaining tasks I give them. Benchmarks lie.
[Why Machines Learn by Anil Ananthaswamy: 9780593185742 | PenguinRandomHouse.com: Books](https://www.penguinrandomhouse.com/books/677608/why-machines-learn-by-anil-ananthaswamy/)
[Reconstructing higher-order interactions in coupled dynamical systems | Nature Communications](https://www.nature.com/articles/s41467-024-49278-x)
[A Walkthrough of A Mathematical Framework for Transformer Circuits - YouTube](https://youtu.be/KV5gbOmHbjU?si=hiMsnae3Q4nVQk8B)
I thought about unifying attention mechanisms and activation functions, but someone already did it: [[2007.07729] Attention as Activation](https://arxiv.org/abs/2007.07729)
Semantic search for mathematical models of intelligence on arXiv: [arXiv Xplorer](https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F1911.01547)
How to make sure that everyone gets the fruits of technology?
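The core idea of the Attention as Activation paper can be sketched roughly like this. This is a hedged toy, not the paper's exact ATAC module (which is built from point-wise convolutions over feature maps); the fully-connected gate network here is an illustrative stand-in:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_as_activation(x, W1, W2):
    """Toy sketch of "attention as activation" (cf. arXiv:2007.07729):
    instead of a fixed nonlinearity, gate each unit with a small
    attention-like network computed from the same input,
    y = x * sigmoid(W2 @ relu(W1 @ x)). Collapsing the gate network
    to the identity recovers Swish/SiLU, x * sigmoid(x)."""
    gate = sigmoid(W2 @ np.maximum(W1 @ x, 0.0))  # gate values in (0, 1)
    return x * gate

rng = np.random.default_rng(0)
d, dh = 8, 4                       # illustrative sizes, not from the paper
x = rng.standard_normal(d)
W1 = rng.standard_normal((dh, d))  # gate network weights (toy)
W2 = rng.standard_normal((d, dh))
y = attention_as_activation(x, W1, W2)
print(y.shape)  # (8,)
```

Because the gate stays in (0, 1), the unit is always contractive element-wise, like a learned, input-dependent version of a standard gated activation.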
[Open Model Bonanza, Private Benchmarks for Fairer Tests, More Interactive Music Generation, Diffusion + GAN](https://www.deeplearning.ai/the-batch/issue-254/)
https://x.com/yacineMTB/status/1803918663684362651 "the future does not belong to AI researchers. It belongs to software engineers." Actually, it belongs to CEOs of corporations, or governments, or whoever else has the biggest fundamental influence over how technology and other power is used.
[Donald Hoffman Meets Stephen Wolfram For the First Time on TOE - YouTube](https://www.youtube.com/watch?v=1m7bXNH8gEM)
[[2109.13916] Unsolved Problems in ML Safety](https://arxiv.org/abs/2109.13916)
Every time someone makes a claim about the intelligence of a system, the first thing I want from them is to define intelligence mathematically, in engineering terms, together with how to measure it, because otherwise it's just semantics and vibes disconnected from physical reality that go nowhere.
All structures in reality are kind of statistically somewhat semistable, just stable enough for everything not to instantly collapse into void (second law of thermodynamics or false vacuum, pls don't ruin it).
IQ tests suffer from the same memorization issues as AI benchmarks.
[claudette](https://claudette.answer.ai/)
[Entropy | Free Full-Text | Evolutionary Implications of Self-Assembling Cybernetic Materials with Collective Problem-Solving Intelligence at Multiple Scales](https://www.mdpi.com/1099-4300/26/7/532)
[Attention Output SAEs Improve Circuit Analysis — AI Alignment Forum](https://www.alignmentforum.org/posts/EGvtgB7ctifzxZg6v/attention-output-saes-improve-circuit-analysis)
The middle way between growing AI models (increasing their degrees of freedom, giving them generalist learning algorithms, letting them be free to explore and find their own solutions, and so on) vs designing AI models (decreasing their degrees of freedom; controlling, hardcoding or softcoding architectures with inductive biases, various structures and algorithms, various priors).
It's all just a
thermodynamics wrapper.
The physics of AI/brain dynamics doing mathematics: [Reconstructing higher-order interactions in coupled dynamical systems | Nature Communications](https://www.nature.com/articles/s41467-024-49278-x)
Math books: https://x.com/FrnkNlsn/status/1803805720963616995
Biology is the most interesting next frontier for AI, beyond video and embodied intelligence.
Didn't expect Trump to go e/acc: Donald Trump says AI is causing an unprecedented demand for electricity which will require new energy solutions, including nuclear energy. https://x.com/tsarnick/status/1803992844644028894
Competing incentives inside governments, corporations, etc. are everywhere, in all forms, and I'm not sure what to think about this.
Mira Murati says OpenAI gives the government early access to new AI models and they have been advocating for more regulation of frontier models. https://x.com/tsarnick/status/1803893981513994693
[[2406.14283] Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning](https://arxiv.org/abs/2406.14283)
https://www.reddit.com/r/math/comments/fc1m18/where_do_i_find_the_latest_mathematical_news/
https://www.reddit.com/r/AskScienceDiscussion/comments/p1ehnn/is_there_a_list_of_places_to_get_the_most/
https://www.reddit.com/r/Physics/comments/2yjwfs/what_are_some_good_physics_news_sites/
[Sentience Institute | Sentience Institute](https://www.sentienceinstitute.org/)
[String Theory Unravels New Pi Formula: A Quantum Leap in Mathematics](https://scitechdaily.com/string-theory-unravels-new-pi-formula-a-quantum-leap-in-mathematics/)
[[2406.10743] Occam's Razor for Self Supervised Learning: What is Sufficient to Learn Good Representations?](https://arxiv.org/abs/2406.10743)
[[2406.06387] Time-tronics: from temporal printed circuit board to quantum computer](https://arxiv.org/abs/2406.06387)
Steering features in images: [Feature Lab](https://www.featurelab.xyz/)
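At its core, feature steering of the kind behind the Feature Lab demo (and the flip side of the refusal-direction ablation paper linked earlier) reduces to adding a scaled feature direction to a model's activations. A toy numpy sketch with placeholder data; real tools extract `direction` from a trained model, e.g. with sparse autoencoders:

```python
import numpy as np

def steer(activations, direction, strength=4.0):
    """Toy activation steering: nudge every activation vector along a
    chosen unit feature direction by `strength`. A hedged sketch of
    the general technique, not any particular tool's implementation."""
    r_hat = direction / np.linalg.norm(direction)
    return activations + strength * r_hat

rng = np.random.default_rng(0)
acts = rng.standard_normal((5, 16))   # placeholder activations (5 tokens)
feature = rng.standard_normal(16)     # placeholder feature direction

steered = steer(acts, feature)
r_hat = feature / np.linalg.norm(feature)
# The component along the feature grows by exactly `strength`
print(np.allclose(steered @ r_hat - acts @ r_hat, 4.0))  # True
```

Ablation (as in the refusal paper) is the same move in reverse: instead of adding the direction, you subtract each vector's projection onto it.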
https://x.com/gytdau/status/1804266728781943003?t=FtiNo_Wlis4hlkGdsAoxxg&s=19
Anti AI safety arguments: https://x.com/Luck30893653/status/1653912286996643840?t=gKmIpA_KQzQZF0A7-eAxgw&s=19
[DigiRL](https://digirl-agent.github.io/) [[2406.11896] DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning](https://arxiv.org/abs/2406.11896)
I think it's gonna be less about how we control superintelligence and more about who the group controlling the superintelligence is.
https://www.reddit.com/r/math/s/GLPwzOLVtC Same with various LLM benchmarks? We need better testing everywhere!
Artificial life: [Leniabreeder](https://leniabreeder.github.io/) https://x.com/maxencefaldor/status/1803803486179434642?t=6eJjBHLt95EH7n7GBmpcIA&s=19
https://x.com/BertChakovsky/status/1804040248332284107?t=WUOj_TMy0BhTAgNbseI0OQ&s=19
[[2007.00849] Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge](https://arxiv.org/abs/2007.00849)
Does the whole universe have agency?
[Language is primarily a tool for communication rather than thought | Nature](https://www.nature.com/articles/s41586-024-07522-w)
Be infinitely open to all possible possibilities about everything, but ground yourself with the long-term predictive power of models.
[[2406.14546] Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data](https://arxiv.org/abs/2406.14546) https://x.com/OwainEvans_UK/status/1804182787492319437?t=VG-MNt_DJlK81SRSoC40Yw&s=19
Learn everything. Apply everything. Become everything. Practice omnidisciplinary metamathemagics! https://x.com/burny_tech/status/1804536271533908144?t=31hmTbgXKMbTLpqfw7pxeQ&s=19
I don't want artistic human creativity with little machine assistance to decline this rapidly because of automating economic incentives steamrolling it, and I want to support these people instead.
[PsychonautWiki](https://psychonautwiki.org/wiki/Main_Page)
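Leniabreeder builds on Lenia, the continuous cellular automaton, where one update step is just convolve → growth function → integrate and clip. A minimal numpy sketch; the ring kernel and the mu/sigma growth parameters here are illustrative toy choices, not Leniabreeder's actual configuration:

```python
import numpy as np

def lenia_step(A, K_fft, mu=0.15, sigma=0.015, dt=0.1):
    """One toy Lenia update: convolve the world with a kernel (via FFT,
    so the grid wraps around), map the neighborhood sum through a
    bell-shaped growth function in [-1, 1], and integrate with a small
    time step, clipping the state back into [0, 1]."""
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))            # neighborhood sum
    G = 2.0 * np.exp(-((U - mu) ** 2) / (2 * sigma ** 2)) - 1.0  # growth
    return np.clip(A + dt * G, 0.0, 1.0)

# Toy ring-shaped kernel, normalized and precomputed in Fourier space
N = 64
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.sqrt(x ** 2 + y ** 2) / 12.0                 # kernel radius 12 cells
K = np.exp(-((r - 0.5) ** 2) / 0.02) * (r < 1.0)    # ring peaked at r = 0.5
K_fft = np.fft.fft2(np.fft.ifftshift(K / K.sum()))

A = np.random.default_rng(1).random((N, N)) * (r < 1.0)  # random central blob
for _ in range(10):
    A = lenia_step(A, K_fft)
print(A.min() >= 0.0 and A.max() <= 1.0)  # True: state stays in [0, 1]
```

Evolving the kernel and growth parameters instead of hand-picking them is roughly the "breeder" part of the Leniabreeder idea.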