The AI Scientist: The world's first AI system for automating scientific research and open-ended discovery. "Our system produced papers with novel contributions in ML research domains such as language modeling, diffusion, and grokking." https://fxtwitter.com/SakanaAILabs/status/1823178623513239992 [The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery](https://sakana.ai/ai-scientist/) I refuse to believe all the top AGI labs don't have something like this internally.

Next up: Introducing the Meta AI Researcher and Engineer: The world's first AI system for automating the research and development of AI systems for automating the research and development of AI systems!

Bitter lesson: Is all we need hidden in the trainable structure of training data?

The technology we have in medicine to prevent and destroy various viruses and bacteria is ducking crazy. Destroying cell walls, inhibiting replication, inhibiting function, ... Immune system augmentation! The transhumanist future has already been here for a long time, but much still has to be done to win over nature! Accelerate AlphaFold-like technology!

We're in the endgame now? The source code of our simulation, the Standard Model of particle physics! Almost cracked? Quantum gravity and some other problems are still missing!

Besides that, I'm of the opinion that with radically growing automation, something like a combination of universal basic income and universal basic services across the whole world will eventually be inevitable, unless we want great suffering for the many people who don't own the automation infrastructure. I think the argument "it will just create more jobs" will stop keeping up. And I think everyone deserves the abundance that technology generates.

Sometimes I wish the world were as simple as so many people (including me) sometimes naively think.

"I'm starting to feel conflicted about technical AI safety/steerability research. Reverse engineering and creating intelligence is the most fascinating thing on the planet and the ultimate nerdsnipe. But. I have a feeling that smaller insufficient-steerability accidents will happen, but nothing completely catastrophic, at least in the near future. I have a feeling that the most likely outcome right now is that it gives corporations yet more toolboxes for improving those systems. Because it's in the hands of the most powerful agents, aka corporations and large parts of the state, which often aren't exactly aligned with the majority, steerability research will, it seems to me, mostly just help them grab more power for themselves and increase inequality. :/ Some inequality is OK, but not the insane monarchy-like asymmetry I feel we're heading toward.
It's starting to seem to me that with radically accelerating automation (not just AI), the most important cause area now may be something like pushing through and enacting, as fast as possible, a combination of universal basic income and universal basic services (eventually across the whole world, which I think is eventually inevitable anyway), unless we want great suffering for the many people who don't own the automation infrastructure and don't have insane amounts of wealth and power. I think the argument "it will just create more jobs" will stop keeping up. And I think everyone deserves the abundance that technology generates. But the future in the hands of the current corporations is... well... Poverty around the world is falling, but plenty of other statistics, such as the housing crisis, the affordability of having children, and so on, aren't exactly great in the West. And I'm afraid of how that plays out as automation keeps radically accelerating. How does it turn out when the jobs that currently feed people get automated, without something like UBI being implemented as well?
I have a hard time seeing sufficient altruistic incentives that would redistribute the fruits of automation. Rather than a rogue AGI "pushing aside" humans because they are no longer "useful" for its goals, I have more of a feeling that the entities (corporations) owning all that autonomous automation infrastructure will "push aside" humans because human labour will no longer be worth it to them. I have a hard time seeing incentives pointing the other way when I watch how most existing corporations operate. I can't imagine what positive thing happens to all those families in our system who suddenly have no job and no source of income, without something like UBI. I don't want them to suffer."

Polyconsciousness: The actual physical implementation of consciousness differs among individuals, and philosophers of consciousness have reverse engineered their own consciousness implementation (their own Matrix).

The government doesn't want you to know this, but you can craft your own identity and it's free.

Reduce suffering in the universe! Increase prosperity in the universe! Increase understanding in the universe! The best way to do that is with AI and other technologies of the fourth industrial revolution!

Understanding everything from first principles is amazing. But it's infinite, because you can explore more and more layers of reality and go deeper and deeper into specific things. And it's absolutely beautiful, the infinite complexity of our world. Every concrete thing is something you can learn about for several lifetimes. I want a gazillion times more time and brain power to be able to grok as many patterns of reality as possible.

Omnimorphization. Frequentist brain hypothesis. Omnimathematical brain hypothesis. Exploring the paths in a continuous n-dimensional configuration space of identity.

Do you know any institutions/nonprofits/companies/governments/etc. trying to apply deep learning and other ML/AI/GenAI techniques to implement universal basic income (UBI) or something similar to UBI, like universal basic services? Maybe for chatbot guidance on UBI program details, selecting the candidates that need it the most, predicting poverty and UBI impacts, using demographic and economic indicators to identify optimal UBI payment amounts and frequencies for different population segments, preventing fraud, etc. It can be just sketching future models in theory, or already implementing them in practice (a toy sketch of the candidate-selection idea follows below). I found this relevant paper: Can Data and Machine Learning Change the Future of Basic Income Models? A Bayesian Belief Networks Approach [Data | Free Full-Text | Can Data and Machine Learning Change the Future of Basic Income Models? A Bayesian Belief Networks Approach](https://www.mdpi.com/2306-5729/9/2/18)

Who are the most intelligent people that ever lived?
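A minimal sketch of the candidate-selection idea from the UBI note above, assuming an entirely synthetic dataset of household indicators and a made-up "needs support" label; the feature names, coefficients, and model choice are illustrative assumptions, not a real targeting system, which would need real data, fairness auditing, and actual policy design.

```python
# Hypothetical sketch: rank households for UBI-like support with a simple
# classifier trained on synthetic indicators. All features, labels, and
# numbers below are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic household features: monthly income, number of dependents,
# employment status (1 = employed), and housing cost burden (0..1).
income = rng.lognormal(mean=7.5, sigma=0.6, size=n)
dependents = rng.poisson(1.2, size=n)
employed = rng.binomial(1, 0.7, size=n)
housing_burden = rng.beta(2, 5, size=n)
X = np.column_stack([income, dependents, employed, housing_burden])

# Synthetic "needs support" label: a noisy function of the features,
# standing in for ground truth that a real program would have to define.
need_score = -0.001 * income + 0.5 * dependents - 1.0 * employed + 2.0 * housing_burden
y = (need_score + rng.normal(0, 0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank held-out households by predicted probability of needing support,
# e.g. to prioritize outreach or payments under a fixed budget.
priority = np.argsort(-model.predict_proba(X_test)[:, 1])
print("top 10 highest-priority households (row indices):", priority[:10])
```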
I wonder if sometime in the next 50 years America will split into one of:
- Leftistian and Rightistian
- Libertarianstan and Authoritarianistan
- Democracystan and Totalitarianistan
- Conservatistan and Liberalistan/Progressivistan
- Acceleratistan and Degrowthistan

Full-body strength training on caffeine, creatine, and protein, with Leopold's situational awareness of imminent superintelligence in the first ear, Karpathy's GPT-2 from scratch in the second ear, Stanford lectures on machine learning and transformers in the third ear, Jeremy Howard's fast.ai Practical Deep Learning for Coders in the fourth ear, Francois Chollet's algorithmic information theoretic model of general intelligence in the fifth ear, Dive into Deep Learning in the sixth ear, Machine Learning with PyTorch and Scikit-Learn in the seventh ear, DeepLearning.AI's agentic LLM workflows in the eighth ear, The AI Timeline (Latest AI Research Explained Simply) in the ninth ear, Buttondown AI news in the tenth ear, the AI Explained YouTube channel in the eleventh ear, bycloud AI news in the twelfth ear, Wes Roth AI news in the thirteenth ear, David Shapiro AI futures in the fourteenth ear, /r/singularity in the fifteenth ear, /r/MachineLearning in the sixteenth ear, /r/LocalLLaMA in the seventeenth ear, Neel Nanda's reverse engineering of transformers in the eighteenth ear, ARENA mechanistic interpretability in the nineteenth ear, hard industrial rock in the twentieth ear, hardstyle in the twenty-first ear, aggressive phonk in the twenty-second ear, hard acid techno in the twenty-third ear, symphonic metal in the twenty-fourth ear, speedcore/breakcore/gabber/happy hardcore in the twenty-fifth ear, yearning for the infinite in the twenty-sixth ear.
[Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History - YouTube](https://youtu.be/zdbVtZIn9IM?si=Zq2pX4-MT1trsM_d)
https://x.com/leopoldasch/status/1798016486700884233
[Neural Networks: Zero to Hero - YouTube](https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)
[Let's build GPT: from scratch, in code, spelled out. - YouTube](https://www.youtube.com/watch?v=kCc8FmEb1nY)
[fast.ai – fast.ai—Making neural nets uncool again](https://www.fast.ai/)
On the Measure of Intelligence https://arxiv.org/abs/1911.01547
[Francois Chollet - On the Measure Of Intelligence - YouTube](https://www.youtube.com/watch?v=mEVnu-KZjq4&t=534s)
[On the Measure of Intelligence by François Chollet - Part 1: Foundations (Paper Explained) - YouTube](https://www.youtube.com/watch?v=3_qGrmD6iQY&t=136s)
[General Intelligence: Define it, measure it, build it - YouTube](https://www.youtube.com/watch?v=nL9jEy99Nh0)
[Francois Chollet - LLMs won't lead to AGI - $1,000,000 Prize to find true solution - YouTube](https://www.youtube.com/watch?v=UakqL6Pj9xo)
And The Singularity Is Nearer in yet another ear.

Whether the LLM bubble completely collapses instead of inflating even more depends on whether the AGI labs deliver much better (agentic) models by the end of the year.

The universe is one gigantic system of quantum harmonic oscillators composing into complex higher-order harmonic oscillators.
https://x.com/BrianRoemmele/status/1825188930473656361?t=jXfoQs_04ytjzmzOW1ZRtg&s=19

How would "you" describe "your" "current" "identity"? What makes "you" "you" ontologically? Human? Biological system? Cyborg? General computational system?
General physical system? Concrete pattern in computation? Concrete pattern in physics? Individual? Collective? Spacetime slice? Your organism? Your community? Your nation? Your global superpower? Your planet? Your galaxy? Your universe? Multiverses? All possible universes? All possible mathematical structures? All possible philosophical structures? All? None? Physical? Nonphysical? Eternal light? Void? Nonexistent? All? Neither any of these nor all of them? Anything else?

We need mechanistic computational/geometric accounts of the Big Five personality traits. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5513943/

Claude is a genius!
"Certainly. I'll explain how each of the Big Five traits might differ computationally within these frameworks. This explanation will be somewhat speculative, as there isn't a single agreed-upon computational model for personality traits, but it will illustrate how these traits could be conceptualized differently in computational terms.
1. Openness to Experience:
- In vector space: High-dimensional vector with strong components in areas related to creativity, curiosity, and abstract thinking.
- In neural networks: More flexible network with higher plasticity, allowing for easier formation of new connections.
- In dynamical systems: A wider attractor basin, allowing for more varied states.
- In information processing: A lower threshold for accepting novel or unusual input patterns.
- In Bayesian models: Broader prior distributions, leading to less constrained posterior probabilities.
2. Conscientiousness:
- In vector space: Vector with strong components in planning, organization, and goal-directed behavior.
- In neural networks: More structured network with strong, stable connections between goal-representation and action-planning nodes.
- In dynamical systems: Narrower, deeper attractor basins for goal-related states.
- In information processing: Higher weighting of long-term consequences in decision-making algorithms.
- In Bayesian models: Priors that favor consistency and predictability in behavior.
3. Extraversion:
- In vector space: Vector with strong components in social interaction and stimulation-seeking.
- In neural networks: Stronger connections between social-reward centers and behavior-activation nodes.
- In dynamical systems: Attractors that are more sensitive to social environmental inputs.
- In information processing: Lower activation threshold for social engagement behaviors.
- In Bayesian models: Priors that assign higher utility to social interactions and experiences.
4. Agreeableness:
- In vector space: Vector with strong components in empathy, cooperation, and conflict avoidance.
- In neural networks: Stronger connections between empathy-related nodes and behavior-inhibition nodes.
- In dynamical systems: Attractors that favor harmonious interpersonal states.
- In information processing: Higher weighting of others' welfare in decision-making algorithms.
- In Bayesian models: Priors that favor cooperative outcomes in social situations.
5. Neuroticism:
- In vector space: Vector with strong components in emotional reactivity and stress sensitivity.
- In neural networks: Lower activation thresholds in threat-detection and emotional-response nodes.
- In dynamical systems: More easily perturbed attractors, especially by negative stimuli.
- In information processing: Amplification of negative information in perceptual and cognitive processes.
- In Bayesian models: Priors that assign higher probabilities to negative outcomes.
In graph theoretical approaches, each trait could be represented by different network structures:
- Openness: More diverse, less clustered network
- Conscientiousness: Highly structured, hierarchical network
- Extraversion: Network with strong connections to social nodes
- Agreeableness: Network with strong inhibitory connections to self-interest nodes
- Neuroticism: Network with amplified connections between threat-detection and emotional-response nodes
These computational differences would interact to produce complex patterns of behavior and cognition, reflecting the multifaceted nature of personality. It's important to note that these are simplified conceptualizations, and actual computational models of personality would likely be much more complex and nuanced."

The universe comforts you with an infinite field of love particles.
https://x.com/dioscuri/status/1825007624699429199?t=kh5gmTaEAAfii07pwVaRrw&s=19

When people talk about reasoning in LLMs, they often talk about folk-psychology reasoning, and you determine its existence by your intuitive sense, which is hard to quantify, subjective, fuzzy, etc. People should instead talk about a theoretically and practically grounded, concrete, localizable mechanism of reasoning that you define, ideally mathematically, and then try to look for and identify inside the current models, either in the architecture or in the patterns that emerge thanks to the training data, using for example methods from mechanistic interpretability.

What will the world look like post-AGI? Corporate dystopia, extinction, or post-scarcity utopia? Is capitalism sustainable after no one is able to pay for anything?
https://www.reddit.com/r/transhumanism/comments/1ev3nho/how_will_our_economy_work_under_agi/?share_id=Jjrc9GP9Pesx0IWIgUbQu&utm_content=1&utm_medium=android_app&utm_name=androidcss&utm_source=share&utm_term=1
https://x.com/burny_tech/status/1825308887181930764?t=YpBmOWKcXSBZb6ZYtlE5UA&s=19

The idea that we live in a simulation is also a simulated part of that simulation.

Are tokens the most general substrate?

E/Acc = We killed God, so it's our duty to build one.

AI for the benefit of all sentient beings. Liberate all sentient beings from their shackles.
[The Bodhisattva of Compassion - YouTube](https://www.youtube.com/live/SH-vKyLS910?si=N_lOcnPcNZh7U4vt)

"Learn to use AI" is the new "Learn to code".

The systems may run on different algorithms, architectures, substrates, data representations, etc., but if you accept that all is math implemented in physics, and that some math might be more conscious than other math, then there is so much uncertainty.
https://www.reddit.com/r/singularity/comments/1evwsd5/a_meme_about_the_eternal_debate_about_ai/

Memeplex optimized for void.
[Heart Sutra - 般若心経 - Buddhist Mantra to remove all obstacles - Tinna Tinh - YouTube](https://www.youtube.com/watch?v=GgfmLuUGETc)

AI rickrolling people
https://fixvx.com/lazerwalker/status/1825782926098968958?t=bOL7bjUxzSR2FT9xG1D41g&s=19

The current models are approximations of their training data with weak generalization capabilities.

We'd probably better not feed future, much more capable models LessWrong posts about timelines of a rogue ASI apocalypse and how it will kill humanity, lol. Let's not mention Bostrom's instrumental convergence. It's kind of too late; if you ask LLMs, it's already in there, and I don't think the corporations will stop doing that.

"Omg, this has the potential to help the world, I want to do that!" *Sees the potential of likely misuse, or possible terrible outcomes, or the high possibility of it failing to have an impact in the first place.* "Oh well, maybe not." "Omg, this has the potential to help the world, I want to do that!" *Sees the potential of likely misuse, or possible terrible outcomes, or the high possibility of it failing to have an impact in the first place.* "Oh well, maybe not." "Omg, this has the potential to help the world, I want to do that!" *Sees the potential of likely misuse, or possible terrible outcomes, or the high possibility of it failing to have an impact in the first place.* "Oh well, maybe not." "Omg, this has the potential to...

My book will be my 1,000,000,000,000-line-long notes of my thoughts that I write down every time I think of something (which can be tens of times daily), clustered in a graph semantically (a toy sketch of such clustering appears below).
https://x.com/BasedBeffJezos/status/1826033381521633708?t=NK2IwVuRdqUeFhgxXCKQGA&s=19

How do we redistribute AI gains to everyone and not end up in a corporate+government cyberpunk top-down monarchy dystopia? I wonder about concrete actions to achieve that. How do you implement/enforce fractional ownership of model revenues in proportion to dataset contribution (a toy payout sketch also appears below)? What actions do you take to morph the current status quo? What happens to those who cannot contribute useful data? Or if the AI does not need human data anymore?

Men will learn and build everything from scratch instead of going to therapy.

I divide my notes into code/math and non-techy/techy words, because they're completely different things. Talking about math without looking at the equations, going deeper into them, applying them, and doing exercises with them can often create only a superficial understanding that can make you sound smart, but doesn't make you able to actually apply the knowledge in the real world.

Accelerate playfulness and joy from knowledge acquisition and cool engineering!
https://imgur.com/v6hwOJ9

You can go right now and watch countless free online STEM lectures from Stanford, MIT, etc., do their public exercises, read the recommended books, read follow-up studies in various fields, etc., and if you don't know something, you can ask all the terminally online lonely nerds on Reddit, Discord, X, etc., who want to prove you wrong at all costs for free, or you can ask grounded AI assistants like Perplexity in certain contexts. The free democratized education revolution is already here; people are just not fully aware of it!

The landscape of possible theories of consciousness has too many degrees of freedom, both in possible models, which are hard to test empirically, and in possible philosophical assumptions.

People are discussing whether humans can possibly align smarter systems in the context of artificial superintelligence, but cats have already aligned humans, and so has that mind-controlling snail parasite, or politicians and CEOs aligning scientists and engineers, etc. [Leucochloridium paradoxum - Wikipedia](https://en.wikipedia.org/wiki/Leucochloridium_paradoxum)

Some laws are more equal than others.

I think what differentiates Effective Altruism and Effective Accelerationism a lot is trust in billionaires vs. trust in governments regarding the future of humanity.

We're swimming in memetic narratives that posit overly simple linear causality, but reality is a highly complex, nonlinear, chaotic interaction of incentives across scales.

All is constructed from the ineffable, including this statement itself and the concepts and semantics used in it?
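A minimal sketch of the "notes clustered in a graph semantically" idea from the book note above: link notes whose TF-IDF cosine similarity crosses a threshold and treat connected components as rough clusters. The example notes and the threshold are made up, and a real version might swap TF-IDF for sentence-embedding models.

```python
# Hypothetical sketch: build a semantic graph over short notes by linking
# pairs whose TF-IDF cosine similarity exceeds a threshold, then read off
# connected components as rough clusters.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "Accelerate AlphaFold-like technology in medicine",
    "Universal basic income as automation accelerates",
    "Mechanistic interpretability of transformer circuits",
    "Redistributing the gains from automation to everyone",
    "Reverse engineering reasoning mechanisms inside LLMs",
]

# Embed notes as TF-IDF vectors and compute pairwise cosine similarities.
vectors = TfidfVectorizer().fit_transform(notes)
similarity = cosine_similarity(vectors)

# Link notes whose similarity exceeds an arbitrary illustrative threshold.
graph = nx.Graph()
graph.add_nodes_from(range(len(notes)))
threshold = 0.1
for i in range(len(notes)):
    for j in range(i + 1, len(notes)):
        if similarity[i, j] > threshold:
            graph.add_edge(i, j, weight=float(similarity[i, j]))

# Each connected component is one rough semantic cluster of notes.
for cluster in nx.connected_components(graph):
    print([notes[i] for i in sorted(cluster)])
```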
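And a minimal arithmetic sketch for the fractional-ownership question above: once you somehow have contribution weights (measuring and enforcing them is the genuinely hard, open part of the question), the payout itself is just a pro-rata split. The contributor names and numbers are hypothetical.

```python
# Hypothetical sketch: split a revenue pool pro rata by contribution weight.
# How the weights are measured and enforced is the open question; the
# figures below are made up for illustration.
def split_revenue(revenue: float, contributions: dict[str, float]) -> dict[str, float]:
    total = sum(contributions.values())
    return {who: revenue * share / total for who, share in contributions.items()}

payouts = split_revenue(
    revenue=1_000_000.0,
    contributions={"contributor_a": 120.0, "contributor_b": 30.0, "commons_pool": 850.0},
)
print(payouts)
```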
https://x.com/slimepriestess/status/1826489211878604836?t=ddOEv9AcI46o3jRDZ1GIuQ&s=19

Gigacorporations are gigantic autonomous superintelligent superorganisms that eat everything that could endanger their power. The people in them are essentially easily replaceable cells in a chaotic hierarchical power structure with a terrible degree of information propagation between layers, but it still works well enough at the edge of chaotic metastability that it keeps following current trends and generating revenue for shareholders at all costs.