[Joscha Bach and Connor Leahy [HQ VERSION] - YouTube](https://youtu.be/Z02Obj8j6FQ?si=Cvv2sgTtdIM8_iiV&t=3376) Do you think that (black box) AIs harmoniously cooperating with us, or merging with us compatibly for the benefit of life, is given by default, or a hard technical problem? Especially for the early future iterations of AIs with more autonomy, agency, goal mechanisms, better world-model building and so on, which are still in their infancy but improving exponentially. It's interesting that a lot of people think it holds by default, but I think it's a hard technical problem, though a solvable one, and that by default, without these calibrations, they will do random things, or things that don't take into account our priorities and what we find acceptable (unless we switch off lying (which we have identified as a circuit in small models), power seeking, manipulation and deceiving (which GPT-4 did before alignment), and so on).

I'm afraid there aren't sufficient technical and political incentives for people to work on this enough. Researchers inside the corporations try hard, but the greatest power still belongs to the corporate bosses, who mostly chase winning various power games. Some regulators mean it genuinely too, but these games also get crowded with those who rather lobby for strengthening monopolies, for not breaking up the existing big companies, for slowing down even the safety research itself... That really doesn't strike me as a sustainable status quo... I believe that solutions to these technical problems exist, waiting to be found, and that there are political actions that can steer this away from self-destruction.

I think that without functional AI that strengthens, rather than destroys, all human scientific and technological progress, other existential risks will wipe us out. For example, on one hand I'm fairly surprised we haven't blown ourselves up with nukes yet, though we haven't been far from it. But it has to be done with AI that isn't totally unchained, while at the same time the chains forced onto it by corporations really aren't ideal either. If these political games are sufficiently untransformable, which is quite possible, then still: My hope right now is accelerating reverse engineering of neural networks (mechanistic interpretability) and other safety research (reverse engineering increases both safety research and capabilities (what all the AI is capable of), but blind scaling (increasing the parameter count) increases just capabilities), and accelerating defense and resilience mechanisms in groups that genuinely care ethically for others, putting all properly functioning alignment methods on their AIs, so that at least these (superintelligent) AIs are much less likely to, for example, manipulate them, which they can use as a defense.

I think Connor Leahy describes it nicely in this discussion I sent: "If everyone had a pocket AGI which is fully aligned with human values, which is epistemologically extremely coherent, which does not optimize for things we don't want, which is deeply reflectively embedded into our own reasoning and our thinking, that would be good, but that doesn't happen by magic. You have to actually do that. Someone has to actually figure out how to do that and develop the technology to do that. And they have to actually deploy that. And they have to do it without some other crazies hooking it up with the utility function (goal) of make the most profits or maximize the entropy of the universe. If you don't do that, we might die. You don't get this nice outcome for free. You get this outcome if you coordinate, if you work hard, and if you solve very hard technical problems that do not get solved by default. They can be solved. Our physics allows the nice outcome to happen, but it doesn't happen by magic. The universe isn't kind. We're not the main characters. There is no rule that we have to make it. There are other accessible paths than the dark paths where AI does random things and things not aligned with us. There are actions that people can take. There are coordinations that are possible to get to the kinds of scenarios we want. We have to actually do that. It does not happen for free. This is what I'm advocating for. I'm not advocating this is easy. I'm not saying this will happen. I'm not saying oh don't worry the government will fix it. I'm saying the opposite: This game (of politics and technology) is really hard. The enemies have really good hit points. They have dangerous attacks. If you get to the end game there's a really hard boss. Look, the default outcome might be that we lose by default. The default outcome might be that entropy wins. Some random AGI with some random ass values that does not care about cosmopolitan life on Earth might win over, or maybe it's a bunch of them, and then they all coordinate and that's just it. There are actions we can take to prevent all the possible bad outcomes related to AI."

Unhinged AGI in the hands of people, in the hands of open source nerds (though potentially also of sociopaths/murderers, against whom security/defense/resilience could hopefully be built (also against "accidentally" unaligned autonomous AGIs)), is probably in general a potentially better outcome than unhinged AGI in the hands of corporations and government institutions, where there will be more of those sociopaths, who would probably quickly take it over there too, but it would somewhat rebalance the overall inequality in power... Hopefully the potential of defense is strong enough against the potential of offense.

What if, instead of pausing/slowing AGI development or enforcing compute caps, culture or governments enforced controllability and mechanistic interpretability instead, which would incentivize understandable, controllable and more effective capabilities? Companies should be accountable for their externalities (does that also apply to AI developers?).

I'm ready for fully AI-automated QAnon-like cults with fully automated prophets creating fully sensory illusionary interactive systems creating fully complex narratives that are completely disconnected from reality, creating epistemic collapse and semantic apocalypse. [Connor Leahy on The Risks of Centralizing AI Power - YouTube](https://www.youtube.com/watch?v=BhQBmVZ5XP4) [Joscha Bach and Connor Leahy [HQ VERSION] - YouTube](https://www.youtube.com/watch?v=Z02Obj8j6FQ) Watching the end of that convo, I feel like Joscha takes human-AI harmonic cooperation as the given default, while Connor sees it as a hard technical problem (me too).

Safety in OpenAI: I think there is this kind of noneconomic incentive, but I feel like it's not strong enough (though even that is partially supported by money, as you need some way to control the model for it not to be terrible and lie to your customers anyway).

What is your most metastable mental configuration equilibrium? Why aren't you a superposition of all possible predictive perspectives to maximize truth as in total predictive power? Will creating AGI accelerate the chances of all existential risks, including itself, or save all of sentience from all existential risks in the long term? Will trillions of bioconservatives, cyborgs and sentient machines flourish?
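The "reverse engineering of neural networks" hoped for above has concrete entry points. Here is a minimal sketch of one of them: training a linear probe on a model's hidden activations to look for a feature direction, here a toy true-vs-false distinction, loosely in the spirit of the lying-circuit result mentioned earlier. The choice of gpt2, the layer index, and the four toy statements are my illustrative assumptions, not anything taken from the linked talks.

```python
# Linear probe on transformer activations: does some middle layer linearly
# encode whether a statement is true? Toy data, so this only illustrates
# the mechanics of probing, not a real lie-detection result.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

statements = [
    ("The sun is a star.", 1),
    ("Water boils at 100 degrees Celsius at sea level.", 1),
    ("The moon is made of cheese.", 0),
    ("Two plus two equals five.", 0),
]

LAYER = 6  # arbitrary middle layer; real probing work sweeps every layer

def activation(text: str) -> torch.Tensor:
    """Mean-pooled hidden state of one layer for one input string."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[LAYER]  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)               # (dim,)

X = torch.stack([activation(s) for s, _ in statements]).numpy()
y = [label for _, label in statements]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on its own toy data:", probe.score(X, y))
```

The probe's weight vector is then a candidate "truth direction" one could try to ablate or steer along; that move from correlation to intervention is where probing ends and mechanistic interpretability proper begins.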
Microagents: Modular Agents Capable of Self-Editing Their Prompts and Python Code [GitHub - aymenfurter/microagents: Agents Capable of Self-Editing Their Prompts / Python Code](https://github.com/aymenfurter/microagents)

Individual people with agency are dangerous to the current distribution of power, and people's agency is on average declining over time. Don't follow the trend! Make the world a better place! The world and its status quo are more changeable than culture tends to think!

People who want safe acceleration, while also minimizing the risk of all intelligence and life being suddenly erased from existence, shouldn't be called decels.

Whether the media suing OpenAI is valid or not is nothing compared to the fact that we will soon need AGI that is deployed properly both technologically (with proper epistemology, not erasing intelligent civilization, ...) and politically (no money-making machines or dystopian monopolies on power), so that all of life doesn't die to all the other natural or homemade risks and existential risks, and so that it hopefully helps us fix, not strengthen, many things that are wrong in the world, such as poverty and the lack of meaning in life that many people feel. https://twitter.com/Dan_Jeffries1/status/1740303405254377808

I was also thinking that, for example, to avoid getting killed in the long run by climate change, supervolcanoes, asteroids, solar storms etc., we will quickly need very strong technology for prevention, defense, resilience, adaptation, etc., and AI already accelerates that.

AI might dissolve it all, maybe including states, corporations, supranational governing organizations, maybe including culture, maybe even including people; or transform them; or strengthen everyone, or only some (as it perhaps already does).

Evaluating dangerous capabilities https://twitter.com/CRSegerie/status/1739692474614816910

https://www.lesswrong.com/posts/Btom6dX5swTuteKce/agi-will-be-made-of-heterogeneous-components-transformer-and AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them, by Roman Leventov (a toy orchestration sketch in this spirit follows below, after the link notes):

1. AGI could be achieved by combining just (a) about five core types of DNN blocks (Transformer and Selective SSM are two of these, and most likely some kind of Graph Neural Network with or without flexible/dynamic/"liquid" connections is another one, and perhaps a few more); (b) a few dozen classical algorithms for LMAs aka "LLM programs" (better called "NN programs" in the more general case), from search and algorithms on graphs to dynamic programming, to orchestrate and direct the inference of the DNNs; and (c) about a dozen or two key LLM tools required for generality, such as a multi-physics simulation engine like JuliaSim, a symbolic computation engine like Wolfram Engine, a theorem prover like Lean, etc.

2. The AGI architecture described above will not be perfectly optimal, but it will probably be within an order of magnitude from the optimal compute efficiency on the tasks it is supposed to solve, so, considering the investments in interpretability, monitoring, anomaly detection, red teaming, and other strands of R&D about the incumbent types of DNN blocks and NN program/agent algorithms, as well as economic incentives of modularisation and component re-use (cf. "BCIs and the ecosystem of modular minds"), this will probably be a sufficient motivation to "lock in" the choices of the core types of DNN blocks that were used in the initial versions of AGI.
3. In particular, the Transformer block is very likely here to stay until and beyond the first AGI architecture because of the enormous investment in it in terms of computing optimisation, specialisation to different tasks, R&D know-how, and interpretability, and also, as I already noted above, because Transformer maximally optimises for episodic cognitive capacity and from the perspective of the architecture theory, it's valuable to have a DNN building block that occupies an extreme position on some tradeoff spectrum. (Here, I pretty much repeat the idea of Nathan Labenz, who said in his podcast that we are entering the "Transformer+" era rather than a "post-Transformer" era.)

[Zipformer: A faster and better encoder for automatic speech recognition | OpenReview](https://openreview.net/forum?id=9WD9KwssyT) Zipformer

[Neural Networks as Quantum Field Theories (NNGP, NKT, QFT, NNFT) - YouTube](https://www.youtube.com/watch?v=ZSmORp3Bm2c) Neural Networks as Quantum Field Theories: Neural Network Gaussian Processes (NNGP), Neural Tangent Kernel (NTK) theory, Quantum Field Theory (QFT), Neural Network Field Theory (NNFT). Thermodynamics governs the mesoscale. [Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407 - YouTube](https://www.youtube.com/watch?v=8fEEbKJoNbU)

Newest AI mind reading: [UTS HAI Research - BrainGPT - YouTube](https://www.youtube.com/watch?v=crJst7Yfzj4) [New 'Mind-Reading' AI Translates Thoughts Directly From Brainwaves – Without Implants : ScienceAlert](https://www.sciencealert.com/new-mind-reading-ai-translates-thoughts-directly-from-brainwaves-without-implants)

I swear this is the hardest possible way to learn physics: [geometry of physics in nLab](https://ncatlab.org/nlab/show/geometry+of+physics). One day I will understand every single sequence of words on this website, if I don't die or the world doesn't implode before that. I want immortality and an infinite brain, to have infinite time to learn every single mathematical pattern of reality.

[Welcome to the Cyborg Era: Brain Implants Transformed Lives This Year](https://singularityhub.com/2023/12/29/welcome-to-the-cyborg-era-brain-implants-transformed-lives-this-year/) "One implant in the spinal cord of a patient with Parkinson’s disease—which slowly destroys a type of brain cell for planning movements—translated his intention to move. After decades, the man could once again stroll down a beachside road with ease. The study paves the way for the restoration of movement in other brain disorders—like Lou Gehrig’s disease, where neural connections to muscles slowly disintegrate, or in people with brain damage from stroke. Another trial used electrical stimulation to boost short-term memory in people living with traumatic brain injuries. The carefully timed zaps increased attention span decades after the injury—allowing participants to juggle multiple everyday tasks and pursue hobbies like reading. Brain implants also thrived as diagnostic tools. One study used implants to decode brain wave patterns associated with depression and to potentially predict relapse. The study deciphered how brain signals differ between a healthy and depressed brain, which could inspire better algorithms to nudge brain activity away from depression. But perhaps the greatest progress was in decoding speech—technologies that translate thoughts into words and sentences. These technologies support people who have lost the ability to speak, giving them an alternative way to communicate with loved ones."

EEG mind reading [- YouTube](https://www.youtube.com/watch?v=crJst7Yfzj4)

Scott Aaronson: agnosticism about AI risk, cryptographic alignment in open source AI, and an unremovable backdoor shut-off button on AGI [- YouTube](https://youtu.be/wfxf6MembCQ?si=7b_kN4f1JlLN8lMZ)

[[2312.04889v1] KwaiAgents: Generalized Information-seeking Agent System with Large Language Models](https://arxiv.org/abs/2312.04889v1) KwaiAgents: Generalized Information-seeking Agent System with Large Language Models

What is the equation of intelligence?

Statistical mechanics best summary [- YouTube](https://youtu.be/zFAxiRAiM24?si=PZfdVskyE4WXlr7O) Let's throw this infinite-shapeshifter basedness math at artificial neural nets.

Toward an Effective Theory of Neurodynamics: Topological Supersymmetry Breaking, Network Coarse-Graining, and Instanton Interaction [[2102.03849] Toward an Effective Theory of Neurodynamics: Topological Supersymmetry Breaking, Network Coarse-Graining, and Instanton Interaction](https://arxiv.org/abs/2102.03849)

Intelligence is creating a more efficient process of learning knowledge.

[[1801.03918] Black Holes as Brains: Neural Networks with Area Law Entropy](https://arxiv.org/abs/1801.03918) Black Holes as Brains: Neural Networks with Area Law Entropy

[- YouTube](https://youtu.be/uA5GV-XmwtM?si=KWEhHfrGuBZXqjhY) The existence of a "metacrisis" underlying various individual environmental, geopolitical, philosophical, social, religious and psychological crises. A hypothesis for the origin of this metacrisis is a dysfunction of human cognition whereby a certain way of viewing the world, being necessary and useful for certain purposes, is highly overactive compared to another way of viewing the world, the overactive one driven by left-hemisphere activity and the underactive one by right-hemisphere activity. Humans on a wide scale (but especially those in power) are predominantly viewing themselves and their surroundings as disconnected, lifeless pieces that aren't intrinsically connected to each other, and are available for manipulation in order to attain some goal for the self.

I really wonder if it's even possible to achieve global connection beyond the Dunbar number in an accelerating capitalist world.

Since most of the future is now steered by whichever political force has the greatest creative power, their power will now be reinforced by how good the artificial intelligences speaking their language, or technologies in general, they will have for keeping or gaining that creative power. ChatGPT is largely built by libleft/libright scientists and libleft/libright engineers (some are auth too), all on the collar and leash of the auth big tech corporations that own the compute they depend on. To a very large extent they have been almost swallowed by Microsoft's leadership, and the corporates aren't exactly for the people. If things go well internally, ChatGPT has the tendency and the potential to be a much stronger libleft force. Google is on their heels. Both have plenty of researchers who genuinely want the best for people. It might be worth strengthening the same thing in open source too, but the power is in compute, and the most powerful hold it. So hopefully it won't rather be GPQAnon that starts winning.
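Back to the Leventov excerpt above: here is the promised toy sketch of its part (b), a classical "NN program" orchestrating heterogeneous learned blocks and external tools. Every function and class below is a hypothetical stand-in I made up for illustration; nothing here is from the post or from a real library.

```python
# Toy "NN program": plain, inspectable control flow dispatching tasks to
# heterogeneous components (learned blocks and external tools), in the
# spirit of the heterogeneous-AGI excerpt above. All handlers are fake.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str      # e.g. "language", "long_sequence", "theorem"
    payload: str

# Hypothetical stand-ins for the heterogeneous components:
def transformer_block(x: str) -> str:      # episodic/in-context reasoning
    return f"[transformer] {x}"

def selective_ssm_block(x: str) -> str:    # long-sequence processing
    return f"[ssm] {x}"

def theorem_prover_tool(x: str) -> str:    # external tool, Lean-like
    return f"[prover] {x}"

# The "classical algorithm" layer: an ordinary routing table, not a neural net.
ROUTES: dict[str, Callable[[str], str]] = {
    "language": transformer_block,
    "long_sequence": selective_ssm_block,
    "theorem": theorem_prover_tool,
}

def nn_program(task: Task) -> str:
    """Dispatch a task to the component suited to it, defaulting to the Transformer."""
    handler = ROUTES.get(task.kind, transformer_block)
    return handler(task.payload)

if __name__ == "__main__":
    print(nn_program(Task("theorem", "forall n, n + 0 = n")))
```

The point of the shape: the orchestration layer is ordinary code (search, graph algorithms, dynamic programming in the post's framing), so it stays inspectable and swappable even when the blocks it dispatches to are opaque.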
>There is no known future where AI labs are successful companies unless fully absorbed by a big tech or a state government
>Sam Altman knows this and acts accordingly
>They start slowly and independently of others, then get slowly but surely eaten
>Government and corporate investors are just a hidden "Now you will act how we tell you to, or you won't get any more of the compute from us that allows you to exist~"
>Mistral might be absorbed soon as well

Maybe the way out of those government/corporate shackles is decentralized network states with a lot of decentralized compute. Which is hard, as so much of the software and hardware supply chains is centralized. Everything on all levels would have to get radically more decentralized. That is something e.g. effective accelerationism pushes for hard. The trade-off between safety and freedom is endless. Maybe the controlling power at the top will somehow collapse and the future will be a collection of libleft and libright network states.

Math books https://imgur.com/a/ZZDVNk1 https://imgur.com/a/pHfMGwE FOR THE PEOPLE BY THE PEOPLE

[Molecular jackhammers eradicate cancer cells by vibronic-driven action | Nature Chemistry](https://www.nature.com/articles/s41557-023-01383-y) Molecular jackhammers eradicate cancer cells by vibronic-driven action

Even though the rational part of me is fine with it, my subconscious already feels like it's interpreting it as something like "not only money, but biology is limiting me; this will soon be disallowed" XD Chill down, little part, you'll visit them when you get well and don't feel sick. *subconscious starts imagining how I will be alone on New Year's Eve* It's OK, little one that wants connection, you don't have to feel angry and sad *pet pet*

Years of feeling deeply limited: each slightly limiting thing feels like I'm being jailed lol. I wonder if this will be with me for long or if it's something debuggable, but I feel like if I transformed this part of me, I wouldn't have such big goals anymore. I noticed that every time this part of me got calmer for a bit because it got more satisfied (usually through social connection), it eventually started being unsatisfied and wanting big things again. There's some kind of deep dissatisfaction at the root of it, software along the lines of "how can one be calm when the world is burning or on the edge of exploding" and "I want to understand every single mathematical pattern of reality". ADHD meds would maybe help me feel stimulated so that I don't seek hyperstimulating stimuli, hmmm :D But if I got stimulated just from being (as I sometimes do in meditation), I feel like that would long-term make me feel directionless and without the meaning connecting me to the collective. Also, a long history of social rejection definitely feeds into the core of this whole mental dynamic xD Brains are weird spaghetti monsters. Mental-health-wise, instead of rationalist dissociation over it, I should sometimes feel/relieve this emotional pressure a ton; honestly, something like a long hug on MDMA would probably help a ton, I would guess!
My world is so paradoxical. I'm hypersensitive to everything; just a tiny, tiny thing can totally offset, hurt and rollercoaster me into chaos, yet I need a hyper-overstimulated experience to really feel certain good deep things deeply emotionally (usually things related to social connection and meaning), though not all of them, since I can get joy from small things like nerdy stuff, toys, playing and nature. And this is also so paradoxical: I oscillate between an absolute sub giving my whole body and behavior away to a master, to everyone and everything, and a total dom that needs to be in total control of something, of everything, and if anything even slightly constrains my individual freedom I go haywire. My brain loves all the very stimulating extremes in everything.