ACCELERATE AI FOR REPAIRING, MAINTAINING AND UPGRADING HEALTH, LONGEVITY, WELLBEING AND INTELLIGENCE! ACCELERATE AI FOR REVERSE ENGINEERING THE UNIVERSE AND BIOLOGY!

Lately I feel like promoting AI for healthcare much more in Effective Altruism. By the effective altruist ITN framework, I would argue it's very tractable, neglected relative to its potential, and very important (not just healing but also "upgrading" our biological substrate). You can scale it to healthcare centers all over the world, and you see direct benefit for people very quickly relative to other cause areas. [ITN framework - EA Forum](https://forum.effectivealtruism.org/topics/itn-framework)

Are they machine children?

The potential of AI in healthcare is enormous! We've only scratched the surface so far!

The scaling laws of the increasing frequency, over the years, of discussions about scaling laws hitting a wall and about what real reasoning and intelligence are

Ilya is right. This will become relevant soon. Technology will become much more powerful than humans in many more aspects than it is now. The industrial revolution made human strength irrelevant, and the artificial intelligence revolution will make human intelligence irrelevant in a similar way. Most people lack imagination about future technology. Back when the industrial revolution started, imagine people arguing that human strength would never become irrelevant, because the newest manufacturing machine still hadn't surpassed the strongest humans. Do you see the shortsightedness there? https://x.com/kimmonismus/status/1857118297655017888?t=0Ztw6YQ54ifCnHRlhvDUEw&s=19

I see parallels between AI safety people and Rust programmers

I was at a police station today and literally nobody there knew about ChatGPT. I live in such a bubble. It was a town of 10k in the middle of nowhere in Czechia though. 
Sam Altman getting more involved in government is the only way for him not to get squashed by Elon Musk becoming the US government https://x.com/burny_tech/status/1858645871140696288?t=z8HHNZppeFoiCddcaURXmA&s=19 https://x.com/BasedBeffJezos/status/1858575619975475582?t=62xBZpLNa1kgxkxwkCaN4g&s=19 https://x.com/sporadicalia/status/1858573966660825398?t=eziPIyuwqWLHolCsfjQuew&s=19 https://x.com/BasedBeffJezos/status/1854256336801816920?t=udsbVfOmHUpUpVjVxOW3Gw&s=19

Universal basic income (UBI) is needed [Reddit - The heart of the internet](https://www.reddit.com/r/ChatGPT/s/j1Tfqhld5w)

Deepest learning

Future retrocausal omnimodal superintelligent AI models, give me your theories of everything, in both human-known modalities and beyond-human-comprehension modalities, in all of them, all at once

Future retrocausal omnimodal superintelligent AI Gods, give me your theories of everything, in both comprehensible and beyond-human-comprehension forms, all at once 🙏

How are you doing, fellow universal function approximators

When you took DMT and the interdimensional entity starts shapeshifting in front of you [Reddit - The heart of the internet](https://www.reddit.com/r/Decart/comments/1h6bv1a/interdimensional_spider/) https://x.com/burny_tech/status/1864579264894435820?t=SW0yongWG0NhH4Z_CJe-fA&s=19

"I personally keep changing my model of AI existential risk, but since power-concentration issues are currently high on my list, I feel like democratizing AI technology will yield a better future than just leaving it in the hands of the currently most powerful people on the planet. Even with all the other risks associated, it feels like a better trade-off in terms of risk trade-offs.

I trust less and less the current big AI companies, which will probably be the ones to develop the next generations of powerful AI tech. Anthropic's and OpenAI's merging with the military a few days ago killed my trust even more. I don't want this technology to kill people, but to create an abundant future 
for all, to solve science, health, and so on.

I don't want any wars on this planet, I want all armies to stop existing, but I guess I'm naive; that will never happen. It's quite a moral dilemma whether to support the West's armies so that Russia and China don't eat the West, as someone literally living next to Ukraine in an EU country still heavily traumatized by the pretty recent Soviet dictatorship, although paradoxically supporting these wars probably increases the overall amount of wars happening. Same with autonomous drones and robodogs etc. in Ukraine: I want them there so the Russians don't eat us, but I don't want this to happen, because I don't want the technologies to be used to kill people. It makes me laugh, cry, and get angry all at the same time to see most of the biggest AI companies mysteriously removing the "we will never use our AI in the military" clause from their "rules" over the last year and suddenly going all in on AI for military use.

I would rather see any kind of extremely powerful technology in the hands of the people than more and more concentrated in the hands of efficient power-grabbers, who are often the exact opposite of the people interested in AI for science and healthcare, in using technology for good. The currently most powerful technology on the planet is technically in the hands of mostly corporations, militaries, governments, super-rich tech investors,... and the inequality in who has access to what could get even worse... The best AI systems are getting more expensive, with less access to them for everyone. But I understand the arguments on the other side, as random psychopaths and antinatalists who hate humanity wielding nuclear-scale weapons could be dangerous; I'm just not sure to what degree I trust the current tech and world leaders more. Sometimes I'm surprised we haven't nuked each other yet. But I still believe the world can be a better place and most of these issues can somehow be solved, if enough good people try. 
I sometimes wanna shapeshift my belief system into pure naive optimism or a "nothing ever happens" mindset forever, depression solved! "

AI art hate: Sites limiting AI art:

Perspective on the pros and cons of sites splitting art and AI art in general (I wanna support both human artists and novel AI creativity beyond human comprehension a lot at the same time), and IMO the causes of all this specific and broader resistance:

IMO, splitting is bad from:
- my personal site-usage perspective (loss of a centralized browsing interface for all types of images)
- the perspective of the future of much more potentially novel, different AI art getting less traction; I wanna see machine creativity beyond human comprehension
- increasing polarization between humans and AI
- certain artists' perspectives
- the moderator perspective (making some stuff harder)

good from:
- the moderator perspective (making some stuff easier)
- certain artists' perspectives (humans getting more traction, supporting their livelihoods, etc.)
- usage of the site by others with different preferences, etc.

It feels to me that the recent resistance wave is largely a consequence of:
- some people using it for spamming and total memorization instead of novelty; I wish more people would strive for creations as uniquely novel as possible
- corporations and some people maximizing money instead of creativity
- competition problems
- the philosophy of art, where a lot of people see the human process that leads to the result as the most important part of art, whereas I personally find the similar-but-different machine process similarly fascinating, meaningful, sacred for other reasons... 
Which I notice a lot of people just have different feelings about, but for me it's largely a result of how I like to explore daily how various AI models (for all sorts of things) work internally in terms of theory, practice, engineering, and math, and I find it extremely fascinating :D
- broader feelings from broader AI resistance because of the broader potential for job loss, corporations giving it a bad reputation, concentration of power, etc. "

Both the extreme, complete anti-AI narratives and the naively complete pro-AI narratives, with their insane oversimplifications and selection+confirmation biases, currently running through our societies are often so locked into one sticky, concrete perspective, focusing on just part of the whole story and completely ignoring everything else and all other perspectives, not seeing the nuance that exists in everything. The effects of irrational political polarization around a scientific topic, that is, a technology used in the context of capitalist incentives, in their full power. And we've been there already with so many other scientific/technological topics in the past.

This is a problem with any societal polarization in general. At one point I was digging into all the neuroscience of this that I could find. In machine learning this is exactly a form of overfitting, just at the level of society... 😄

If the industrial revolution made human strength much more irrelevant, and the artificial intelligence revolution might make human intelligence much more irrelevant, what becomes the next more relevant thing?

Me doing math with my silly little brain while expecting AI to automate math soon

AI dystopia risk vs uncontrollable rogue AGI existential risk: a rigorous rationalist Bayesian analysis of priors? I've read and listened to an insane number of hours of discussions and arguments about the trade-off of benefits and drawbacks, the odds of the various risks, how they interact with each other, and so on 
(which I still do from time to time), had a few mental breakdowns from it, and decided it's nearly unpredictable, but the power-concentration dystopia still somehow feels more likely to me than an uncontrollable rogue AGI causing catastrophic harm on its own, at least when I haven't just been listening to, say, Eliezer. Mostly I try to extrapolate the trends so far, and I feel like the power-concentration dystopia has more data points. Probably my favorite discussion on this topic, which shaped my view quite a lot: [https://www.youtube.com/watch?v=0zxi0xSBOaQ](https://www.youtube.com/watch?v=0zxi0xSBOaQ)

I find it interesting how many times I've seen claims from both sides that the other side makes no sense :kek: Both make sense to me, but they live in pretty different mental worlds tbh 😄 One of the main cruxes of the e/acc framework, it seems to me, is that P(stable totalitarianism | AI regulation) > P(catastrophic harm by rogue AI | no AI regulation), thus, minimize AI regulation, while the other side has the inequality reversed. But I think there are more cruxes too, like the strongly empiricist vs rationalist epistemology about predicting future events (believing something very weakly until you see it) and the emphasis on the unpredictability of the long-term future. And I think that e/acc correlating with libright, while the "other side" in this discussion correlates more with authleft, also plays a huge role, which is heavily reflected in their values 😄 Another dichotomy, in my view, is strong distrust of government vs strong distrust of corporations: whom do you allocate the trust about all of this to 😄 Some would rather have a corporate dictatorship than a classical government dictatorship, and some argue hard for total decentralization of power, so that nobody can abuse it and rule over others. Total decentralization isn't possible, but you can move in that direction, minimizing any risk of abuse of individual power by decentralizing as much as feasible. Personally, though, I'd still quite like the police, for example, to exist, so I don't go that far 😄 Open source is one path, collective 
decentralized training is another path, for which infrastructure is starting to emerge. But the problem is that it's mainly corporations that have access to gazillions of GPUs, so that would somehow have to be sabotaged. That's the disadvantage in this worldview. China is already using open source at scale, but it also gives open source back to the world (like DeepSeek, which just came out and looks like it's only 6 months behind the leading closed-source AGI labs 🤔 https://fxtwitter.com/teortaxesTex/status/1871933391823949942 , https://x.com/arankomatsuzaki/status/1871950031554773428 [deepseek-ai/DeepSeek-V3-Base at main](https://huggingface.co/deepseek-ai/DeepSeek-V3-Base/tree/main) and QwQ in the new reasoning paradigm [QwQ: Reflect Deeply on the Boundaries of the Unknown | Qwen](https://qwenlm.github.io/blog/qwq-32b-preview/) ). There are also non-crypto decentralized AI projects https://fxtwitter.com/PrimeIntellect/status/1844814829154169038

In practice, though, I think the corporations will merge with government even more, which is already kind of happening; see the collaborations of OpenAI/Anthropic with the military national-security defense industry in recent months (Palantir), the collaboration of Musk and Trump, etc., and they will have even more power. Quite a few of Leopold's predictions are starting to come true [https://www.youtube.com/watch?v=zdbVtZIn9IM](https://www.youtube.com/watch?v=zdbVtZIn9IM) [Introduction - SITUATIONAL AWARENESS: The Decade Ahead](https://situational-awareness.ai/), or the fact that the US government is also starting https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/

I myself don't know whether I trust corporations more, or governments more, or people more (among whom there are some psychopaths), or whether I'm more afraid of China or of an uncontrollable rogue AI, etc. 
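The e/acc crux described above can be restated as a bare comparison of two conditional probabilities (my notation, just formalizing the two positions as stated, not endorsing either estimate):

```latex
% e/acc position: regulation-induced power concentration is the larger risk
P(\text{stable totalitarianism} \mid \text{AI regulation})
  \;>\; P(\text{catastrophic harm by rogue AI} \mid \text{no AI regulation})
\quad\Rightarrow\quad \text{minimize AI regulation}

% the "other side" flips the inequality, and with it the conclusion
P(\text{stable totalitarianism} \mid \text{AI regulation})
  \;<\; P(\text{catastrophic harm by rogue AI} \mid \text{no AI regulation})
\quad\Rightarrow\quad \text{regulate}
```

Written this way, the disagreement is visibly about the relative magnitudes of two hard-to-estimate conditionals, which is why the empiricist-vs-rationalist epistemology crux feeds directly into it.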
My view on this keeps changing.

>war-like AI

I mean, the AGI labs are collaborating more and more with the military, and China is increasingly starting to as well https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/ And autonomous drones have also accelerated massively in existing wars 😄 future looks fun. To avoid another mental breakdown from this, maybe I'll just use the ostrich algorithm and keep coping by enjoying the successes of beneficial applications of these technologies, e.g. in healthcare and in science, and enjoying cool nerdsniping intelligence and overall STEM research, under a technooptimist worldview, which is in large part a consequence of all of this :FeelsWowMan~1:

Or become the Bernie Sanders of AI https://fxtwitter.com/norabelrose/status/1873823909168242715 [By default, capital will matter more than ever after AGI — LessWrong](https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi) UBI?

How do we deal with more and more power in our world concentrating into the hands of fewer and fewer people, which usually (not always) benefits mainly those few people and not everyone? I think that a lot of power (economic, technological, political, social, governmental...) in the world must be decentralized so that everyone benefits from abundance as much as possible. I'm also for decentralization and demonopolization. But on the other hand, someone still has to somehow enforce that decentralization/demonopolization, otherwise a power-law distribution of how powerful individual entities are forms naturally. Then it comes down to whom you give that trust. I myself don't know whether I trust governments more, or the biggest companies more, or other people more (among whom there are also power-hungry people),... And on the technological side, supporting open source is nice, but there's the problem that access to the biggest compute, in the form of GPUs to run that code on (mainly AI), belongs more to corporations and billionaires than to normal people. 
Decentralized computing infrastructures for that are emerging, but they're still relatively quite weak. I feel like the bigger players that I (and you) mention just keep growing relatively more and more and gaining more and more power, all these counter-efforts aren't strong enough, and everyone else has less and less power. I often feel like something more radical is needed to break this trend. In some more easily automatable fields this is already starting to happen in practice. I'm for automation, but I want automation to serve people (like how we now have access to so much food much more easily than centuries ago), and not to serve mainly the tech giants who grow exponentially from it, more and more, mostly for themselves, and gain more and more power over people.

How this technology became so politicized and so connected to the right sabotages so many potentially rational debates about it. So many people completely absorbed by the oversimplifying, ultra-polarized culture-war narratives are often totally unable to talk about it without political emotions completely blinding their minds, and completely unable to think about any of the actual technical aspects.

It's similar to how problems in energy production are also full of polarized narratives.