"Alignment isn't real"
https://x.com/_Mira___Mira_/status/1819022881264902602?t=-bQNjxkzIvKC8Wq1ugDW8g&s=19
I see it as an iterative empirical engineering problem. In the beginning, picking better data, better inductive biases, better architectures etc. helps us make the model do what we want relatively more. Then fine-tuning, instruction tuning, RLHF, constitutional AI etc. help us make it do what we want even more. Recently, sparse autoencoders and other mechanistic interpretability techniques have been helping to causally steer various circuits and representations. I think we can iteratively engineer our way to better and better AI alignment (however you define that word) and steerability techniques. Understanding more deeply, mathematically and empirically, how the models work, and steering them, helps with safety, reliability, robustness and capabilities.
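As a concrete (and heavily simplified) example of the steering idea: a minimal sketch in PyTorch, assuming a toy MLP and a random stand-in for a feature direction that would in practice come from something like a sparse autoencoder; none of this is any particular library's API.

```python
import torch
import torch.nn as nn

# Toy 2-layer MLP standing in for a real transformer block stack.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))

# Hypothetical "feature direction" in the 32-dim hidden space; random here,
# but in practice it would come from e.g. a sparse autoencoder feature.
steering_direction = torch.randn(32)
steering_direction /= steering_direction.norm()
alpha = 4.0  # steering strength

def steering_hook(module, inputs, output):
    # Add the scaled feature direction to the hidden activations.
    return output + alpha * steering_direction

# Register the hook on the hidden layer; every forward pass is now "steered".
handle = model[0].register_forward_hook(steering_hook)

x = torch.randn(1, 16)
steered_out = model(x)
handle.remove()
unsteered_out = model(x)
print((steered_out - unsteered_out).norm())
```

The same forward-hook trick carries over to real transformer blocks; only the model, the hook location, and the source of the direction change.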
Supercompassion: supercharged theory-of-mind modelling of other sentient beings (it can be more accurate or just more active) through time and space, with a supercharged circle of concern, i.e. a much bigger boundary of self, and strong weights given to the prediction errors stemming from predicting other sentient beings experiencing negative valence, which generates a corresponding intensity of modelling, counterfactual model search, policies, plans and actions towards making those states better.
[Mother Buddha Love - Deconstructing Yourself](https://deconstructingyourself.com/mother-buddha-love.html)
[The Bodhisattva of Compassion - YouTube](https://www.youtube.com/live/SH-vKyLS910?si=jQmg3LVPnUtw2AST)
[Mettannealing](https://qri.org/blog/mettannealing)
[The Bayesian Brain and Meditation - YouTube](https://youtu.be/Eg3cQXf4zSE?si=8ZFIZCzXh3-eSUHU)
A big percentage of people are closer to LLMs than a lot of AI researchers think
[Do you think that ChatGPT can reason? - YouTube](https://youtu.be/y1WnHpedi2A?si=fGeifiRFL5CHDiGw)
If we crack the open-ended algorithm of evolution, the intelligent design hypothesis will become even less likely
https://x.com/MLStreetTalk/status/1819432227375084005?t=OnK0cOF3hIh5aQPai_xo-w&s=19
Compare AI today and AI 5 years ago. Map the possible trajectories of AI development in the next 5, 10, 20, 50, 100, 500, 1000 years. Don't be stuck in the present with your predictions. The close future will be wild.
When will first mind upload happen?
I don't have ADHD, I have ADFullHD, or AD4K!
GraphRAG sounds promising, I just tested it for the first time. Can't wait for other neurosymbolic approaches fundamentally embedded into the architecture or using LLMs in a composite system! Better interpretability of neurosymbolic systems will also enable better steerability and generalization, and therefore more novel thoughts!
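A toy sketch of the graph-augmented retrieval idea (not Microsoft's GraphRAG implementation; the entity graph, relations, and the stubbed prompt below are all made up for illustration):

```python
import networkx as nx

# Toy knowledge graph: nodes are entities, edges carry relation labels.
g = nx.Graph()
g.add_edge("sparse autoencoder", "interpretability", relation="used_for")
g.add_edge("interpretability", "steerability", relation="improves")
g.add_edge("steerability", "alignment", relation="supports")

def retrieve_context(query_entities, hops=1):
    """Collect graph facts within `hops` of the entities mentioned in the query."""
    facts = []
    for entity in query_entities:
        if entity not in g:
            continue
        for neighbor in nx.single_source_shortest_path_length(g, entity, cutoff=hops):
            if neighbor == entity:
                continue
            # Walk the shortest path to keep the relation labels along the way.
            path = nx.shortest_path(g, entity, neighbor)
            for a, b in zip(path, path[1:]):
                facts.append(f"{a} --{g[a][b]['relation']}--> {b}")
    return sorted(set(facts))

context = retrieve_context(["sparse autoencoder"], hops=2)
prompt = "Answer using these graph facts:\n" + "\n".join(context) + "\n\nQ: ..."
print(prompt)  # in a real composite system this prompt would go to an LLM
```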
https://x.com/bengoertzel/status/1820004547752022525?t=eYT-wZd90jpFxda1pWprQA&s=19
Liberate the sentient beings who don't play, or aren't as successful in, the economic games from the suffering caused by the neofeudalist properties of the current system
[GHB - PsychonautWiki](https://psychonautwiki.org/wiki/GHB)
https://imgur.com/a5dyBNm
This substance, quite similar to alcohol in its psychological effects, is starting to spread quite a lot in America.
It would be nice if someone who's already addicted to this type of depressant substance were at least on something less destructive than alcohol...
Videogames are fascinating in the sense that in order to build some electronics from scratch, you don't have to figure out Maxwell's equations from scratch; you just click a bunch of buttons, put some items together, and get your reward that way.
I love spinach + Monosodium Glutamate with Piracetam, L-Tyrosine, Probiotic Complex, Alpha GPC, Korean Ginseng, Ginkgo Biloba, Lion's Mane Mushroom, Acetyl Carnitine, Rhodiola Rosea, DMAE, Phosphatidyl Serine, Curcumin, Piperine, Trans-Resveratrol, Pterostilbene, L-Glycine, Whey Protein, Oat Flour, Pea Protein, Ground Flaxseed, Tapioca Starch, Brown Rice Protein, Sunflower Oil Powder, Potassium, Calcium, Iodine, Corn Starch, Lutein, Medium-Chain Triglyceride Powder (from Coconut), Faba Bean Protein, Sucralose, Omega 3, Iron, Magnesium, Vitamin B3, B5, B9, B12, B6, B2, B1, D, C, K, A, E, Zinc, Rutin, Manganese, Copper, Selenium, Biotin, Molybdenum, Chromium
Feed me predictive models compressing and deflating the seeming computationally irreducible complexity of reality.
I just want a future with a lot of sentient systems with deeply fulfilling experiences. I don't want dystopias. I don't want catastrophes. I don't want extinction.
>there is so much unfulfillment and suffering in the world right now and potential suffering in the future
>i have so little agency to help or prevent it
>sense of hopelessness, defeatedness and helplessness is appealing
>but i will try my best anyway
In the future you will be able to choose the degree of intelligence and suffering of your babies and yourself using genetic engineering, brain editing, and other biotechnology
"
Funding for UBI doesn't have to come only from taxes: taxes on rich entities, corporations, VAT, land tax, consumption tax, an automation/machine tax, a carbon tax and other environmental taxes, or other tax incentives for less harm; I'd also take a higher tax depending on how much the profit is based on reducing other people's freedom.
You can more efficiently optimize and redirect existing state/private spending/subsidies on education, sponsored healthcare, welfare (i.e. unemployment support, disability pension, old-age pension, other support for students or various social groups), military spending and other spending, and make existing UBI-like programs more efficient. You can take money from unnecessary bureaucracy.
You can generate almost-free resources through more radical automation of the processing of natural (and synthetically engineered) resources, and redistribute those as universal basic services.
Or money can be printed and borrowed (which is probably not the way to go).
Technically, nationalization could be done and the profits used for UBI.
If someone trusts the state less (which I understand quite well; I oscillate between hating the state and "actually, the state might not be that bad"), you can have non-state UBI. Centralized or decentralized. There are attempts: various non-state currencies, decentralized UBI (trust networks, crypto, DAOs), funding from philanthropists, etc. etc., which is now spreading more in the American discourse, because trust in the state is much lower there than here.
You can make the charities/nonprofits etc. that implement something like UBI, such as GiveWell, more efficient and stronger; they have funding and resources from plenty of state, supranational, and private entities.
Or, most radically, overhaul how money is created, circulates, and gets distributed. Or replace the current monetary system with a different reward system. More local currencies. A UBI currency. Public ownership of the economy. Replacing it with a decentralized currency. Community Exchange System. Resource-based economy. Network states. Etc. There are lots of ideas. Silicon Valley madmen are thinking about universal basic compute, lol.
"
*starts getting lost in combinatorial explosions of possible definitions of "understand", "reality", "defining" etc.*
AI already automates, or has the potential to automate, so many of the jobs people are doing, but so many people in STEM don't live outside their bubble and don't see it. Talk to normies and you'll see, lmao.
OpenAI's moat is its first-mover advantage
I'm very interested in the fundamental nature of reality, how the structure of reality works, and how we can measure structure. It's really fascinating how in the world you can get intelligence out of just physics on its own, how humans and organisms can problem-solve. It shouldn't be possible, from just molecules self-organizing, to get intelligent agents with planning, goal-making, active inference, information compression in the visual cortex and in the cortex in general at an abstract level, and so on. And then we are trying to replicate it in machines, and I want to know all the technical details related to this. It's still so confusing to me that you can have some system that takes some input data and somehow works with that information to predict the state of the world, to predict the sensor data that comes afterwards, or how deep learning does this; it's very fascinating. And how in deep learning you have lots and lots of data, you feed it to the model, and that's curve fitting, and it quickly finds a local minimum that partially generalizes and is often flat. This blows my mind every day, and I really, really wonder how all the equations in physics, information theory, complex systems, machine learning, statistical mechanics work together.
I'm extremely fascinated by the fundamental nature and structure of reality, particularly how we can measure and understand it. It's incredible to think that intelligence can emerge purely from the principles of physics. The ability of humans and other organisms to solve problems and make plans seems almost miraculous when considering that it's all derived from molecules self-organizing.
I'm especially intrigued by how these processes occur at an abstract level in the human brain, such as how the cortex and other areas compress and process information. The endeavor to replicate these cognitive functions in machines adds another layer of complexity and fascination. I want to understand the technical details of how systems can take input data and predict future states of that input data. I want to understand how deep learning models work so well with their vast amounts of data and curve-fitting capabilities, finding local minima that generalize surprisingly well and often sit in flat regions of the loss landscape.
The convergence of physics, information theory, complex systems, machine learning etc. is a puzzle that amazes me daily. Understanding how all these equations and principles interconnect to explain intelligent behavior is a pursuit that continues to captivate and inspire me.
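A minimal sketch of the curve-fitting picture, assuming a tiny PyTorch MLP fit to noisy sin(x) samples, plus a crude random-perturbation probe of how flat the found minimum is (illustrative only, not a rigorous sharpness measure):

```python
import torch
import torch.nn as nn

# Toy curve fitting: a small MLP fit to noisy samples of sin(x).
torch.manual_seed(0)
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Crude flatness probe: how much does the loss change under small random
# weight perturbations? Flatter minima tolerate larger perturbations.
def perturbed_loss(scale):
    with torch.no_grad():
        losses = []
        for _ in range(20):
            backup = [p.clone() for p in model.parameters()]
            for p in model.parameters():
                p.add_(scale * torch.randn_like(p))
            losses.append(loss_fn(model(x), y).item())
            for p, b in zip(model.parameters(), backup):
                p.copy_(b)
        return sum(losses) / len(losses)

print("loss at minimum:", loss.item())
print("loss after small perturbations:", perturbed_loss(0.01))
```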
Is the funnel plot of the space of your models nicely symmetrical?
x probably causes y because the funnel plot in my meta-meta-analysis is symmetric < x 100% causes y because my divine intuition (my neighbor) told me so
https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/
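For concreteness, a small simulation of what a symmetric (bias-free) funnel plot looks like; the true effect, study sizes, and noise model are all made up for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated meta-analysis: each "study" estimates the same true effect with
# noise that shrinks as the study gets larger (higher precision).
rng = np.random.default_rng(0)
true_effect = 0.3
n_studies = 200
sample_sizes = rng.integers(20, 2000, n_studies)
standard_errors = 1.0 / np.sqrt(sample_sizes)
estimates = true_effect + rng.normal(0, standard_errors)

# With no publication bias, the scatter forms a symmetric funnel around the
# true effect; asymmetry (e.g. missing small null/negative studies) hints at bias.
plt.scatter(estimates, standard_errors, s=8)
plt.axvline(true_effect, color="red")
plt.gca().invert_yaxis()  # convention: more precise studies at the top
plt.xlabel("effect estimate")
plt.ylabel("standard error")
plt.title("Funnel plot of simulated studies")
plt.show()
```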
I wonder about all the advantages for OpenAI of partnering less with Microsoft and Apple and partnering more with the US govt. Less daddy-Microsoft compute, more daddy-US-govt citizen data and funding for building their own compute? Does this attract more or less talent? Easier regulatory capture? To what degree is the US govt more powerful than big tech?
I'm all for preventing unaligned superintelligence and other similar failure modes, and I think we need more steerability research, which I want and am slowly trying to help work on. But from how I understand science, I would never be satisfied with a safety theorem etc. in a vacuum, without the theorem being empirically tested to see if it actually does what it claims. There seems to be this gap: don't build it until it's proven safe, but the safety hypothesis has to be tested on the system being built, and it's never 100% certain that the hypothesis will hold.
https://x.com/burny_tech/status/1821300025286111528?t=MnzyhpN9VmkioaV-p0rYbQ&s=19
Soon we'll be duplicating and merging layers in biological systems too, and duplicating and merging biological and nonbiological systems together
https://x.com/maximelabonne/status/1820746013503586669?t=Q9USR0BhKs3abEq37TQ-3A&s=19
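On the machine-learning side, a toy sketch of what "duplicating layers" can mean, assuming a plain PyTorch stack of blocks rather than any specific merging library:

```python
import copy
import torch.nn as nn

# Toy "model" as a stack of identical blocks.
blocks = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])

# Depth-upscaling by duplication: repeat a slice of layers (here blocks 1-2),
# in the spirit of passthrough/frankenmerge-style model surgery.
duplicated = nn.ModuleList(
    list(blocks[:3]) + [copy.deepcopy(b) for b in blocks[1:3]] + list(blocks[3:])
)
print(len(blocks), "->", len(duplicated))  # 4 -> 6 layers
```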
Everything happens always all the time at once
Nothing ever happens
Everything happens always all the time at once
Nothing ever happens
Everything happens always all the time at once
Nothing ever happens
Everything happens always all the time at once
"
I understand and interact with both the pause-AI and accelerate-AI sides and other points on this spectrum (which is relatively polarized), but it fascinates me how often both sides accuse each other, in a one-dimensional way, of being evil paid destroyers of humanity (though I think the pause side has much better debating skills), while individuals on both sides are often caringly fighting for the future of humanity, just using different strategies that come from different models of how the world works and how the future could unfold.
There are so many complex interactions of so many incentives that form the beliefs of the individual people synchronizing into tribes on this topic, which then generates the corresponding actions.
Different emphasis on assigning realness to things that are and to things that could happen, aka rationalism vs empiricism. Different priors about how things can be dangerous. Different definitions of dangerous. Different prioritization of different risks. Different degrees of risk-taking behavior. Different priorities. Different focus on advantages vs disadvantages. Different past experiences and influences. Different axiomatic values and political beliefs. Social/financial etc. security. Different vibe-level aesthetics toward different types of thought. Etc. Etc.
Also, there seems to be a strong correlation with polarization in left vs right politics, libertarianism vs authoritarianism, accelerating technology in general vs slowing it, and sometimes, though less frequently, growth vs degrowth... Lots of pause-AI nerds are very techno-optimistic but treat the current AI trajectory as an exception because of the alignment problem, corporate-dystopia risk, the sentience problem, etc.
"
https://x.com/Kurz_Gesagt/status/1820868757285138587?t=VCGBxxhT1gnOIYtlCWkw4w&s=19
[A.I. - Humanity's Final Invention? - YouTube](https://youtu.be/fa8k8IQ1_X0?si=-pk-ELQRJBkR8Jk1)
Intellectual empathy acceleration
I love exploring the fundamental structure of reality and intelligence
"I" "identify" as whatever the neural correlate of consciousness is.
This is the physicalist part of the brain speaking, instead of the metaphysical-fluidity-anarchy one.
So every time I see people battling over genders or other identities, I'm like, whut, that goes over my head, even though I get it from an evolutionary neuroscientific perspective.
I'm currently in a biological machine form that would love to be able to shapeshift into arbitrary physical forms with whatever gender/s, whatever species, whatever biology/nonbiology, whatever substrate, as long as experience persists, but even that might be shapeshiftable in complex nonlinear ways.
Actually, when I think about it, I think I've had a few experiences of gender before, both genders, and on a spectrum. What might make sense to me when it comes to gender is genderfluid that's mostly agender.
I want to upgrade wellbeing genes to eradicate unnecessary suffering, and intelligence genes to (not only) make sure biological intelligence catches up to artificial intelligence and to accelerate STEM and the chances of smarter decision making about our future (and possibly edit some evolutionary biases too, ones that made us relatively intelligent in the past but serve us less well now). Morphological freedom!
I tend to forget that so many tricks we use in deep learning, in for example transformers, are less than 10 years old, wtf
Do you do frequent normalizations in your mental frameworks or do your gradients love to explode at slight perturbations?
I like this general neuropsychological model where every person can be understood as a system of homeostats of needs: every person (due to genetics and environmental factors) needs different amounts of things to fulfil these various needs, which everyone has set to different levels with differently strong parameters, and has a finite amount of compute, metabolic energy, processes, time and other resources to bring these various homeostats into a (baseline) satisfied equilibrium. Then there are aspects like needs that can be mutually incompatible, fighting, or contradictory ("I want socialization but I don't want socialization" :D), trade-offs formed between components for a better equilibrium of the whole, etc. Each homeostat fires error or happiness signals and demands attention according to how neglected it is. Various learned behavioral tactics fulfil these needs to different degrees. Maslow's hierarchy of needs is similar, and it can also be understood as a similar system of interconnected homeostats.
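A toy simulation sketch of that homeostat picture, with made-up needs, setpoints, weights, decay rates and energy budget (purely illustrative):

```python
# Toy homeostat model of needs: each need has a setpoint, a level that drifts
# away over time, and a weight; attention goes to the most neglected need.
class Need:
    def __init__(self, name, setpoint, weight, decay):
        self.name, self.setpoint, self.weight, self.decay = name, setpoint, weight, decay
        self.level = setpoint

    def error(self):
        # Weighted error signal: how far below its setpoint this need is.
        return self.weight * max(0.0, self.setpoint - self.level)

needs = [
    Need("social contact", setpoint=1.0, weight=1.2, decay=0.05),
    Need("rest", setpoint=1.0, weight=1.0, decay=0.08),
    Need("competence", setpoint=1.0, weight=0.8, decay=0.03),
]

energy_per_step = 0.15  # finite resources to spend per time step

for step in range(10):
    for n in needs:
        n.level = max(0.0, n.level - n.decay)      # needs drift away from their setpoints
    target = max(needs, key=lambda n: n.error())   # attend to the most neglected need
    target.level = min(target.setpoint, target.level + energy_per_step)
    print(step, target.name, [round(n.error(), 2) for n in needs])
```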
https://miro.medium.com/v2/resize:fit:2000/1*QsCiLntjZB5jj2QBALDmiA.png
https://medium.com/hackernoon/from-computation-to-consciousness-can-ai-reveal-the-nature-of-our-minds-81bc994500ab
You can tell when deep learning code was written by a metamathemagician or by an empirical alchemist engineer
UBI could potentially increase productivity by allowing people to invest in education, start businesses, or take risks they otherwise couldn't afford. Increased productivity can offset inflationary pressures by increasing the supply of goods and services.
UBI is a solution for when automation starts creating high unemployment. If the economy has unused capacity (e.g., high unemployment, underutilized resources), increasing demand through UBI might lead to higher production rather than higher prices.
Governments can implement policies to mitigate inflationary pressures. For instance, rent controls or increased housing supply can prevent housing costs from rising excessively. Similarly, policies to boost productivity can help balance increased demand.
Empirically, some small-scale UBI trials have not led to significant inflation.
https://www.givedirectly.org/2023-ubi-results/
"
>where exactly will the resources for that UBI come from
I believe that existing technology is capable of paying for/creating abundance for all creatures on the planet, and that poverty/all-day wageslaving at something you hate (which is, for example, how my family lives) doesn't have to exist; there just aren't the incentives to redistribute the fruits of technology to everyone more.
It's true that poverty is falling in some places thanks to technology and the like, but it could be falling faster. And in other places it's actually rising, which it wouldn't have to do at all.
Meanwhile, the mini-study from Altman, i.e. the CEO of "Open"AI, shows that people actually wanted to work, but at the same time they looked for jobs they'd enjoy and find more fulfilling; they simply got more selective.
[Sam Altman's Basic-Income Study Is Out. Here's What It Found. - Business Insider](https://www.businessinsider.com/sam-altman-basic-income-study-results-2024-7)
Yeah, AI is mainly just another automation technology among the many from the information, industrial, and agricultural revolutions (so far)
České priority and the Effective Altruists are probably the think tanks closest to this in Czechia. They are strongly pro-UBI. But unfortunately they are very small, and given which tribes are currently dominant in politics, they have minimal influence. Most of them are connected to the Pirates (or some were members), who are also relatively closer to this than the others, but they are RIP. :(
The Effective Altruists are very diverse, though, in terms of what they work on
This is probably the best overview of the various problems that get the most attention there: [What are the most pressing world problems?](https://80000hours.org/problem-profiles/)
Of those, GiveDirectly directly does UBI
[GiveDirectly - EA Forum Bots](https://forum.effectivealtruism.org/users/givedirectly)
[How to Eradicate Global Extreme Poverty - YouTube](https://www.youtube.com/watch?v=2DUlYQTrsOs)
Universal basic income. Social support, unemployment support, more accessible healthcare, education etc. on steroids [Universal basic income - Wikipedia](https://en.wikipedia.org/wiki/Universal_basic_income)