Current AI systems are already superhuman at some things and baby-human at others, which is already a kind of alien intelligence. Sam is heavily investing in neuromorphic computing; I think that's going to be a big thing in a year or two. Analog architectures, and architectures closer to the brain in general, might come soon, giving lower energy consumption and hardware speedups. We might have to get really close to the brain using neuromorphic chips or similar hardware, or get even closer by implementing the forward-forward algorithm (see the sketch below), more nonlinearities in neurons like in biological neurons, resonance networks doing field computing, or top-down neuronal dynamics organizing control via local electric field potentials, which might give human-like energy efficiency and speed, or a human-like, less alien intelligence. Or we might pursue thermodynamic computing, which a few groups are working on, using stochastic bits to fully digest every single bit of computation out of the dynamics of physics, since the laws of algorithmic or classical information theory and statistical mechanics might be enough for a fully general intelligence that is better than humans at everything. Or we might reach human-level, or better-than-human-at-every-task level, with current architectures, data, software, and hardware, or with just tiny mutations of them given insane amounts of compute; or with completely different architectures on the same hardware, implementing search, an explicit symbolic hybrid world model with planning, or something completely different; or by hardcoding into the training process an economy of properly generalizing learned circuits using mechanistic interpretability. Or maybe the current architectures and approaches are enough, we don't even need insane compute, and we just need a few tricks with current LLMs. More empirical experiments, benchmarks, and mechanistic understanding of everything, please! Or maybe Penrose is right and we need a quantum-gravity quantum computer.

Is the power of top-down local field potentials organizing neurons needed for human-like AGI? It's probably needed for human-like intelligence, but not necessarily for human-level intelligence, as long as the laws of information theory and statistical mechanics hold for that system to reach those levels. The question is what circuits get learned versus the input-output mapping, with analysis of speed and consumption; I think the circuit level is still pretty hardware-agnostic. Consciousness is a different topic.
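Since the forward-forward algorithm is named above as one of the more brain-like directions, here is a minimal sketch of its core loop, under heavy assumptions: random toy data stands in for real and corrupted inputs, the sizes, threshold, and learning rate are arbitrary choices of mine, and I use mean squared activation as the goodness (Hinton's paper uses the sum). It illustrates the local-learning idea, not a faithful reimplementation.

```python
# Minimal sketch of forward-forward: each layer is trained purely locally
# (no backprop through the network) to have high "goodness" on positive
# (real) data and low goodness on negative (corrupted) data. Toy random
# data stands in for both here.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 500))   # one layer's weights
theta, lr = 2.0, 0.03                       # goodness threshold, step size

def forward(x):
    return np.maximum(x @ W, 0.0)           # ReLU layer

for step in range(100):
    x_pos = rng.normal(size=(32, 784))      # stand-in for real data
    x_neg = rng.normal(size=(32, 784))      # stand-in for corrupted data
    for x, positive in ((x_pos, True), (x_neg, False)):
        h = forward(x)
        g = (h ** 2).mean(axis=1)                # per-example goodness
        p = 1.0 / (1.0 + np.exp(-(g - theta)))   # p(example is positive)
        # local logistic-loss gradient: dL/dg is (p - 1) on positive
        # examples and p on negative ones; dg/dW = (2/K) x^T h
        coeff = (p - 1.0) if positive else p
        W -= lr * (2.0 / h.shape[1]) * x.T @ (h * coeff[:, None]) / len(x)
```

The hardware appeal is that each layer only ever needs its own activations, so nothing like a global backward pass has to exist physically.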
https://twitter.com/jankulveit/status/1736012613232841090?t=7prvFfOMVepc4Ky9gTX5nw&s=19 A page of informal distilled knowledge about "how GPTs work", "how they generalize", and "how they compare to humans", illustrated using #NeurIPS papers. Distillation is systematically undersupplied by the research ecosystem; incentives point toward novelty.

One pretty sensible way is to see LLMs as assemblies of probabilistic programs implemented on transformers. By "program" I mean, for example, a set of operations to "perform linear regression", "calculate modular arithmetic", "create a simple linear classifier", and so on. Also programs working basically as a memory/database, where asking for "London" gets you "UK". But there are also larger and more complex programs. [1] In contrast to normal coding, where programs are written, programs inside LLMs are learned. How and why? Intuitively, because of the prediction objective, and because of compression.

During training, the LLM is essentially asked to predict the continuation of a sequence of tokens. These are normally language tokens, but it's sometimes easier to understand what is going on if we start with just sequences of numbers. (LLMs are actually pretty good at this. [2]) For example, look at the sequence: 11, 14, 23, 1, 23, 1, 18, 6, 23, 1, 9, 23, 1, 6, 9, 11, 18, 23, 1, 17, 23, ? What's your guess about the next number? One pretty sensible answer is "1", because in all previous instances when the sequence contained "23", the next number was "1". You can imagine that if the LLM sees a lot of data like that, it learns a simple program: "if the number n is 23, predict 1".

Now look at the sequence: 3, 10, 5, 0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5, ? One sensible answer is "0", either because it seems the sequence has started repeating, or because in the previous instance what followed "5" was "0". Seeing a lot of sequences like this, you can imagine there are many programs competing to be learned! For example: "copy what was 12 tokens (one full period) before", or "if the number is 3, predict 10", and so on. Or, if you look carefully, there is a more abstract program: "new = (current + 7) mod 12" ("mod" stands for modular arithmetic). What will the transformer learn? The answer turns out to be "it depends" [3] (see the toy sketch below). https://twitter.com/burny_tech/status/1736407591633273342/photo/1
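To make the "competing programs" picture concrete, here is a toy sketch: a few hand-written candidate programs, each scored by how well it retrodicts the sequence above. The programs and the scoring rule are my own illustration, not anything from the cited papers.

```python
# Three candidate "programs" competing to explain the sequence, scored by
# how many of the observed transitions each one predicts correctly.
seq = [3, 10, 5, 0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10, 5]

def copy_period_12(s, i):        # "the sequence repeats with period 12"
    return s[i - 12] if i >= 12 else None

def bigram_lookup(s, i):         # "predict what followed this number last time"
    for j in range(i - 1, 0, -1):
        if s[j - 1] == s[i - 1]:
            return s[j]
    return None

def mod_arithmetic(s, i):        # "new = (current + 7) mod 12"
    return (s[i - 1] + 7) % 12

programs = {"copy 12 back": copy_period_12,
            "bigram lookup": bigram_lookup,
            "(x + 7) mod 12": mod_arithmetic}

for name, program in programs.items():
    preds = [program(seq, i) for i in range(1, len(seq))]
    hits = sum(p == t for p, t in zip(preds, seq[1:]) if p is not None)
    n = sum(p is not None for p in preds)
    print(f"{name}: {hits}/{n} on the data, next -> {program(seq, len(seq))}")
```

All three programs agree that the next number is 0 here; they only come apart on other sequences, which is part of why what the transformer actually learns "depends".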
covid recession, climate change, AGI, machine learning

Neural fields: Generalised Implicit Neural Representations [[2205.15674] Generalised Implicit Neural Representations](https://arxiv.org/abs/2205.15674) [Dr. Daniele Grattarola at NeurIPS - Generalised Implicit Neural Representations - YouTube](https://www.youtube.com/watch?v=v5NysEyZkl0)

Diffusion models for molecule synthesis.

[Quantum Computers cross 1000 Qubits Threshold! What does this mean? - YouTube](https://www.youtube.com/watch?v=XlCsi8zagNw)

Transhumanism, technology, intelligence, growth, expansion, complexity, adaptivity, resilience, safety, security, decentralization, freedom accelerationism.

The way to stop AGI+ASI and minimize all risks would be total regulation of big tech plus total control of the open source community, stopping all progress in this domain. The chances of that happening and succeeding feel nonexistent, and even if it happened, in practice it would IMO result in a terrible dystopia without freedom, thanks to bad actors getting a concentration of asymmetric sociopathic power, while unsustainable status quo dynamics or other existential risks would eventually kill all life. Whereas if we pursue AGI+ASI, even though there is also a risk of extinction (one that can be decreased by (cyber)security and safety research while building it, cultural/political transformation, and using technology to mitigate all other existential risks), there are increased incentives to build and upgrade decentralized open source democratic civilizational technology (maybe some really minimal governance, or decentralized non-governing institutions incentivizing freedom, wellbeing, and resilience against existential risks, would be good) as a defense against bad actors who want to destroy all life, or who want a concentration of asymmetric sociopathic power to non-altruistically rule over others. That creates collective resilience against the other existential risks, and that way we eventually get to a decentralized cosmic constellation of trillions of transhumanist and other sentient beings throughout the universe, resistant to total extinction, flourishing. But we really need to do all the political, cultural, and safe, controllable technological development correctly, increasing free decentralized adaptivity, security, and resilience, to prevent all the kinds of dystopias and catastrophes that would create eternal suffering or erase all of sentience. Either we don't grow and die locally from unsustainability and the other existential risks, or we expand, adapt, build resilience and security, upgrade ourselves, and conquer the cosmos in all sorts of groups resisting tyranny and extinction. It feels like our only hope for the survival and flourishing of all sentience, and for protopian or utopian visions.

https://twitter.com/MelMitchell1/status/1736405370220728364 Papers arguing against generalization. The papers themselves say there is some degree of generalization, not "no generalization". We are identifying generalizing circuits inside smaller models that can be directed. Saying they don't generalize at all, or that they fully generalize, is misleading either way.

https://twitter.com/burny_tech/status/1736632762453364738 Experience itself might be irreducible, but its contents (which are part of the experience in the QRI lens) are analyzable, predictable, and manipulable using the gazillions of tools we already have. I love panpsychism where each chunk of matter (with internal and external partitioning depending on which boundary/binding models you choose) has a degree of experience by default. There's a form of irreducibility in that there's both top-down and bottom-up causation, so the system both follows and creates the rules. According to the newest evidence, electric field potentials might be orchestrating the top-down causation, or certain brain regions like the mid-cingulate cortex or the thalamus, or some more abstract statistical force. Hebbian learning or Bayesian mechanics, for example, are lenses for looking at how the rules get created, internally or externally, bottom-up or top-down, directly or indirectly (a tiny sketch of the Hebbian lens is below). If existing empirical data doesn't fit into your world model and you have to deny its existence instead, that seems like the opposite of rationalism in the Bayesian sense.
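A tiny numerical sketch of the Hebbian "lens" mentioned above: a weight between two units grows when they fire together, so statistical structure in the activity writes itself into the connectivity. Everything here (firing rates, learning rate, decay) is made up for illustration.

```python
# Hebbian toy: the weight of a correlated pre/post pair settles higher than
# that of an uncorrelated pair, because co-activation drives weight growth.
import numpy as np

rng = np.random.default_rng(1)
eta, decay = 0.1, 0.05   # learning rate, passive forgetting

def run(correlated: bool) -> float:
    w = 0.0
    for _ in range(2000):
        pre = rng.random() < 0.5                  # presynaptic unit fires?
        post = (pre if correlated and rng.random() < 0.8
                else rng.random() < 0.5)          # postsynaptic unit fires?
        w += eta * (float(pre and post) - decay * w)  # Hebb + decay
    return w

print(f"correlated pair:   w ~ {run(True):.1f}")   # settles around 9
print(f"uncorrelated pair: w ~ {run(False):.1f}")  # settles around 5
```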
[Etched | The World's First Transformer ASIC](https://www.etched.ai/) Transformer hardware.

Eliezer vs LeCun: https://twitter.com/ESYudkowsky/status/1736615595830137181 One of them thinks a very slow takeoff, with easy, gradual, controllable AI progress like computers and the internet, is most likely. The other thinks an ultra-fast takeoff, with uncontrollable ASI causing deaths on a scale bigger than nuclear weapons, is most likely. I think reality will most likely be something in the middle, so let's upgrade our tools from mechanistic interpretability to steer these systems in the ways we want, and let's work towards preventing all the other risks as well, including not losing freedom and not slowing down progress too much, since progress will probably be needed for the long-term survival of sentience. Let's do everything to prevent concentration of power in the hands of sociopathic agents.

You can optimize for many (interrelated) objectives in your model of reality, from predictivity to connection to compassion to creating to pleasantness to raw existing to global impact, under some definition of impact and good.

I'd be in favor of optional state services in taxes. On top of that, the state tends to distribute that money extremely inefficiently, on things a large part of the population doesn't want (which democracy corrects a bit), or there's terrible corruption. But it's true that in today's hyperindividualistic society, getting people to pay for the common good is practically impossible, and statelessness would probably produce a militant dystopia instead; though that depends a lot on which culture takes hold and which controlling forces win, which eventually becomes a somewhat different form of state, or outright a new state.

AI might destroy everything we call society today, but we might be able to rebuild it afterwards.

Entropy, Veritasium: [The Most Misunderstood Concept in Physics - YouTube](https://youtu.be/DxL2HoqLbyA?si=2F0ugfPcLp56pm-0)

Omniperspectivity acceleration, tribalistic-polarization deceleration, nuance acceleration, multiplicity-of-compatible-processes acceleration, plurality acceleration.

Best unrestricted open source LLM: [Mixtral is Now 100% Uncensored 😈 | Introducing Dolphin 2.5-Mixtral 🐬 - YouTube](https://youtu.be/SGkaWMDKM9g?si=tnmvHXVJGgVOoHGx)

https://twitter.com/4Maciejko/status/1736745951442678239?t=BESroQa1aXHXH3_Q6kOvBA&s=19 Yud: no acceleration. Sama: top-down controlled acceleration. Beff: bottom-up controlled acceleration.

Technological identity anarchism is closest to me: I'm a physical pattern with potentially infinite upgrading capacity, theoretically not limited to the biological brain, able to merge with quadrillions of biological or artificial neurons or any other compatible physical substrate. The hard problem of consciousness and the boundary and binding problems hold the solutions to engineering this.

[The Information Theory of Aging | Nature Aging](https://www.nature.com/articles/s43587-023-00527-6) The Information Theory of Aging: the aging process is driven by the progressive loss of youthful epigenetic information, the retrieval of which via epigenetic reprogramming can improve the function of damaged and aged tissues by catalyzing age reversal.

[The tumor suppression theory of aging](https://linkinghub.elsevier.com/retrieve/pii/S0047-6374(21)00155-X) The tumor suppression theory of aging. This validation might be speculative, but it more or less has to be true, because single-cause mechanistic theories of aging are inherently stupid. I feel like this is a repeating pattern in biology: we think we've figured out the biological correlate of some phenomenon, or the function of some part, and then we find out it has 10,000 correlates or 10,000 flexible functions and we're still scratching the surface. Hence all the models of aging or wellbeing using all sorts of analytical tools from all sorts of levels of analysis. Life is an extremely complex, chaotic, nonlinear, stochastic system of interactions everywhere, on all sorts of levels, duct-taped together by evolution.
Walk around and ask ChatGPT how everything you observe and touch (or you yourself) works from first principles in physics, on as technical and mathematical a level as possible, with all the laws in all the interacting layers of abstraction on top of it creating the different scientific fields and engineering disciplines. Any time you don't know a word or a sequence of words, or want to dig as deeply as possible into some concept to fully understand it out of curiosity, ask it to explain or expand again and again, optionally drawing things by plotting in Python or generating images in DALL-E, while ideally also double-checking against Wikipedia, books, lectures, or other sources, or making GPT-4 or other LLMs search for it in a document or on the internet (a minimal API version of this loop is sketched below)!

"You're an expert scientist and engineer. How does this process of writing on my phone screen, with an operating system on hardware that I'm touching with my fingers, work from first principles in physics, on as technical and mathematical a level as possible, with all the laws in all the interacting layers of abstraction on top of it creating different scientific fields and engineering disciplines? Or artificial intelligence? Computers? The brain? Society? The universe?"

[ChatGPT](https://chat.openai.com/share/a20c2e7b-06ec-456b-9bbb-5ad06d7e254b) [ChatGPT](https://chat.openai.com/share/035b8ffa-83a2-4069-ac5c-b5e99c838db5) [ChatGPT](https://chat.openai.com/share/f4c028e0-92b5-4265-80bc-140a32d0cfe6) [ChatGPT](https://chat.openai.com/share/e605968f-68ae-441a-9539-301b661dd2f8) [ChatGPT](https://chat.openai.com/share/71b07154-8bd4-4292-a874-32f96637b40b) [ChatGPT](https://chat.openai.com/share/644fb896-97f9-4d42-8338-8edefe205712) [ChatGPT](https://chat.openai.com/share/0adc687d-b3a0-4b2e-b81f-782c45474ae9) [ChatGPT](https://chat.openai.com/share/db921339-4cea-48ea-aaa3-62b2d5392223) [ChatGPT](https://chat.openai.com/share/a9f2e802-2cc4-468b-bb88-b4f30716e701) [ChatGPT](https://chat.openai.com/share/96436429-139b-4751-b73b-3f9d31797bbb) [ChatGPT](https://chat.openai.com/share/a86f86c3-4d1c-4469-83fe-aa7b9ec2ca7c) [ChatGPT](https://chat.openai.com/share/60669030-1b4d-4ca2-a2ed-b0c6fcbafdac) [ChatGPT](https://chat.openai.com/share/c3c02fa6-227b-47e0-8613-d91918753597) [ChatGPT](https://chat.openai.com/share/2939acf7-684c-457e-ad60-df6937e7d65e) [ChatGPT](https://chat.openai.com/share/2369a022-1412-446d-8c61-946b01c151c1) [ChatGPT](https://chat.openai.com/share/18dbb255-863b-4151-9df1-63e7a0566140) [ChatGPT](https://chat.openai.com/share/024f1414-476b-417b-8359-e7bfca83a483) [ChatGPT](https://chat.openai.com/share/54901c74-5873-4083-a18a-9130549e01ed)

You're an expert scientist and engineer. Explain how the brain works from first principles in physics, on as technical and mathematical a level as possible, with all the laws in all the interacting layers of abstraction on top of it creating different scientific fields and engineering disciplines.

OS and BIOS: [ChatGPT](https://chat.openai.com/share/fe04d472-44d0-4dda-9954-accf524fa341)

Fluctuation theorem: [ChatGPT](https://chat.openai.com/share/f28f5974-31bb-4ed5-9463-fddf0b59d135)

Math of spacetime: [ChatGPT](https://chat.openai.com/share/bf710fb0-a7f7-4c15-9aaf-92d7430df255)
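For doing the same loop through the API instead of the chat UI, here's a minimal sketch. It assumes the `openai` Python package (v1 client) and an `OPENAI_API_KEY` in the environment; the model name is just an example, and the prompts are adapted from the note above.

```python
# Recursive "explain everything from first principles" loop: ask once, then
# keep digging into whatever concept you type in, carrying the full history.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You're an expert scientist and engineer."},
    {"role": "user", "content": (
        "How does this process of writing on my phone screen work from "
        "first principles in physics, on as technical and mathematical a "
        "level as possible, with all the laws in all the interacting layers "
        "of abstraction on top of it creating different scientific fields "
        "and engineering disciplines?")},
]

while True:
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})
    follow_up = input("\nConcept to dig into (empty to stop): ")
    if not follow_up:
        break
    messages.append({"role": "user",
                     "content": f"Explain '{follow_up}' in more depth, "
                                "again from first principles."})
```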
Free Mistral LLM: https://twitter.com/JosephJacks_/status/1736070261433303347?t=3FCwQRPGQ8rc0cJQja6NkA&s=19

LLM leaderboard: [LMSys Chatbot Arena Leaderboard - a Hugging Face Space by lmsys](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)

Mixtral uncensored: a locally runnable, fully open source model better than GPT-3.5 Turbo, without any safety restrictions 🤔 I feel like all these AI model safety attempts go down the drain when a new open source model better than GPT-3.5 had its safety removed a few days after its release, and a GPT-4-level open source model is planned for next year and beyond. If it got regulated, I feel like it would happen "illegally" anyway, with some anonymous decentralized open source org posting a torrent link on 4chan or the deep web. Also, it's interesting that France in the EU is the open source AI king, instead of San Francisco in America. https://twitter.com/rohanpaul_ai/status/1736827830971867312 https://openai.com/safety/preparedness https://twitter.com/burny_tech/status/1736857487008260314

On the other hand, an extreme amount of the growing e/acc crowd is in SF 😄 Hmm, if Beff weren't betting on thermodynamic AI (though maybe it pays off for him and he's on top within a few years) and instead built a decentralized e/acc org doing LLMs with the current SoTA methods, I believe they'd climb up fast. This is mixing my hopium levels: for part of me it increases freedomium, and for another part it increases xriskium. Not sure which copium to consume.

[God-Tier Developer Roadmap - YouTube](https://www.youtube.com/watch?v=pEfrdAtAmqk)

Like, on one hand, yay freedom and the possibility of research; on the other hand, non-yay, this will definitely also get used for pretty destructive use cases. If some extremely capable model gets released this way in the future, I have a feeling it will cause quite a lot of chaos. But however things develop, I already put a high probability on AI (and the supporting technologies and factors) at minimum thoroughly dissolving, or already dissolving, everything we call society. I've almost made peace with the dissolution of the status quo (though maybe I'm predicting wrongly, because the future is essentially unpredictable given the extreme, wholly ungraspable complexity of the world, and chaos theory). Now let's just hope it isn't the total extinction of humanity, and that the reaction that follows is some form of transhumanist decentralized adaptation instead of dystopian tyranny or chaotic wars everywhere. https://fxtwitter.com/burny_tech/status/1736614543684407473 https://nitter.net/burny_tech/status/1736614543684407473 [My techno-optimism](https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html) The dissolution or radical transformation of society might be a better option than what exists now changing only a little, because I think the status quo is unsustainable in the long run. For example, deepfake content is already close to the limit of indistinguishability for most people who don't examine every pixel. https://fxtwitter.com/channel1_ai/status/1734591810033373231

This is a nice list of AI risks: [AI Risks that Could Lead to Catastrophe | CAIS](https://www.safe.ai/ai-risk) Manage the risks and harness the potential for transhumanist and other utopian/protopian dreams, as with any powerful technology that can create big catastrophic accidents or be used by malicious people. Incentives for profit and power are only sometimes aligned with incentives to make life better; let's strengthen the life-improving incentives. How to prevent the concentration of power in the wrong hands without causing a concentration of power in some other way is also a mess. I want technological, cultural, economic, and similar freedom, but also safety, so that all of this doesn't get undermined by all the possible risks; I believe the two can be joined, e.g. through improving defense and (cyber)security.

If immortality tech gets developed, democratized, and made cheap before you die, would you go for it?
Average AI researcher monitor: https://imgur.com/6R5ftmK

[[2312.07046] Rethinking Compression: Reduced Order Modelling of Latent Features in Large Language Models](https://arxiv.org/abs/2312.07046)

AI generates proteins with exceptional binding strengths: https://phys.org/news/2023-12-ai-generates-proteins-exceptional-strengths.html