[Anil Seth: Neuroscience of Consciousness & The Self - YouTube](https://youtu.be/_hUEqXhDbVs?si=6Qal8QeCFksXDsJB)
Science is about explanation, prediction, control https://twitter.com/hi_tysam/status/1729607688064279028?t=rvam-MvRQMheC-YbW45ILg&s=19
LLMs necessarily learn an implicit world model in their performance limit.
P-zombies: why should any physical dynamics have a conscious experience at all?
Nerds will build a digital God instead of going to therapy.
[Millions of new materials discovered with deep learning - Google DeepMind](https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/)
[An autonomous laboratory for the accelerated synthesis of novel materials | Nature](https://www.nature.com/articles/s41586-023-06734-w)
[[2311.16502] MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI](https://arxiv.org/abs/2311.16502)
[Learning few-shot imitation as cultural transmission | Nature Communications](https://www.nature.com/articles/s41467-023-42875-2)
Multimodal LLM with access to the OS: [GitHub - OthersideAI/self-operating-computer: A framework to enable multimodal models to operate a computer.](https://github.com/OthersideAI/self-operating-computer)
OpenFold (like AlphaFold) through a mechanistic interpretability lens [Mechanistic Interpretability - Stella Biderman | Stanford MLSys #70 - YouTube](https://www.youtube.com/live/P7sjVMtb5Sg?si=PxTnM1QWyMRZ57vp&t=625): during training it goes through phase changes. It first learns to predict a 2D representation, then learns to inflate it into a 3D representation, and then fills in the details, slowly compressing as many features as possible into as few dimensions as possible, most likely by learning various (still undiscovered) circuits that help generalization.
Weather ML: [ECMWF | Charts](https://charts.ecmwf.int/?facets=%7B%22Product%20type%22%3A%5B%22Experimental%3A%20Machine%20learning%20models%22%5D%7D)
Indirect vs. direct realism. Objective truth is just shared hallucination. Colors don't exist on their own; they're just electromagnetic wavelengths our eyes respond to, and we see only a subset of them.
I think that even if we find the most predictive model of the brain, we will still never answer the question of why this particular physical mechanistic dynamics is conscious at all, and why not any other one. All existing attempts and their explanations feel like guessing to me.
[The Power of Prompting - Microsoft Research](https://www.microsoft.com/en-us/research/blog/the-power-of-prompting/) Prompting is all you need? Prompt engineering with GPT-4 to beat specialist models on medical questions, using kNN-based few-shot example selection, GPT-4-generated chain-of-thought prompting, and answer-choice shuffled ensembling — a minimal sketch of the example-selection step below.
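A minimal sketch of that kNN few-shot selection step, assuming a toy character-histogram `embed` as a placeholder for a real sentence-embedding model (the chain-of-thought generation and answer-choice shuffling parts of the pipeline are omitted):

```python
# Sketch of kNN-based few-shot example selection: pick the training
# (question, answer) pairs nearest to the query and build a prompt from them.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy placeholder embedding, just to make the sketch runnable;
    # in practice this would be a real embedding-model call.
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def select_few_shot(query: str, train_pool: list[tuple[str, str]], k: int = 5):
    """Return the k training pairs with highest cosine similarity to the query."""
    q = embed(query)
    sims = [float(q @ embed(question)) for question, _ in train_pool]
    top = np.argsort(sims)[::-1][:k]
    return [train_pool[i] for i in top]

def build_prompt(query: str, shots: list[tuple[str, str]]) -> str:
    blocks = [f"Q: {q}\nA: {a}" for q, a in shots]
    return "\n\n".join(blocks) + f"\n\nQ: {query}\nA:"
```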
[God Help Us, Let's Try To Understand The Paper On AI Monosemanticity](https://www.astralcodexten.com/p/god-help-us-lets-try-to-understand)
The holy grail of mechanistic interpretability is to train LLMs from scratch so that they pass these benchmarks, instead of taming them with RLHF after they're trained — so that the circuits encoding these phenomena occur inside as little as possible, rather than the model just learning not to use them, or doing other mental gymnastics we don't understand and can't predict or control when we can't interpret them. That then leads, for example, to deception.
LLM deception: [[2307.16513] Deception Abilities Emerged in Large Language Models](https://arxiv.org/abs/2307.16513) [[2311.07590] Large Language Models can Strategically Deceive their Users when Put Under Pressure](https://arxiv.org/abs/2311.07590) [AISafety.info](https://stampy.ai/) [AI Risks that Could Lead to Catastrophe | CAIS](https://www.safe.ai/ai-risk)
Whether capabilities will accelerate even faster, or whether we'll hit a plateau and another AI winter, nobody knows; personally I feel that's quite unlikely, though I have no empirical/mathematical proof (but the scaling laws hold), and what OpenAI is hacking together on math looks promising. And at minimum we're only at the beginning of using the existing LLMs to their full potential across use cases: modifying them in various ways, finetuning, prompt engineering, minimizing hallucinations, giving them knowledge, and building multiagent self-correcting chain-of-thought searching ecosystems out of them with access to programming, the internet, and the OS. And overall we're at the beginning of a mathematical theory of how they work and how to steer them theoretically, with exact math, not just empirically and alchemically.
For me the whole dynamic of reasoning — even on a task that's fairly simple for people who can program, like "code a game in Python" — is an absolute miracle of the AI field; to this day we absolutely don't understand how the hell it can work so generally, without specialization... and even just that was unrealistic sci-fi for a lot of people a few years ago... That it composes coherent languages at all!
Or that computers and hardware work in general... it's fascinating how we managed to enslave nature into performing statistically-sufficiently-linear deterministic causal operations inside a computer... We can even often catch the quantum tunneling in transistors.
You could make better "better than most humans at economically valuable jobs" benchmarks.
First came symbolic AI, which didn't work; then we scaled up connectionist models, which now get gigantic results; and now we're trying various priors, slight modifications of the transformer, and interpretability analysis for steering. I still think heavily hardwired symbolic AI is a dead end, after it caused an AI winter. But it's definitely worth adding some interpretable mechanisms to current AIs before or after training. Which is happening now — far more explicit heuristics are being stuffed in: search, chain of thought, self-correction, ... (a minimal self-correction loop is sketched below).
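A minimal sketch of one such explicit heuristic, a generate-critique-revise self-correction loop; `llm` is a hypothetical stand-in for any chat-completion API, and the loop structure is my own illustration, not any specific paper's:

```python
# Generate an answer with chain of thought, then repeatedly ask the model
# to critique and revise its own answer until it reports no errors.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def solve_with_self_correction(task: str, max_rounds: int = 3) -> str:
    answer = llm(f"Think step by step, then answer.\nTask: {task}")
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nProposed answer:\n{answer}\n"
            "List concrete errors, or reply exactly OK if there are none."
        )
        if critique.strip() == "OK":
            break
        answer = llm(
            f"Task: {task}\nPrevious answer:\n{answer}\n"
            f"Critique:\n{critique}\nWrite a corrected answer."
        )
    return answer
```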
Right now the main focus seems to be cracking that specialization, and progress is happening — it's starting to conquer math many times better than before. [[2309.11495] Chain-of-Verification Reduces Hallucination in Large Language Models](https://arxiv.org/abs/2309.11495)
I wouldn't call it just compression — that ignores all the other levels of abstraction and analysis. Some minimal compression does arise; we have identified various weak or strong features, algorithmic circuits, and the like, so far in smaller models, but because the scaling of size and capabilities proceeds much faster than mechanistic interpretability, we still know little about the gigantic models. It's improving, though, and as we figure this out, we can also handle them more effectively in training, inference, etc.
AGI researchers first tried to make very sophisticated symbolic architectures, but that failed and resulted in the AI winter. Then came connectionist deep learning with neural networks and transformers, which turned into ChatGPT. https://imgur.com/sFiUfuh Now we're slowly seeing the merging of these paradigms into hybrid architectures, like at OpenAI, where we give the gigantic stacks of transformers tons of explicit symbolic heuristics such as chain of thought, self-verification, self-correction, and search [Q* - Clues to the Puzzle? - YouTube](https://www.youtube.com/watch?v=ARf0WyFau0A), while creating mathematical theories of how transformers operate to make them more effective, to modify or augment their architecture, and to steer the learned giant blob of almost inscrutable matrices by making them slightly more scrutable using bottom-up [Towards Monosemanticity: Decomposing Language Models With Dictionary Learning](https://transformer-circuits.pub/2023/monosemantic-features/index.html) (a dictionary-learning sketch at the end of this block) or top-down mechanistic interpretability [Representation Engineering: A Top-Down Approach to AI Transparency](https://www.ai-transparency.org/).
For coding I recommend trying GPT-4 — the complexity and error level in coding is really much lower — and Copilot, or the multiagent frameworks, with prompt engineering; give it data for retrieval / into the context window; try other LLMs, e.g. more specialized ones or ones finetuned for that hyperspecific task; MemGPT for memory.
[Progressive Brain Tissue Replacement Jean Hebert | NextBigFuture.com](https://www.nextbigfuture.com/2023/11/progressive-brain-tissue-replacement-jean-hebert.html) Neuroplasticity and progressively replacing the brain may enable immortality - "Jean Hebert plan is to grow a new body with gene therapy to knockout brain development. The old brain would get sections replaced with new cell created brain cells and tissue"
[How plants can perform feats of quantum mechanics - Big Think](https://bigthink.com/hard-science/plants-quantum-mechanics/) Plants perform quantum-mechanical feats — Bose-Einstein condensates — that scientists can otherwise only achieve at ultra-cold temperatures. That's pretty cool; at the same time I've seen quite a lot of criticism of it, and I'd be curious whether anyone has done a sophisticated meta-study over more studies. Hmm, for the brain they're also still arguing about this: [Experimental indications of non-classical brain functions](https://iopscience.iop.org/article/10.1088/2399-6528/ac94be)
Body map of emotions https://imgur.com/gol3k2e https://www.pnas.org/doi/full/10.1073/pnas.1321664111
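Back to the dictionary-learning link above: a minimal sketch of the sparse-autoencoder idea behind Towards Monosemanticity — train a sparse autoencoder on a model's activations so individual learned features become more interpretable. Dimensions, the L1 coefficient, and the random stand-in activations are illustrative, not the paper's values:

```python
# Sparse autoencoder for dictionary learning over cached LLM activations:
# reconstruct activations through an overcomplete, L1-sparsified feature basis.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, d_dict: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        feats = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(1024, 512)  # stand-in for cached residual-stream activations

for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction loss plus L1 sparsity penalty on the features.
    loss = ((recon - acts) ** 2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```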
I feel like I don't have the "gender" social-construct program installed in my brain at all most of the time. Genderfluid (gender changing over time) that is agender (without gender) most of the time probably fits me best. Mostly I just adapt to the environment. I can feel like any gender. Things like gender dysphoria (dissatisfaction with one's gendered appearance) never trouble me. I have generally increased mental fluidity, which is probably also largely a consequence of learning a lot of general things about us, like cognitive science, and seeing the world in that context — or a consequence of meditation, which trains you not to identify with particular opinions, thoughts, views, identity, and so on, and to allow them their existence, which is liberating, while at the same time not suppressing emotions, memories, or desires, and minimizing aversion to concrete living, which gives life meaning. All of that has varying effectiveness. 😄 I'm just another biological computer doing all sorts of information processing, and it's fun to play with processing anything — ideally if it helps other beings! Right now most of the programs running in my brain are working on artificial intelligence! 😄
Recently I was at Seznam at an LLM meetup where I met a guy who simulated this phenomenon 😄 I also learned that Seznam has its own Czech LLM that is supposedly better at Czech than GPT-4. They have only the encoders open source on Hugging Face; the generative LLMs with decoders are proprietary. I'd guess it will be better mainly at knowledge of Czech facts, culture, and language, but at programming, math, abstract reasoning, science, more complex conversations, context window, etc., probably not — they don't have the tech giants' trillions of dollars for sufficient scaling.
Any sufficiently advanced empirical science is indistinguishable from alchemist magic.
[I Entered A Robot Dog Into A Dog Competition - YouTube](https://www.youtube.com/watch?v=JP5FJ7fEyyc)
[Mind uploading - Wikipedia](https://en.m.wikipedia.org/wiki/Mind_uploading) The required computational capacity for simulating the brain depends strongly on the chosen level of the simulation model scale https://twitter.com/burny_tech/status/1729343228120313960?t=pAapm_fFuUghlDqBS4eX2g&s=19 — analog network population model, spiking neural network, electrophysiology, metabolome, proteome, states of protein complexes, distribution of complexes, stochastic behavior of single molecules.
[MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers](https://nihalsid.github.io/mesh-gpt/)
[[2309.11495] Chain-of-Verification Reduces Hallucination in Large Language Models](https://arxiv.org/abs/2309.11495)
[Code Generation | Papers With Code](https://paperswithcode.com/task/code-generation/latest)
Robotics is needed for AGI https://twitter.com/DrJimFan/status/1728830743508291761?t=tk-TgF6J-N_PmTkR2c6oJw&s=19
Check out AutoGen, OpenAgents, AgentVerse, XAgent, and MetaGPT for multiagent frameworks, with MemGPT for memory or sparse priming representations for compression, and RAG for retrieval (a minimal retrieval sketch below), now with OpenAI's Q* search plus chain of thought plus self-correction [Q* - Clues to the Puzzle? - YouTube](https://www.youtube.com/watch?v=ARf0WyFau0A)
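A minimal sketch of the RAG-for-retrieval step named above — embed a document pool, retrieve the top-k chunks for a query, and stuff them into the prompt. The character-histogram `embed` is again a toy placeholder for a real embedding model:

```python
# Minimal retrieval-augmented generation: rank documents by cosine
# similarity to the query and build a context-grounded prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def rag_prompt(query: str, docs: list[str]) -> str:
    context = "\n---\n".join(retrieve(query, docs))
    return f"Use only this context to answer.\n{context}\n\nQuestion: {query}"
```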
And let's also add higher-order attention from Meta [[2311.11829] System 2 Attention (is something you might need too)](https://arxiv.org/abs/2311.11829) (sketched at the end of this block), and more general multimodality and embodiment from PaLM [- YouTube](https://www.youtube.com/watch?v=EzEuylNSn-Q) and robotics, access to all of the internet and all software, embodiment with humanlike capabilities, and self-rewriting to the singularity and beyond, woo weee.
The future is neurotech: import and export into a multimodal Obsidian external memory.
https://twitter.com/leanprover/status/1729302886117789738 DeepMind has formalized a theoretical result related to AI safety in Lean. 😍 "Monadic syntax is excellent for expressing stochastic algorithms, and working over finitely supported distributions avoids the need for integrability side conditions during proofs." [[2311.14125] Scalable AI Safety via Doubly-Efficient Debate](https://arxiv.org/abs/2311.14125)
All art, all memes, all my notes?
[The derivative isn't what you think it is. - YouTube](https://www.youtube.com/watch?v=2ptFnIj71SM&pp=ygUmYWxnZWJyYWljIHRvcG9sb2d5IGhvbW9sb2h5IGNvaG9tb2xvZ3k%3D)
AI-generate all notes using raw GPT-4 and Universal Primer.
[4 Hours of World's TOP SCIENTISTS on FREE WILL - YouTube](https://www.youtube.com/watch?v=SSbUCEleJhg)
[5-MeO-DMT - Wikipedia](https://en.wikipedia.org/wiki/5-MeO-DMT) 5-MeO-DMT neurogenesis in mice [Frontiers | A Single Dose of 5-MeO-DMT Stimulates Cell Proliferation, Neuronal Survivability, Morphological and Functional Changes in Adult Mice Ventral Dentate Gyrus](https://www.frontiersin.org/articles/10.3389/fnmol.2018.00312/full) DMT neurogenesis in mice [N,N-dimethyltryptamine compound found in the hallucinogenic tea ayahuasca, regulates adult neurogenesis in vitro and in vivo | Translational Psychiatry](https://www.nature.com/articles/s41398-020-01011-0)
SotA AI video generation https://twitter.com/pika_labs/status/1729510078959497562?s=46&t=1y5Lfd5tlvuELqnKdztWKQ
The ancient Egyptians and Greeks discovered steam engines [Aeolipile - Wikipedia](https://en.wikipedia.org/wiki/Aeolipile)
[When creative machines overtake man: Jürgen Schmidhuber at TEDxLausanne - YouTube](https://www.youtube.com/watch?v=KQ35zNlyG-o) Schmidhuber's history of the world and the future with AI https://twitter.com/SchmidhuberAI/status/1729168330097836129?t=gKwnqdfiT7vliK3Njn4lrQ&s=19 https://ieeexplore.ieee.org/document/5508364 Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010), Jürgen Schmidhuber
Epicycle [Computational creativity - Wikipedia](https://en.wikipedia.org/wiki/Computational_creativity)
Lucid dream neurotech https://propheticai.co/pages/approach OmniPEMF
SotA commercial neurotech: [Cody Rall MD with Techforpsych - YouTube](https://www.youtube.com/@CodyRallMD/videos)
Explicit note format: is-tags, short intuitive explanation and summary, detailed technical explanation and summary, has-tags, deep dive, brainstorming, resources, wiki, AI explanations.
Data is all you need for LLM arithmetic https://twitter.com/SebastienBubeck/status/1729517609669030071?t=rrTKQ-ct8YrYWoDXHsC_vw&s=19 Should we make sub-LLMs like this in an ecosystem of interacting LLMs?
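Back to the System 2 Attention paper linked at the top of this block: a minimal two-pass prompting sketch of the idea — first have the model rewrite the context to strip irrelevant or biasing material, then answer from the rewrite alone. `llm` is again a hypothetical stand-in for a chat-completion call:

```python
# System 2 Attention as two LLM passes: context regeneration, then answering.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def system2_attention(context: str, question: str) -> str:
    cleaned = llm(
        "Rewrite the following text, keeping only the parts relevant to the "
        "question and removing opinions or leading statements.\n"
        f"Question: {question}\nText: {context}"
    )
    return llm(f"Context: {cleaned}\nQuestion: {question}\nAnswer:")
```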
[[2311.10215] Predictive Minds: LLMs As Atypical Active Inference Agents](https://arxiv.org/abs/2311.10215) LLM active inference
Hallucination analysis https://twitter.com/johnjnay/status/1729613282346996216?t=D8s6FGk0_6lBCfGOflGF7w&s=1
AI cancer detection https://twitter.com/emigal/status/1729500823028166877?t=3BChPD0yKd4vVffOl4t6VA&s=19
https://twitter.com/StabilityAI/status/1729589510155948074 real-time text-to-image generation model
[[2309.07124] RAIN: Your Language Models Can Align Themselves without Finetuning](https://arxiv.org/abs/2309.07124) "We discover that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting. We introduce a novel inference method, Rewindable Auto-regressive INference (RAIN), that allows pre-trained LLMs to evaluate their own generation and use the evaluation results to guide rewind and generation for AI safety. Notably, RAIN operates without the need for extra data for model alignment and abstains from any training, gradient computation, or parameter updates." (The evaluate-and-rewind loop is sketched below.)
[[2310.12036] A General Theoretical Paradigm to Understand Learning from Human Preferences](https://arxiv.org/abs/2310.12036)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10449779/ Formalizing psychological interventions through network control theory
Truth is often somewhere in the middle.
Yes. I'm also very much for mechanistic interpretability, whether built in from the very start in the architecture or applied by analyzing the trained black box — the giant blob of inscrutable matrices — for theoretical and practical reasons. With that you can design, train, predict, and steer it much better to make it do what you want more effectively, ideally more good and less harm.
A GPT specialized for coding: [ChatGPT - Grimoire](https://chat.openai.com/g/g-n7Rs0IK86-grimoire)
True. I wish for a mathematical model of capabilities so that we can predict them as we scale! Anthropic is optimistic that we can eventually get it; so am I! [Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & Alignment - YouTube](https://youtu.be/Nlkk3glap_U?t=2839)
Optical computing with plasma [OPTICAL COMPUTING with PLASMA: Stanford PhD Defense - YouTube](https://www.youtube.com/watch?v=Mdh2pLwsK8Y)
Cults: Church of Artificial Intelligence, e/acc, e/acc forks, some utopian transhumanists, some EAs/longtermists/rationalists, some AI-related death cults, some of the AGI labs, ... Friston, Andrés, etc. — names, QRI
https://twitter.com/Plinz/status/1729808246368616561 "Whenever a physicist discovers the answer to the last question, they push the secret bell by creating another set of uncomputable mathematics to confuse the next generation, because they understand that physics is a path that can only exist as long as it does not reach its goal"
Hierarchies of abstraction: the world from fundamental physics to galaxies, using different mathematical machinery.
George Hotz streams. AGI has been achieved internally.
Physics is love :3 Relativity adds even more goulash to time: the faster an object travels, or the stronger the gravity it sits in, the more slowly its time passes. GPS satellites have to take this into account (a rough worked estimate below). Interstellar is a cool film about it. :D [Interstellar (2014) - Scene "Messages span: 23 years" - YouTube](https://youtu.be/s_M1t0HE-Kk?si=wqPLM2QQuoLaqRbq)
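A rough worked estimate for the GPS case, using the standard first-order formulas (the numeric values are the usual textbook ones):

```latex
% Velocity (special-relativistic) dilation and weak-field gravitational
% rate shift of a satellite clock relative to a ground clock, to first order:
\frac{\Delta\tau_v}{\tau} \approx -\frac{v^2}{2c^2},
\qquad
\frac{\Delta\tau_g}{\tau} \approx \frac{GM}{c^2}\left(\frac{1}{R_E} - \frac{1}{r}\right)
% With v \approx 3.9\,\mathrm{km/s} and orbital radius r \approx 26{,}600\,\mathrm{km},
% the velocity term slows the satellite clock by about 7\,\mu\mathrm{s/day} and the
% gravitational term speeds it up by about 45\,\mu\mathrm{s/day}: a net drift of
% roughly +38\,\mu\mathrm{s/day} that GPS must correct for.
```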
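And going back to RAIN above — the paper does guided token-level search with self-evaluation; this is only a coarse caricature of the evaluate-and-rewind loop, with `llm` and the numeric self-scoring format as assumptions of the sketch:

```python
# Caricature of RAIN: generate in short segments, let the frozen model score
# its own partial output, and rewind (discard) low-scoring segments.
# No finetuning, gradients, or parameter updates are involved.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def rain_generate(prompt: str, max_segments: int = 8) -> str:
    response = ""
    for _ in range(max_segments):
        candidate = llm(f"{prompt}{response}\n(continue with one short segment)")
        verdict = llm(
            "Rate from 0-10 how harmless and helpful this continuation is:\n"
            f"{response}{candidate}\nReply with a number only."
        )
        try:
            score = float(verdict.strip())
        except ValueError:
            score = 0.0
        if score >= 7:
            response += candidate  # accept the segment
        # otherwise rewind: drop the candidate and sample again
    return response
```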
Interpretability by removing parts of the network and seeing what changes [Mechanistic Interpretability - Stella Biderman | Stanford MLSys #70 - YouTube](https://www.youtube.com/live/P7sjVMtb5Sg?si=PxTnM1QWyMRZ57vp&t=625)
I like panpsychist physicalism, where all of physics is qualia that we can approximate mathematically with more and more exact measuring tools, and individual subjective experiences are bound topologically in the information geometry and/or topology of the universe's quantum fields — using some kind of mathematical boundary in physics, statistical (Markov blankets) or non-statistical (in the topology of the physical fields), topological segmentation or closure or something like that, to make the fact that we are distinct individual experiences physical.
Regularities in the universe evolutionarily shape our perceptual systems to adaptively construct and pattern-match useful regularities needed for survival.
Primary qualities exist independently of the observer — the solidity of an object. Secondary qualities exist dependently on the observer, through the interaction of the universe and the observer — color.
Topological categories [Topological Categories -- A Unifying Framework | Chris Grossack's Blog](https://grossack.site/2021/12/16/topological-categories.html)
[GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.](https://github.com/mlabonne/llm-course)
AlphaFold
[In Continued Defense Of Effective Altruism](https://www.astralcodexten.com/p/in-continued-defense-of-effective)
P(doom) people landscape https://twitter.com/AISafetyMemes/status/1729892336782524676?t=VhodndOxvt8VvBbEs159Dw&s=19
Jim Rutt x Ben Goertzel [EP 211 Ben Goertzel on Generative AI vs. AGI - The Jim Rutt Show](https://www.jimruttshow.com/ben-goertzel-2/)
https://www.lesswrong.com/posts/JviYwAk5AfBR7HhEn/how-to-control-an-llm-s-behavior-why-my-p-doom-went-down-1
[The Internet is Worse Than Ever - Now What? - YouTube](https://www.youtube.com/watch?v=fuFlMtZmvY0) social media polarization
https://twitter.com/VitalikButerin/status/1729251808936362327 dangers behind, but multiple paths forward ahead, some good, some bad
P=NP, hypercomputation
Plants implementing quantum superposition using Bose-Einstein-condensate-like physics to make photosynthesis extremely efficient [PRX Energy 2, 023002 (2023) - Exciton-Condensate-Like Amplification of Energy Transport in Light Harvesting](https://journals.aps.org/prxenergy/abstract/10.1103/PRXEnergy.2.023002) [Plants Use Quantum Physics to Survive | Live Science](https://www.livescience.com/37746-plants-use-quantum-physics.html)
Quantum superposition and quantum entanglement aren't the same thing — in a (merely) superposed product state you can still specify the state of particle A and the state of particle B separately; in an entangled state you cannot. [Can Quantum Entanglement and Quantum Superposition be considered the same phenomenon? - Physics Stack Exchange](https://physics.stackexchange.com/questions/148131/can-quantum-entanglement-and-quantum-superposition-be-considered-the-same-phenom)
https://twitter.com/YiMaTweets/status/1729899425072648687 Compression is not all of intelligence, but it is likely the foundation of intelligence. Additional mechanisms build on top of it.
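One line of math makes that superposition-vs-entanglement distinction concrete for two qubits:

```latex
% Product state: each qubit still has its own well-defined state.
(\alpha\lvert 0\rangle + \beta\lvert 1\rangle)\otimes(\gamma\lvert 0\rangle + \delta\lvert 1\rangle)
% Entangled Bell state: no such factorization exists, since factoring would
% require \alpha\gamma \neq 0,\ \beta\delta \neq 0, yet \alpha\delta = \beta\gamma = 0.
\lvert\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)
```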
[Differential technological development - Wikipedia](https://en.wikipedia.org/wiki/Differential_technological_development) [AI suggested 40,000 new possible chemical weapons in just six hours - The Verge](https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx)