FAAH gene CRISPR nanobots as neurotech for wellbeing? Or as dissolvers of anandamide? [Functional Variation in the FAAH Gene Is Directly Associated with Subjective Well-Being and Indirectly Associated with Problematic Alcohol Use - PubMed](https://pubmed.ncbi.nlm.nih.gov/37761966/) One part of what influences depression is the FAAH gene, which is easy to manipulate, so maybe the future of antidepression neurotech will be CRISPR nanobots that work on the FAAH gene and on what it affects.

Why I Am Spending Millions To Be 18 Again: [Why I Am Spending Millions To Be 18 Again - YouTube](https://www.youtube.com/watch?v=NdZHo3xuZvw) You hack biological tissue so that it doesn't degrade (a large part of aging is degradation of information: [The Information Theory of Aging | Nature Aging](https://www.nature.com/articles/s43587-023-00527-6)) and so that it minimizes suffering (e.g. via FAAH gene CRISPR nanobots: [Functional Variation in the FAAH Gene Is Directly Associated with Subjective Well-Being and Indirectly Associated with Problematic Alcohol Use - PubMed](https://pubmed.ncbi.nlm.nih.gov/37761966/)), or you replace it with a different substrate that still supports experience, as a cyborg, or you replace yourself entirely but keep whatever constitutes the existence of your subjective experience (we still have so little idea: [Theories of consciousness | Nature Reviews Neuroscience](https://www.nature.com/articles/s41583-022-00587-4), [Consciousness - Wikipedia](https://en.wikipedia.org/wiki/Consciousness#Models), [Models of consciousness - Wikipedia](https://en.wikipedia.org/wiki/Models_of_consciousness)). [Warhammer 40,000: Mechanicus | Teaser Trailer - YouTube](https://www.youtube.com/watch?v=9gIMZ0WyY88) [From the moment I understood the weakness of my flesh, it disgusted me - YouTube](https://www.youtube.com/watch?v=3n7eNFj_9Vk)

This guy makes quick summary videos on SOTA papers/articles/technologies and actually reads the original papers and covers their content; he's probably my favorite (when Gemini came out, he critiqued it properly and thoroughly): [AI Explained - YouTube](https://www.youtube.com/@aiexplained-official/videos)

This one makes long videos that go in depth into trending papers: [Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Paper Explained) - YouTube](https://www.youtube.com/watch?v=9dSkvxS2EB0)
These do monthly trending-papers videos: [Zeta Alpha Trends in AI - December 2023 - Gemini, NeurIPS & Trending AI Papers - YouTube](https://www.youtube.com/watch?v=6iLBWEP1Ols)

This one is more surface level: [Matt Wolfe - YouTube](https://www.youtube.com/@mreflow)

This one posts nice series of links to SOTA technologies and papers in AI: [Alexander Kruel](https://www.facebook.com/xixidu)

[r/singularity](https://www.reddit.com/r/singularity/) likes hype, but cool posts often turn up there; [r/MachineLearning](https://www.reddit.com/r/MachineLearning/) is good for papers.

Michael Levin: The New Era of Cognitive Biorobotics: [Michael Levin: The New Era of Cognitive Biorobotics | Robinson's Podcast #187 - YouTube](https://www.youtube.com/watch?v=lMNJKOgH60E)

Rich Sutton on AI: [I Talked with Rich Sutton - YouTube](https://www.youtube.com/watch?v=4feeUJnrrYg)

Terrence Deacon Reveals the Hidden Connection: Consciousness & Entropy, by Curt, who talks about everything: [Terrence Deacon Reveals the Hidden Connection: Consciousness & Entropy - YouTube](https://www.youtube.com/watch?v=PqZp7MlRC5g)

Constructor theory, studying the constraints that give rise to different laws of physics: [Paradigm Shift, Ghost Particles, Constructor Theory | Chiara Marletto - YouTube](https://youtu.be/40CB12cj_aM?si=PNnv8miJMZMnnN0J)
Limits to visual representational correspondence between convolutional neural networks and the human brain: [Limits to visual representational correspondence between convolutional neural networks and the human brain | Nature Communications](https://www.nature.com/articles/s41467-021-22244-7)

AI is making classical algorithms better, which moves the threshold for how many qubits quantum computing needs before it's worth it: [It looks like AI will kill Quantum Computing - YouTube](https://www.youtube.com/watch?v=Q8A4wEohqT0)

[Manifold hypothesis - Wikipedia](https://en.wikipedia.org/wiki/Manifold_hypothesis): The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space. As a consequence of the manifold hypothesis, many data sets that appear to initially require many variables to describe can actually be described by a comparatively small number of variables, likened to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.

[[2310.16028] What Algorithms can Transformers Learn? A Study in Length Generalization](https://arxiv.org/abs/2310.16028): transformers can't represent Turing machines, but they can represent a smaller class of computations, described by RASP programs. The paper finds that indeed, if the data is generated by a RASP-L program, the transformer will learn exactly the right function.

"Possibility of replacing the human brain by machine which will be superior to it in any or all respects is not excluded by any natural law that we know; it's therefore possible that the human race may be extinguished by machines." [Yoshua Bengio on Dissecting The Extinction Threat of AI - YouTube](https://youtu.be/0RknkWgd6Ck?si=eIg3fOCHiJc1p-oo)

Data 📊: [public_mobile_aloha_datasets – Google Drive](http://tinyurl.com/mobile-aloha-data)

DocLLM: A layout-aware generative language model for multimodal document understanding: [[2401.00908] DocLLM: A layout-aware generative language model for multimodal document understanding](https://arxiv.org/abs/2401.00908)
The differences between the brain and artificial neural networks are probably best mapped for visual processing: [Limits to visual representational correspondence between convolutional neural networks and the human brain | Nature Communications](https://www.nature.com/articles/s41467-021-22244-7); for reasoning we're only starting to catch up: [Predictive coding - Wikipedia](https://en.m.wikipedia.org/wiki/Predictive_coding), [The empirical status of predictive coding and active inference - PubMed](https://pubmed.ncbi.nlm.nih.gov/38030100/)

An Animated Research Talk on: Neural-Network Quantum Field States: [An Animated Research Talk on: Neural-Network Quantum Field States - YouTube](https://www.youtube.com/watch?v=rrvZDZMii-0)

Universal and Transferable Adversarial Attacks on Aligned Language Models: [[2307.15043] Universal and Transferable Adversarial Attacks on Aligned Language Models](https://arxiv.org/abs/2307.15043) Using gradient search to backpropagate ideal tokens to make models comply: "instead of relying on manual engineering, our approach automatically produces these adversarial suffixes by a combination of greedy and gradient-based search techniques"

[GitHub - nrimsky/LM-exp: LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces](https://github.com/nrimsky/LM-exp/tree/main)
AnyText: Multilingual Visual Text Generation And Editing: [GitHub - tyxsspa/AnyText](https://github.com/tyxsspa/AnyText#readme)

Microsoft announces Improving Text Embeddings with Large Language Models: [[2401.00368] Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)

LLM Augmented LLMs: Expanding Capabilities through Composition: "…when PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model for code generation and explanation tasks -- on-par with fully fine-tuned counterparts." [[2401.02412] LLM Augmented LLMs: Expanding Capabilities through Composition](https://arxiv.org/abs/2401.02412)

A new paper just identified 26 principles to improve the quality of LLM responses by 50%: [[2312.16171v1] Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4](https://arxiv.org/abs/2312.16171v1)

AI's big test: Making sense of $4 trillion in medical expenses: [AI's big test: Making sense of $4 trillion in medical expenses - POLITICO](https://www.politico.com/news/2023/12/31/ai-medical-expenses-00132557)

Scott Alexander's best essays? [Favorite Links | near.blog](https://near.blog/my-favorite-links/)

The average IQ of undergraduate college students has been falling since the 1940s and has now become basically the same as the population average: [Frontiers | Meta-analysis: On average, undergraduate students' intelligence is merely average](https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1309142/abstract)

JPMorgan announces DocLLM: A layout-aware generative language model for multimodal document understanding: [[2401.00908] DocLLM: A layout-aware generative language model for multimodal document understanding](https://arxiv.org/abs/2401.00908)

Mechanism decoded: how synapses are formed: [Mechanism decoded: how synapses are formed](https://leibniz-fmp.de/newsroom/news/detail/mechanism-decoded-how-synapses-are-formed-1)

Harvard CS50 (2023) – Full Computer Science University Course: [Harvard CS50 (2023) – Full Computer Science University Course - YouTube](https://www.youtube.com/watch?v=LfaMVlDaQ24)

Retrocausality: [The Delayed Choice Quantum Eraser, Debunked - YouTube](https://youtu.be/RQv5CVELG3U?si=JFKZ2kM3Ix2EEeoe) [What if the Effect Comes Before the Cause? - YouTube](https://youtu.be/iixrNh7Xp5M?si=rWY3oJy2DgD5q673)

How brains prevent overfitting: [r/MachineLearning](https://www.reddit.com/r/MachineLearning/s/HF60rx9JQ9)

[Perturbation theory - Wikipedia](https://en.wikipedia.org/wiki/Perturbation_theory)

AI Safety Memes Wiki: [Stampy](https://stampy.ai/?state=MEME_)

[The Nature of Nothingness: Understanding the Vacuum Catastrophe | by Anumeena Sorna | Nakshatra, NIT Trichy | Medium](https://medium.com/nakshatra/the-nature-of-nothingness-understanding-the-vacuum-catastrophe-c04033e752f4)

[How The Nature of Information Could Resolve One of The Great Paradoxes Of Cosmology | The Physics arXiv Blog | Medium](https://medium.com/the-physics-arxiv-blog/how-the-nature-of-information-could-resolve-one-of-the-great-paradoxes-of-cosmology-8c16fc714756)

Prompt engineering summary: [[2312.16171v1] Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4](https://arxiv.org/abs/2312.16171v1)

Psychedelic music: [ૐ Psychedelia - 😇](https://www.facebook.com/AumPsychedelia/posts/pfbid02RsBh34cZPpfPK3H2Fo6Jb7asgEWn1p1RZFBaibgJYwPFqEZDvzbYoCRsu9SxAk2Vl)

LLM Augmented LLMs: Expanding Capabilities through Composition: [[2401.02412] LLM Augmented LLMs: Expanding Capabilities through Composition](https://arxiv.org/abs/2401.02412)

Improving Text Embeddings with Large Language Models (Microsoft, December 2023): [[2401.00368] Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)

[Turing Machines Are Recurrent Neural Networks (1996) | Hacker News](https://news.ycombinator.com/item?id=33869533): "It is possible to construct an (infinite) recurrent neural network that emulates a Turing Machine. But the fact that a Turing Machine can be built out of perceptrons is neither surprising nor interesting. It's pretty obvious that you can build a NAND gate out of perceptrons, and so of course you can build a Turing Machine out of them. In fact, it's probably the case that you can build a NAND gate (and hence a TM) out of any non-linear transfer function. I'd be surprised if this is not a known result one way or the other." "You can approximate any nonlinear multivariable function arbitrarily well with a multi-layer perceptron with any non-polynomial nonlinear function, applied after the linear weights and bias."

Is mechanistic interpretability a path to alignment, e.g. bottom-up or top-down localization and control of deception and other Machiavellian patterns?
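The NAND-gate claim in the Hacker News quote above is easy to check concretely. A minimal sketch (the weights and bias are one illustrative choice, not the only one):

```python
# A single perceptron with a step activation computes NAND:
# output = 1 iff (-2 * a) + (-2 * b) + 3 > 0.
def perceptron_nand(a: int, b: int) -> int:
    w1, w2, bias = -2, -2, 3
    return 1 if w1 * a + w2 * b + bias > 0 else 0

# NAND is functionally complete, so any Boolean circuit (and hence the
# finite control of a Turing machine) can be composed from such units.
truth_table = {(a, b): perceptron_nand(a, b) for a in (0, 1) for b in (0, 1)}
print(truth_table)  # → {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```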
[GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources.](https://github.com/JShollaj/awesome-llm-interpretability)

[[2310.06824] The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets](https://arxiv.org/abs/2310.06824)

Here's a map: [AI Existential Safety Map](https://aisafety.world/)

Advanced RAG: [Introducing Query Pipelines. Today we introduce Query Pipelines, a… | by Jerry Liu | Jan, 2024 | LlamaIndex Blog](https://blog.llamaindex.ai/introducing-query-pipelines-025dc2bb0537) [Advanced Retrieval for AI with Chroma - DeepLearning.AI](https://www.deeplearning.ai/short-courses/advanced-retrieval-for-ai/)

Human brain FLOPs estimate: [r/MachineLearning](https://www.reddit.com/r/MachineLearning/s/Cjlb8ZQYXY)

I have been following various mechanistic interpretability papers and lectures/podcasts by Neel Nanda and many others from the AI safety community. Recently I've become more active in various other online mechinterp and AI communities.
My background is mostly computer science and machine learning, but I've gathered some knowledge from other fields as well, like math, philosophy, physics, and neuroscience. I just started helping the Mechanistic Interpretability Group on Discord map out existing mechanistic interpretability papers, expanding this list: [GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources.](https://github.com/JShollaj/awesome-llm-interpretability). In the past I also helped with a paper that mapped out where active inference is used; active inference is an integrative perspective on brain, cognition, and behavior used across multiple disciplines, mathematically modelling perception, planning, and action in terms of probabilistic inference. I'm also currently gaining industry experience in machine learning, mainly with LLMs.
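Related to the "human brain FLOPs estimate" link earlier, a toy Fermi calculation. Every number below is a rough order-of-magnitude assumption I'm plugging in for illustration, not a measurement:

```python
# Toy Fermi estimate of brain compute throughput, assuming:
#   ~8.6e10 neurons, ~1e3-1e4 synapses per neuron,
#   ~0.1-1 Hz average firing rate, ~1 FLOP per synaptic event.
NEURONS = 8.6e10
SYNAPSES_PER_NEURON = (1e3, 1e4)  # (low, high) assumption
FIRING_RATE_HZ = (0.1, 1.0)       # (low, high) assumption
FLOPS_PER_SYNAPTIC_EVENT = 1.0

low = NEURONS * SYNAPSES_PER_NEURON[0] * FIRING_RATE_HZ[0] * FLOPS_PER_SYNAPTIC_EVENT
high = NEURONS * SYNAPSES_PER_NEURON[1] * FIRING_RATE_HZ[1] * FLOPS_PER_SYNAPTIC_EVENT
print(f"{low:.1e} to {high:.1e} FLOP/s")  # spans roughly 1e13 to 1e15 FLOP/s
```

Different choices for FLOPs per synaptic event or for learning-related computation shift this by orders of magnitude, which is why published estimates vary so widely.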