But whether it actually has subjective experience, nobody really knows, mainly because we don't even properly understand how experience works physically in the brain, where we still have only a little data. People speculate in all directions. Some think it's quite possible, like Hinton (the guy who invented a lot of the math behind AI) [Geoffrey Hinton | Will digital intelligence replace biological intelligence? - YouTube](https://youtu.be/iHCeAotHZa4?si=vDAoUJ-ULIa6Z2j4) He discusses it there in 2022; it would be worth updating for 2024, because AI has improved insanely in the meantime and models of the brain have advanced [[2303.07103] Could a Large Language Model be Conscious?](https://arxiv.org/abs/2303.07103) There are plenty of differences, and at the same time plenty of similarities at the mathematical level. Some form of the famous internal world simulation and theory of mind is there, but it differs from humans in various aspects. This year most of the companies owning the biggest AIs are starting to focus on turning them into more autonomous entities with planning, goals, memory,... e.g. https://aibusiness.com/nlp/openai-is-developing-ai-agents which is much closer to a human (in my view it would still need more explicit planning, a world model, a self model, self-reference, more senses, continuity, real-time learning) Hyperparameter statespace neural networks Wow, that complexity... 
https://fxtwitter.com/jaschasd/status/1756930242965606582?t=S6R6iJfPbb_l1o8_FWm1dA&s=19 https://fxtwitter.com/jaschasd/status/1756930244337098890?t=-KwCiOaFz8OJ2rZzl_6fBw&s=19 https://www.marktechpost.com/2024/02/10/this-ai-paper-from-stanford-and-google-deepmind-unveils-how-efficient-exploration-boosts-human-feedback-efficacy-in-enhancing-large-language-models/ [[2402.00396] Efficient Exploration for LLMs](https://arxiv.org/abs/2402.00396) [[2106.10165] The Principles of Deep Learning Theory](https://arxiv.org/abs/2106.10165) [The Principles of Deep Learning Theory](https://deeplearningtheory.com/) deep learning statistical mechanics book https://twitter.com/burny_tech/status/1757074849967595757 [Electron transport chain - YouTube](https://youtu.be/LQmTKxI4Wn4?si=jQaYb-eGLZboGH10) https://twitter.com/fly51fly/status/1757043161602658722?t=zlpvK49fdjEA0tzeMqeFhg&s=19 [[2402.06120] Exploring Group and Symmetry Principles in Large Language Models](https://arxiv.org/abs/2402.06120) [[2402.06196] Large Language Models: A Survey](https://arxiv.org/abs/2402.06196) https://twitter.com/_reachsumit/status/1756877429690495334 Which camp am I in? 
All of them The whole universe will be learned Optimally perfect thermodynamic algorithmic information theoretic intelligent system with the scale of a galaxy understanding all mathematics governing reality https://twitter.com/burny_tech/status/1757072814912254015?t=v-Ourp0ozEykqgnQvkuJUQ&s=19 [OSF](https://osf.io/preprints/psyarxiv/9byzu) https://twitter.com/burny_tech/status/1757080284279840885 AI NTK theory AI spline theory of NNs from Randal [Collective intelligence - Wikipedia](https://en.wikipedia.org/wiki/Collective_intelligence) ising machine MCMC diffusion quantum electrodynamics statistical mechanics of deep learning renormalization group fluctuation theorem Neural Tangent Kernel [Neural tangent kernel - Wikipedia](https://en.wikipedia.org/wiki/Neural_tangent_kernel) [Neural Networks as Quantum Field Theories (NNGP, NKT, QFT, NNFT) - YouTube](https://www.youtube.com/watch?v=ZSmORp3Bm2c) [[2307.03223] Neural Network Field Theories: Non-Gaussianity, Actions, and Locality](https://arxiv.org/abs/2307.03223) Neural Network Gaussian Processes (NNGP), Neural Tangent Kernel (NTK) theory, Quantum Field Theory (QFT), Neural Network Field Theory (NNFT) [Reverse engineering the NTK](https://james-simon.github.io/blog/reverse-engineering/) [[2106.03186] Reverse Engineering the Neural Tangent Kernel](https://arxiv.org/abs/2106.03186) [Curt Jaimungal on Life, the Universe, & Theories of Everything - YouTube](https://www.youtube.com/watch?v=3_FSkVTLu1Y) https://twitter.com/jesse_hoogland/status/1755679791943147738 [[2402.02364] The Developmental Landscape of In-Context Learning](https://arxiv.org/abs/2402.02364) “Study hard what interests you the most in the most undisciplined, irreverent and original manner possible.” ― Richard Feynman. 
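The Neural Tangent Kernel mentioned above is, empirically, just the Gram matrix of the network's per-parameter output gradients, K(x_i, x_j) = ⟨∂f(x_i)/∂θ, ∂f(x_j)/∂θ⟩. A minimal numpy sketch with a toy one-hidden-layer net and finite-difference gradients (all sizes and inputs here are made up, not from any of the linked papers):

```python
import numpy as np

# Empirical NTK of a tiny 1-hidden-layer MLP, gradients by finite differences.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 1))                 # input -> hidden
W2 = rng.normal(size=(1, 8)) / np.sqrt(8)    # hidden -> scalar output

def f(params, x):
    w1, w2 = params
    return (w2 @ np.tanh(w1 @ np.array([[x]]))).item()

def flat(params):
    return np.concatenate([p.ravel() for p in params])

def unflat(v):
    return [v[:8].reshape(8, 1), v[8:].reshape(1, 8)]

def grad(x, eps=1e-6):
    # finite-difference gradient of f(x) with respect to all 16 parameters
    v0 = flat([W1, W2])
    f0 = f(unflat(v0), x)
    g = np.zeros_like(v0)
    for i in range(len(v0)):
        v = v0.copy()
        v[i] += eps
        g[i] = (f(unflat(v), x) - f0) / eps
    return g

xs = [-1.0, 0.3, 0.5, 2.0]
J = np.stack([grad(x) for x in xs])   # Jacobian: one gradient row per input
K = J @ J.T                           # empirical NTK Gram matrix

print(np.allclose(K, K.T))                            # symmetric
print(bool(np.all(np.linalg.eigvalsh(K) >= -1e-8)))   # positive semidefinite
```

By construction K = J Jᵀ, so symmetry and positive semidefiniteness hold up to floating-point noise; NTK theory studies how this kernel behaves (and freezes) as width grows.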
[CURIOSITY - Featuring Richard Feynman - YouTube](https://www.youtube.com/watch?v=UjEngEpiJKo&pp=ygUSY3VyaW91c2l0eSBmZXlubWFu) Manhattan Project · Acoustic wave equation · Bethe–Feynman formula · Feynman checkerboard · Feynman diagrams · Feynman gauge · Feynman–Kac formula · Feynman parametrization · Feynman point · Feynman propagator · Feynman slash notation · Feynman sprinkler · Hellmann–Feynman theorem · Heaviside–Feynman formula · V−A theory · Brownian ratchet · Feynman–Stueckelberg interpretation · Nanotechnology · One-electron universe · Parton · Path integral formulation · Playing the bongos · Quantum cellular automata · Quantum computing · Quantum dissipation · Quantum electrodynamics · Quantum hydrodynamics · Quantum logic gates · Quantum turbulence · Resummation · Rogers Commission · Shaft passer · Sticky bead argument · Synthetic molecular motor · The Feynman Lectures on Physics · Universal quantum simulator · Vortex ring model · Wheeler–Feynman absorber theory · Variational perturbation theory [Evan Hubinger (Anthropic)—Deception, Sleeper Agents, Responsible Scaling - YouTube](https://www.youtube.com/watch?v=S7o2Rb37dV8) [How Physicists Created a Holographic Wormhole in a Quantum Computer - YouTube](https://www.youtube.com/watch?v=uOJCS1W1uzg) [Wormhole Experiment Called Into Question | Quanta Magazine](https://www.quantamagazine.org/wormhole-experiment-called-into-question-20230323/) [The Biggest Ideas in the Universe | 9. Fields - YouTube](https://www.youtube.com/watch?v=Dy1LNk_B6IE) [Oscillatory neural network - Wikipedia](https://en.wikipedia.org/wiki/Oscillatory_neural_network) [UTS HAI Research - BrainGPT - YouTube](https://www.youtube.com/watch?v=crJst7Yfzj4) Mind reading cap with Vision Pro and realtime AI generated worlds based on thoughts when? With extra hyperdimensional worlds when on psychedelics [Fluid dynamics feels natural once you start with quantum mechanics - YouTube](https://www.youtube.com/watch?v=MXs_vkc8hpY&t=1520s) Me and nerd? 
What prior generated that hypothesis about the internal representation of my markov blanket in your generative world model? Introduction to deep learning theory [Introduction to Deep Learning Theory - YouTube](https://youtu.be/pad023JIXVA?si=U8OaydFqo8cVWdJo) Principles of deep learning theory [The Principles of Deep Learning Theory - Dan Roberts - YouTube](https://youtu.be/YzR2gZrsdJc?si=xX1DxrsDOPQ8hNY5) Deep learning theory class happening right now [.:: (Deep) Learning Theory ::.](https://people.dm.unipi.it/agazzi/nntheory.html) [[2402.06634] SocraSynth: Multi-LLM Reasoning with Conditional Statistics](https://arxiv.org/abs/2402.06634) https://twitter.com/fly51fly/status/1757395788186235126?t=Q-EXfjKqab-qiYbmTCOoBw&s=19 [Time and Quantum Mechanics SOLVED? | Lee Smolin - YouTube](https://www.youtube.com/watch?v=uOKOodQXjhc) AGI is the internet in hyperspace [Paper page - Scaling Laws for Fine-Grained Mixture of Experts](https://huggingface.co/papers/2402.07871) https://www.reddit.com/r/blueprint_/s/FXNPQbVLH1 Why have an opinion when you can have superposition of opinions instead [Sensecape: Enabling Multilevel Exploration and Sensemaking with Large Language Models | Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology](https://dl.acm.org/doi/10.1145/3586183.3606756) [[2311.18644] Exploring the hierarchical structure of human plans via program generation](https://arxiv.org/abs/2311.18644) ADHD megathread https://twitter.com/QiaochuYuan/status/1757634307139707020?t=T2wC0sj_QdItxW2WVNVUUg&s=19 Just give me 7 trillion dollars bro, we can use AI to solve nuclear fusion and build dyson spheres in a few years bro, energy crisis, climate crisis, and all other crises will be solved bro, trust me bro [How mood tunes prediction: a neurophenomenological account of mood and its disturbance in major depression - PubMed](https://pubmed.ncbi.nlm.nih.gov/32818063/) [When do neural representations give rise to mental representations? 
| The Transmitter: Neuroscience News and Perspectives](https://www.thetransmitter.org/defining-representations/when-do-neural-representations-give-rise-to-mental-representations/) [The math of how atomic nuclei stay together is surprisingly beautiful | Full movie #SoME2 - YouTube](https://www.youtube.com/watch?v=FL3ImtGcHqQ) [Large World Models](https://largeworldmodel.github.io/) [[2402.08268] World Model on Million-Length Video And Language With Blockwise RingAttention](https://arxiv.org/abs/2402.08268) [#034 Eray Özkural- AGI, Simulations & Safety - YouTube](https://www.youtube.com/watch?v=pZsHZDA9TJU) https://twitter.com/jaschasd/status/1756930242965606582 https://twitter.com/jaschasd/status/1756930244337098890 The boundary between trainable and untrainable neural network hyperparameter configurations is *fractal*! And beautiful! Here is a grid search over a different pair of hyperparameters -- this time learning rate and the mean of the parameter initialization distribution. This result makes me feel that we have a long way to go toward predictively understanding NN hyperparameter alchemy, if that's even possible, given the fractal, chaotic nature of the hyperparameter landscape. principles of deep learning https://fxtwitter.com/burny_tech/status/1757074849967595757 [[2106.10165] The Principles of Deep Learning Theory](https://arxiv.org/abs/2106.10165) [The Principles of Deep Learning Theory](https://deeplearningtheory.com/) [Introduction to Deep Learning Theory - YouTube](https://www.youtube.com/watch?v=pad023JIXVA) [Princeton ORFE Deep Learning Theory Summer School 2021 - YouTube](https://www.youtube.com/playlist?list=PL2mB9GGlueJj_FNjJ8RWgz4Nc_hCSXfMU) [Deep Networks Are Kernel Machines (Paper Explained) - YouTube](https://www.youtube.com/watch?v=ahRPdiCop3E&list=WL&index=1&pp=gAQBiAQB) [A New Physics-Inspired Theory of Deep Learning | Optimal initialization of Neural Nets - 
YouTube](https://www.youtube.com/watch?v=m2bXL5Z5CBM&t=2s&pp=ygUgQm9yaXMgSGFuaW4gRGVlcCBMZWFybmluZyBUaGVvcnk%3D) [Boris Hanin - Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models - YouTube](https://www.youtube.com/watch?v=6gDibuhHt3k&pp=ygUgQm9yaXMgSGFuaW4gRGVlcCBMZWFybmluZyBUaGVvcnk%3D) [The Principles of Deep Learning Theory - Dan Roberts - YouTube](https://www.youtube.com/watch?v=YzR2gZrsdJc&t=3217s&pp=ygUjRGFuaWVsIFJvYmVydHMgRGVlcCBMZWFybmluZyBUaGVvcnk%3D) [Effective Theory of Deep Neural Networks - YouTube](https://www.youtube.com/watch?v=XAuz08GuY9A&pp=ygUjRGFuaWVsIFJvYmVydHMgRGVlcCBMZWFybmluZyBUaGVvcnk%3D) [IAIFI Summer Workshop - Sho Yaida - YouTube](https://www.youtube.com/watch?v=BhpMsDbOI2c&pp=ygUJU2hvIFlhaWRh) [MSML2020 Paper Presentation - Sho Yaida - YouTube](https://www.youtube.com/watch?v=EWCPUwHb4oM&pp=ygUJU2hvIFlhaWRh) [CAII 11/8 Seminar Featuring MIT Theoretical Physics Researcher Dan Roberts - YouTube](https://www.youtube.com/watch?v=3i5k3V6zqlM&pp=ygUJU2hvIFlhaWRh) [An Animated Research Talk on: Neural-Network Quantum Field States - YouTube](https://www.youtube.com/watch?v=rrvZDZMii-0) a mathematical perspective for transformers [Cracking the Code: The Mathematical Secrets of Transformers - YouTube](https://www.youtube.com/watch?v=0WHZESuwVC4&pp=ygUobWF0aGVtYXRpY2FsIHBlcnNwZWN0aXZlIG9uIHRyYW5zZm9ybWVycw%3D%3D) [A Walkthrough of A Mathematical Framework for Transformer Circuits - YouTube](https://www.youtube.com/watch?v=KV5gbOmHbjU&t=3653s&pp=ygUobWF0aGVtYXRpY2FsIHBlcnNwZWN0aXZlIG9uIHRyYW5zZm9ybWVycw%3D%3D) [Maths with transformers - YouTube](https://www.youtube.com/watch?v=81o-Uiop5CA&pp=ygUobWF0aGVtYXRpY2FsIHBlcnNwZWN0aXZlIG9uIHRyYW5zZm9ybWVycw%3D%3D) [First Nuclear Plasma Control with Digital Twin - YouTube](https://www.youtube.com/watch?v=4VD_DLPQJBU) I wonder what makes this song so good at increasing confidence by probably releasing dopamine. Can it be explained using physics of neuronal dynamics? 
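The trainable/untrainable boundary discussed above can be seen in miniature on a quadratic loss, where gradient descent is stable only for learning rates below 2/λ. A toy sketch of such a one-dimensional "grid search" (all constants made up; on a real network this boundary becomes fractal rather than a single sharp threshold):

```python
import numpy as np

# Gradient descent on L(w) = 0.5 * lam * w**2 converges iff lr < 2/lam.
# Sweeping the learning rate maps out the trainable/untrainable boundary.
def trains(lr, lam=1.0, steps=200, w0=1.0):
    w = w0
    for _ in range(steps):
        w -= lr * lam * w          # gradient step on 0.5 * lam * w**2
        if abs(w) > 1e6:           # diverged: untrainable
            return False
    return bool(abs(w) < abs(w0))  # trainable: made progress to the minimum

lrs = np.linspace(0.1, 3.0, 30)
verdicts = [trains(lr) for lr in lrs]
boundary = lrs[verdicts.index(False)]  # first untrainable learning rate
print(boundary)  # close to the 2/lam = 2.0 stability threshold
```

Small learning rates train, large ones blow up, and the crossover sits at the analytically predicted 2/λ; the fractal result above says that for real networks this crossover surface is wildly irregular rather than smooth.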
[KORDHELL - MURDER IN MY MIND (MUSIC VIDEO) - YouTube](https://www.youtube.com/watch?v=Rj4RfirEoQQ) [[2402.08135] A scalable, synergy-first backbone decomposition of higher-order structures in complex systems](https://arxiv.org/abs/2402.08135) [A systematic review of neural, cognitive, and clinical studies of anger and aggression | Current Psychology](https://link.springer.com/article/10.1007/s12144-022-03143-6) [The Gradient Podcast - Yoshua Bengio: The Past, Present, and Future of Deep Learning - YouTube](https://www.youtube.com/watch?v=xv-gsQdn9eY) [100x less compute with GPT-level LLM performance: How a little known open source project could help solve the GPU power conundrum — RWKV looks promising but challenges remain | TechRadar](https://www.techradar.com/pro/100x-less-compute-with-gpt-level-llm-performance-how-a-little-known-open-source-project-could-help-solve-the-gpu-power-conundrum-rwkv-looks-promising-but-challenges-remain) [[2402.09371] Transformers Can Achieve Length Generalization But Not Robustly](https://arxiv.org/abs/2402.09371) Transformers Can Achieve Length Generalization But Not Robustly Length generalization remains fragile, significantly influenced by factors like random weight initialization and training data order https://twitter.com/emollick/status/1757937829340967240?t=-OTjQhuz9wV9M9nGUPk4ZQ&s=19 [LLM Agents can Autonomously Hack Websites](https://arxiv.org/html/2402.06664v1) we show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand. 
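The RWKV claim above (GPT-level quality at far lower compute) rests on replacing quadratic attention with an RNN-style recurrence. A minimal sketch of that trick for plain causal linear attention, showing the O(T²) and O(T) forms agree (this is the generic kernelized-attention identity, not RWKV's actual formulas; the feature map and sizes are made up):

```python
import numpy as np

def phi(x):
    return 1.0 + np.maximum(x, 0.0)      # simple positive feature map

def linear_attn_parallel(Q, K, V):
    # O(T^2) reference: attend over all past positions at every step
    Qf, Kf = phi(Q), phi(K)
    out = np.zeros_like(V)
    for t in range(Q.shape[0]):
        w = Qf[t] @ Kf[:t + 1].T         # unnormalized weights over the past
        out[t] = (w @ V[:t + 1]) / w.sum()
    return out

def linear_attn_recurrent(Q, K, V):
    # O(T): carry constant-size running sums instead of the whole past
    Qf, Kf = phi(Q), phi(K)
    S = np.zeros((Kf.shape[1], V.shape[1]))  # running sum of k ⊗ v
    z = np.zeros(Kf.shape[1])                # running sum of k
    out = np.zeros_like(V)
    for t in range(Q.shape[0]):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
print(np.allclose(linear_attn_parallel(Q, K, V),
                  linear_attn_recurrent(Q, K, V)))  # True: same outputs
```

Because the per-step state (S, z) has fixed size, inference cost no longer grows with context length, which is the core of the efficiency argument for RWKV-style models.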
[AIM Seminars: Gitta Kutyniok - The Modern Mathematics of Deep Learning - YouTube](https://www.youtube.com/live/bCQpfpYuYW8?si=9ATz3QoeaXdfeG_q) https://twitter.com/sergiynest/status/1757408348469961088 Using physics-informed reinforcement learning, Quilter learns to design circuit boards by grading itself against what really matters: manufacturability, electromagnetics, thermodynamics, etc. [Technology](https://www.quilter.ai/technology) All axioms of models are neither right nor wrong [Altermagnetic lifting of Kramers spin degeneracy | Nature](https://www.nature.com/articles/s41586-023-06907-7) https://twitter.com/cremieuxrecueil/status/1758029622527103203 https://www.sciencedirect.com/science/article/pii/S0160289622000320 [The role of height in the sex difference in intelligence - PubMed](https://pubmed.ncbi.nlm.nih.gov/20066931/) [Are Men Smarter than Women? - Richard Hanania's Newsletter](https://www.richardhanania.com/p/are-men-smarter-than-women) sex differences in intelligence. TL;DR: No differences in intelligence. Yes differences in specific abilities. gemini 1.5 https://twitter.com/sundarpichai/status/1758145921131630989 millions-of-tokens context window, programmers https://twitter.com/8teAPi/status/1758151388050375067 https://twitter.com/thom_wolf/status/1758140066285658351?t=pNiPoM95FfYEFMSA5jsCMA "playing with a basic, fully-local and open-source speech-to-text-to-speech pipeline on my mac less than 120 lines of code to chain local whisper + Zephyr (in LM studio) + an Openvoice TTS https://gist.github.com/thomwolf/e9c3f978d0f82600a7c24cb0bf80d606… latency is 1.5-2.5 sec on an M3. already quite impressed how all these local models run with such a low latency even without specific optimizations this 2 s of latency is my baseline. now excited to see what a more optimized fully local OSS assistant could reach. 
looking at you Candle, LAION BUD-E, metavoice, Openvoice, suno, Llama.cpp :)" are llms conscious review [[2308.08708] Consciousness in Artificial Intelligence: Insights from the Science of Consciousness](https://arxiv.org/abs/2308.08708) [[2309.00667] Taken out of context: On measuring situational awareness in LLMs](https://arxiv.org/abs/2309.00667) [[2303.07103] Could a Large Language Model be Conscious?](https://arxiv.org/abs/2303.07103) [[2304.05077] If consciousness is dynamically relevant, artificial intelligence isn't conscious](https://arxiv.org/abs/2304.05077) https://philpapers.org/rec/WIECLL Chain-of-Thought Reasoning Without Prompting Can LLMs reason effectively without prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the decoding process. https://twitter.com/arankomatsuzaki/status/1758309932103774329 [[2402.10200] Chain-of-Thought Reasoning Without Prompting](https://arxiv.org/abs/2402.10200) [[2402.08871] Position: Topological Deep Learning is the New Frontier for Relational Learning](https://arxiv.org/abs/2402.08871) Wondering why LLM safety mechanisms are fragile? 🤔 😯 We found safety-critical regions in aligned LLMs are sparse: ~3% of neurons/ranks ⚠️Sparsity makes safety easy to undo. 
Even freezing these regions during fine-tuning still leads to jailbreaks [Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications](https://boyiwei.com/alignment-attribution/) https://www.lesswrong.com/posts/hincdPwgBTfdnBzFf/mapping-the-semantic-void-ii-above-below-and-between-token [Physics Informed Machine Learning: High Level Overview of AI and ML in Science and Engineering - YouTube](https://www.youtube.com/watch?v=JoFW2uSd3Uo) AI empathy [[2302.02083] Evaluating Large Language Models in Theory of Mind Tasks](https://arxiv.org/abs/2302.02083) [[2309.01660] Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain](https://arxiv.org/abs/2309.01660) [[2310.20320] Theory of Mind in Large Language Models: Examining Performance of 11 State-of-the-Art models vs. Children Aged 7-10 on Advanced Tests](https://arxiv.org/abs/2310.20320) brain empathy [Mirror neuron - Wikipedia](https://en.wikipedia.org/wiki/Mirror_neuron) Shall we hardcode mirror-neuron-like structures into AI for AI to have empathy? https://www.sciencedirect.com/book/9780128053973/neuronal-correlates-of-empathy https://www.sciencedirect.com/science/article/abs/pii/B9780128053973000048 [The Neural Correlates of Empathy that Predict Prosocial Behavior in Adolescence](https://escholarship.org/uc/item/4z8243wk) https://knightscholar.geneseo.edu/cgi/viewcontent.cgi?article=1538&context=great-day-symposium With the cyborg era and intelligence explosion, many social frameworks about how humans are special systems dissolve, just like how we figured out that Earth isn't the center of the universe AI will autonomously solve nuclear fusion and build intergalactic dyson spheres and people will be like ok cool anyway what's for dinner We are accelerating towards predicting into reality protopia or utopia for all beings by building it. Optimism enacts optimistic actions that build optimistic reality. We must not lose this spirit. 
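The chain-of-thought-without-prompting result above says CoT paths can be elicited just by changing decoding: branch on the top-k candidates for the first token instead of decoding greedily. Purely to show the decoding mechanics, here is a toy sketch where the "model" is a hand-written bigram table; every token and probability below is made up:

```python
# Toy "CoT without prompting": branch on top-k first tokens, then greedy.
NEXT = {  # token -> [(next_token, prob), ...], sorted by probability
    "<q>":     [("answer", 0.6), ("step", 0.4)],
    "answer":  [("7", 1.0)],
    "step":    [("1:", 1.0)],
    "1:":      [("compute", 1.0)],
    "compute": [("7", 1.0)],
    "7":       [("<eos>", 1.0)],
    "<eos>":   [],
}

def greedy(tok):
    # standard greedy decoding: always take the most probable next token
    out = []
    while NEXT.get(tok):
        tok = NEXT[tok][0][0]
        out.append(tok)
    return out

def branch_decode(start, k=2):
    # branch over the top-k FIRST tokens, then continue each branch greedily
    return [[t] + greedy(t) for t, _ in NEXT[start][:k]]

paths = branch_decode("<q>")
print(paths[0])  # ['answer', '7', '<eos>'] -- the pure greedy path
print(paths[1])  # the second branch surfaces the step-by-step path
```

In the paper the same move on a real LLM surfaces latent reasoning chains (and answer confidence is higher along them); the toy only illustrates why greedy decoding alone can hide those paths.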
FAAH gene eh https://twitter.com/the_megabase/status/1757536195813196229 [OSF](https://osf.io/preprints/psyarxiv/4cbuv) https://twitter.com/IrisVanRooij/status/1686479782727266305?t=alD-M2g88o3c_eNhV3EWfQ&s=19 [Microdosing LSD increases the complexity of your brain signals | New Scientist](https://www.newscientist.com/article/2416478-microdosing-lsd-increases-the-complexity-of-your-brain-signals/) [Mathematics | Free Full-Text | Homological Landscape of Human Brain Functional Sub-Circuits](https://www.mdpi.com/2227-7390/12/3/455) https://twitter.com/burny_tech/status/1758798503680098609 Universal basic services for every being doubters gonna doubt achievers gonna achieve Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75 [Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI | Lex Fridman Podcast #75 - YouTube](https://www.youtube.com/watch?v=E1AxVXt2Gv4) [[2402.08268] World Model on Million-Length Video And Language With Blockwise RingAttention](https://arxiv.org/abs/2402.08268) Reasoning skill is not a single vector and all of the vectors are a spectrum. Humans are a very small subspace of this space of all possible reasonings. [V-JEPA: The next step toward advanced machine intelligence](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/) Natural language processing? Vision? Structuring data? Something else? Language models? Neural nets in general? Statistical learning in general? AI methods in general? Robotics? Symbolic methods? Something else? Software? Hardware? Playing around? Building products in industry? Which applications, chatbots? Working on foundations, or gluing together what already exists? Doing empirical research? Doing theoretical research? Very specialized, or more interdisciplinary? Having more of a general overview? Doing something less technical? Some mix? 
If you want a high-level overview of the topics in the generative subfield of AI, which is the trendiest right now, this is good [Generative AI in a Nutshell - how to survive and thrive in the age of AI - YouTube](https://www.youtube.com/watch?v=2IK3DFHRFfw) though it also goes into topics outside the generative subfield. And this is a really good map of the math behind AI and of AI's subfields [Map of Artificial Intelligence - YouTube](https://www.youtube.com/watch?v=hDWDtH1jnXg) This is a nice high-level summary of the math behind neural nets [But what is a neural network? | Chapter 1, Deep learning - YouTube](https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi) and this is a nice summary of the math behind transformers [What are Transformer Neural Networks? - YouTube](https://www.youtube.com/watch?v=XSSTuhyAmnI) (an architecture built on neural nets) which underlie most modern generative AI technologies, and https://twitter.com/LangChainAI https://twitter.com/llama_index probably has the best feed of trendy glue-it-together stuff 😄 Oh, and this one is great for implementing transformers from scratch without that much math [Let's build GPT: from scratch, in code, spelled out. 
- YouTube](https://www.youtube.com/watch?v=kCc8FmEb1nY) or, more generally, implementing neural nets from the ground up all the way to transformers [Neural Networks: Zero to Hero - YouTube](https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ) [Courses - DeepLearning.AI](https://www.deeplearning.ai/courses/) probably has the best longer courses for industry applications. This is a great list of all kinds of ML courses from universities all over the web, from practical to very theoretical [GitHub - dair-ai/ML-YouTube-Courses: 📺 Discover the latest machine learning / AI courses on YouTube.](https://github.com/dair-ai/ML-YouTube-Courses) What interests me most right now is how neural nets work on the inside, not just their architecture and learning algorithm (the learning algorithm and architecture are a mix of linear algebra, calculus, and statistics (and the algorithms came out of neuroscience, statistical physics, and fucking around and finding out)). For example, it's a very open problem what algorithms they actually learn and how to identify and control them [Concrete open problems in mechanistic interpretability | Neel Nanda | EAG London 23 - YouTube](https://www.youtube.com/watch?v=7t9umZ1tFso) And there's still no proper theoretical physical/mathematical model explaining why neural nets work in the first place, how they learn representations, how they generalize, etc. [A New Physics-Inspired Theory of Deep Learning | Optimal initialization of Neural Nets - YouTube](https://www.youtube.com/watch?v=m2bXL5Z5CBM) For ML, FIT will give you the basic math (linear algebra, calculus, statistics), programming (mainly C++, which is useful if you want to write low-level ML libraries, which only a minority does; otherwise everything is done in Python, which also has a few courses), working in Linux, with databases, basic algorithms, more math (automata are everywhere), statistical learning, neural nets, cryptography is also covered now,... [Studijní plán - Bc. 
specializace Umělá inteligence, 2021](https://bk.fit.cvut.cz/cz/plany/pl30021467.html) It also doesn't have diffusion models, which are now in for image generation [Diffusion models explained. How does OpenAI's GLIDE work? - YouTube](https://www.youtube.com/watch?v=344w5h24-h8) or state-space models like Mamba, which are making a comeback right now [Mamba: Linear-Time Sequence Modeling with Selective State Spaces (Paper Explained) - YouTube](https://www.youtube.com/watch?v=9dSkvxS2EB0) I also don't see symbolic methods or self-organization there [The Future of AI is Self-Organizing and Self-Assembling (w/ Prof. Sebastian Risi) - YouTube](https://www.youtube.com/watch?v=_7xpGve9QEE) The differentiable neural computer is cool [Differentiable neural computer - Wikipedia](https://en.wikipedia.org/wiki/Differentiable_neural_computer) it's interesting how they made a Turing machine out of a recurrent net by explicitly hardcoding memory into it, but people converged on transformers and then added Turing completeness in other ways [[2301.04589] Memory Augmented Large Language Models are Computationally Universal](https://arxiv.org/abs/2301.04589) [[2303.14310] GPT is becoming a Turing machine: Here are some ways to program it](https://arxiv.org/abs/2303.14310) and given that learned finite state machines have been found inside transformers, the transformer itself is, in a sense, technically Turing complete too [Decomposing Language Models Into Understandable Components \ Anthropic](https://www.anthropic.com/news/decomposing-language-models-into-understandable-components) or NOR/XOR gates would suffice https://www.lesswrong.com/posts/2roZtSr5TGmLjXMnT/toward-a-mathematical-framework-for-computation-in btw alternatively MIT, for example, has lectures on that math online, including the famous linear algebra lectures by Gilbert Strang [MIT OpenCourseWare | Free Online Course Materials](https://ocw.mit.edu/) [Gilbert Strang lectures on Linear Algebra (MIT) - YouTube](https://www.youtube.com/playlist?list=PL49CF3715CB9EF31D) [[2305.13048] RWKV: Reinventing 
RNNs for the Transformer Era](https://arxiv.org/abs/2305.13048) [China Says It Plans to Mass-Produce Humanoid Robots Within 2 Years - Business Insider](https://www.businessinsider.com/china-plans-mass-production-humanoid-robots-within-two-years-2023-11) https://www.theregister.com/2024/02/15/feds_go_fancy_bear_hunting/ [[2008.01540] The world as a neural network](https://arxiv.org/abs/2008.01540) [John Hopfield: Physics View of the Mind and Neurobiology | Lex Fridman Podcast #76 - YouTube](https://www.youtube.com/watch?v=DKyzcbNr8WE) [What if we could redesign society from scratch? The promise of charter cities - YouTube](https://www.youtube.com/watch?v=v-A1i2g9riU) https://www.pnas.org/doi/10.1073/pnas.2401731121?utm_source=facebook&utm_medium=social&utm_term=pnas&utm_content=e730bf19-136e-4817-a0fb-fdb1f79d7dfc&utm_campaign=hootsuite Sources of AGI capabilities https://imgur.com/ehrz0q8 [DeepMind’s New AI Beats Billion Dollar Systems - For Free! - YouTube](https://www.youtube.com/watch?v=BufUW7h9TB8) [[2212.12794] GraphCast: Learning skillful medium-range global weather forecasting](https://arxiv.org/abs/2212.12794) [[2310.01889] Ring Attention with Blockwise Transformers for Near-Infinite Context](https://arxiv.org/abs/2310.01889) [[2401.18079] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization](https://arxiv.org/abs/2401.18079) reducing/eliminating costs for necessities (food, energy, medicine/healthcare, transportation, clothing, education, shelter) without sacrificing freedoms and economic principles Comparing GPT4 and Gemini 1.5 on comparing an old and a new compiler codebase in Haskell: Gemini crushes GPT4, answers more complex questions, and hallucinates less. Explaining fixing a concurrency bug, why redundant functions were removed, writing a memdump, explaining rare syntax. Still makes mistakes, but far fewer ⤴️ https://www.reddit.com/r/singularity/s/PihQuJGjok Smell to text https://phys.org/news/2023-09-ai-nose-molecular.html Taste to text 
[Predicting Bordeaux red wine origins and vintages from raw gas chromatograms | Communications Chemistry](https://www.nature.com/articles/s42004-023-01051-9)
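The transformer-from-scratch tutorials linked above all revolve around one core operation, scaled dot-product attention: softmax(QKᵀ/√d)V, with a causal mask for GPT-style decoding. A minimal single-head numpy sketch (toy sizes, random inputs; real implementations add multiple heads, projections, and batching):

```python
import numpy as np

def attention(Q, K, V, causal=True):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (T, T) similarity logits
    if causal:
        T = scores.shape[0]
        mask = np.tril(np.ones((T, T), dtype=bool))
        scores = np.where(mask, scores, -np.inf)   # block attention to future
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # rows sum to 1
    return w @ V, w

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
out, w = attention(x, x, x)                        # self-attention on x

print(w.shape)                                # (4, 4) attention matrix
print(np.allclose(w.sum(axis=1), 1.0))        # each row is a distribution
print(np.allclose(np.triu(w, 1), 0.0))        # causal: no weight on the future
```

Each output row is a weighted average of value vectors, with weights given by query-key similarity; everything else in a transformer block (projections, MLPs, residuals, layer norm) is wrapped around this operation.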