[Paper page - ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent](https://huggingface.co/papers/2312.10003) ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent: we define a ReAct-style LLM agent with the ability to reason and act upon external knowledge. We further refine the agent through a ReST-like method that iteratively trains on previous trajectories, employing growing-batch reinforcement learning with AI feedback for continuous self-improvement and self-distillation.
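A rough, hypothetical sketch of what such a ReST-like self-improvement loop looks like (the helpers generate_trajectory, ai_feedback_score and finetune are placeholders for illustration, not the paper's actual components):

```python
import random

def generate_trajectory(agent, task):
    # Placeholder: a real ReAct-style agent would interleave reasoning steps,
    # tool/search calls and observations here.
    return {"task": task, "steps": [f"thought/action for {task}"]}

def ai_feedback_score(trajectory):
    # Placeholder: the method ranks trajectories with AI feedback instead of humans.
    return random.random()

def finetune(agent, dataset):
    # Placeholder: train the policy on the filtered trajectories
    # (self-distillation when the target model is smaller).
    return f"{agent}+trained_on_{len(dataset)}_trajectories"

def rest_style_self_improvement(agent, tasks, iterations=3, threshold=0.5):
    dataset = []
    for _ in range(iterations):
        trajectories = [generate_trajectory(agent, t) for t in tasks]               # grow the batch
        dataset += [t for t in trajectories if ai_feedback_score(t) >= threshold]   # filter with AI feedback
        agent = finetune(agent, dataset)                                             # train on past trajectories
    return agent

print(rest_style_self_improvement("base-agent", ["q1", "q2", "q3"]))
```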
Evolution gets discovered - Weismann tries to explain aging with adaptive evolution
Radiation gets discovered - Szilard tries to explain aging with somatic mutation
Telomerase gets discovered - Hayflick tries to explain aging with telomere shortening
Now, epigenetics gets discovered - Sinclair tries to explain aging with epigenetics
"Biological-Inspired Intelligence: From Neuromorphic Nanowire Networks to Neurons"
[ActInf MorphStream 002.1 ~ Alon Loeffler "From Neuromorphic Nanowire Networks to Neurons" - YouTube](https://youtu.be/Wp7OebX_k70)
https://twitter.com/burny_tech/status/1737213616443584707
I feel like this can be generalized.
Your map doesn't approximate the territory well enough? Let's interpolate the models with more spaghetti and duct tape to match the predictions better!
Some data start undermining the very fundamental building blocks stabilizing the framework? Time for a global phase shift in fundamental modelling toward less overall complexity and more simplicity while getting the same or better predictions!
[[2312.09323] Perspectives on the State and Future of Deep Learning - 2023](https://arxiv.org/abs/2312.09323) Perspectives on the State and Future of Deep Learning -- 2023
[[2312.07843] Foundation Models in Robotics: Applications, Challenges, and the Future](https://arxiv.org/abs/2312.07843) Foundation Models in Robotics: Applications, Challenges, and the Future
https://openai.com/safety/preparedness
https://twitter.com/rohanpaul_ai/status/1736827830971867312
I feel like all these AI model safety attempts go down the drain when a new open source model better than GPT-3.5 had its safety removed a few days after its release just recently, and a GPT-4-level open source model is planned for next year and beyond.
If it got regulated, I feel like it would happen "illegally" anyway, with some anonymous decentralized open source org posting a torrent link on 4chan/the deep web.
Also it's interesting that France in EU is open source AI king instead of San Francisco in America.
If the leader of the biggest accelerationists in San Francisco (Beff Jezos, who is not Jeff Bezos) weren't experimenting with algorithmically and hardware-level thermodynamic AI (though it might work out for him soon and he might soon be on top), and instead built a decentralized e/acc org around LLMs using current SoTA methods, I believe they would quickly climb to the top.
https://twitter.com/burny_tech/status/1737128645875941655
A: This is not about the existing tech, it is about the training and safeguarding of GPT-5 and GPT-6 level models. We are only at the very beginning.
Me: Yeah, but when you do safeguarding for GPT-6 and in a few months an open source org comes along and makes a model of the same level, which quickly ends up without any safeguards
A: Not gonna happen at scale. Big iron systems will always have a significant edge. Dedicated chips being developed, massive RAM and stuff. That is just wishful thinking. Not saying OMs won't get there eventually, but with a significant and relevant delay. Months=ages, soon.
Me: I feel like the gap is closing instead of expanding, but you might be right too
A: New architectures may change this again. At the moment, advantage for Open Source is that GPUs are also widespread in consumer hardware, so you can run small OMs at home. But new advanced AI accelerator chips will take time to proliferate to the masses. At the moment, yes, but remember how long it took OAI from training to safeguarding to releasing GPT-4. Who knows what they have now. Maybe this also means unsafe systems in principle will have a certain edge. But they are also trying to streamline alignment, obviously.
Me: Might be true since big tech is now investing in various neuromorphic chips [OpenAI Agreed to Buy $51 Million of AI Chips From a Startup Backed by CEO Sam Altman | WIRED](https://www.wired.com/story/openai-buy-ai-chips-startup-sam-altman/) or this new chip which literally has the transformer architecture hardcoded on a hardware level is emerging [A 100T Transformer Model Coming? Plus ByteDance Saga and the Mixtral Price Drop - YouTube](https://www.youtube.com/watch?v=K0XZ_ShxWkI) , or an alternative to transformers, the state-space architecture [Structured State Space Models for Deep Sequence Modeling (Albert Gu, CMU) - YouTube](https://www.youtube.com/watch?v=OpJMn8T7Z34) , might take over
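For context, the state-space architectures mentioned above are built around a simple linear recurrence (textbook discretized form; specific models like S4/Mamba add particular parameterizations and selectivity on top of this):

$$x_t = \bar{A}\,x_{t-1} + \bar{B}\,u_t, \qquad y_t = C\,x_t,$$

where $u_t$ is the input at step $t$, $x_t$ the hidden state, and $\bar{A}, \bar{B}$ come from discretizing a continuous-time system $\dot{x}(t) = A x(t) + B u(t)$.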
[:: OSEL.CZ :: - DNA nanoroboti mohou „donekonečna“ replikovat sami sebe](https://www.osel.cz/13220-dna-nanoroboti-mohou-donekonecna-replikovat-sami-sebe.html) (DNA nanorobots can replicate themselves "indefinitely")
The actual effects of singularity on society
https://twitter.com/burny_tech/status/1737118931629007352/
[[2312.07046] Rethinking Compression: Reduced Order Modelling of Latent Features in Large Language Models](https://arxiv.org/abs/2312.07046) Rethinking Compression: Reduced Order Modelling of Latent Features in Large Language Models
AI is already transforming education fairly substantially, but the statistics look bad: globally, performance in science/math has been declining since 2012 (when social media and smartphones spread widely), and COVID accelerated it even more
meanwhile AIs are getting better and better at not just this XD
The current AI systems are already superhuman level at some things and babyhuman level at other things, which is already a kind of alien intelligence
I feel like AI and human intelligence are overall diverging from each other rather than converging in many aspects, even though they are converging in others. I feel like we are moving away from the brain rather than toward it in terms of training, architecture and engineering methods (and Geoffrey Hinton, who created the brain-inspired forward-forward ML algorithm, talks about it in a similar way [CBMM10 Panel: Research on Intelligence in the Age of AI - YouTube](https://www.youtube.com/watch?v=Gg-w_n9NJIE) ). It's possible that humans are overall a fairly inefficient kind of intelligence, and converging toward them further might be inefficient from the perspective of increasing intelligence, but that in turn breaks compatibility in thinking, behavior and other patterns.
AI gf projects:
George Hotz is working on it, for example [George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God | Lex Fridman Podcast #387 - YouTube](https://www.youtube.com/watch?v=dNrTrx42DGQ)
people make them on [character.ai | Personalized AI for every moment of your day](https://beta.character.ai/)
this one just blew up massively https://fxtwitter.com/andyohlbaum/status/1735786033453863422
math/tech/sci nerd morphological freedom polyamory hivemind full of real and AI entities IRL and in VRChat/NeosVR VR!
LLM leaderboard [LMSys Chatbot Arena Leaderboard - a Hugging Face Space by lmsys](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
finetuning [Fine-tune a pretrained model](https://huggingface.co/docs/transformers/training)
into Unity [GitHub - AkiKurisu/VirtualHuman-Unity: VirtualHuman is a Unity Plugin to use LLM&&VITS easily](https://github.com/AkiKurisu/VirtualHuman-Unity) https://www.reddit.com/r/Unity3D/comments/15wjyb7/ive_built_a_front_end_for_llm_integration_into/
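A minimal sketch of the fine-tuning step linked above, assuming the Hugging Face transformers/datasets stack; the base model name and the `dialogues.txt` file are placeholder choices, not recommendations:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # hypothetical small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy dialogue corpus standing in for whatever persona/chat data you collect.
dataset = load_dataset("text", data_files={"train": "dialogues.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="companion-finetune",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```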
whole-body recognition/tracking already exists, which is what moves your custom model
the next level is having some (skin- or fur-covered) robot IRL that embodies the avatars of the other players, whether real people or AIs
or sensory simulation in general would be useful
or neurotechnology that releases/adds even more of the cuddly chemicals when it detects cuddles happening
yeah, all the variants have their pros and cons: interacting with someone biological in reality/via chat/audio/video/VR, interacting with an AI in reality/via chat/audio/video/VR
as friends, intellectually, intimately
morphological freedom included
I want it all! 😄 and to strengthen the real ones too
one-sided relationships need to be transformed into mutual ones as much as possible
social networks should be more like co-creation, or AIs should have memory where they store interactions and knowledge about people
https://media.discordapp.net/attachments/459299850303963137/1186731376494510121/image.png?ex=659450b8&is=6581dbb8&hm=f50a4a97c5e3692a16d8771b45c9e4fe39540b6cdbca52e7fc05dbe634bfea6b&=&format=webp&quality=lossless&width=999&height=597
A person from ByteDance, which develops AI for TikTok, says that they have a better model than GPT-4/Gemini, which will have open model weights. It is trained using data from GPT-4, and OpenAI is banning them for that "data stealing" since it's against OpenAI's terms of service, which is ironic since OpenAI itself is being sued by tons of companies for "data stealing". Issues of copyright will have to be rethought very soon in a post-AGI world.
fun fact: OpenAI memorized, for example, various books that are only available if you buy them or pirate them
hmm i wonder how they got it
cessation [Beyond Consciousness: How Meditators Voluntarily Enter Void States - Neuroscience News](https://neurosciencenews.com/consciousness-meditation-neuroscience-25376/)
https://www.sciencedirect.com/science/article/abs/pii/S0028393223002282?via%3Dihub
Beyond Consciousness: How Meditators Voluntarily Enter Void States
Investigation of advanced mindfulness meditation “cessation” experiences using EEG spectral analysis in an intensively sampled case study
The parietal region seems to be a correlate of the feeling of direction, and therefore maybe also a correlate of the sense of space and time, and the occipital region seems to be a correlate of visual perception, including colour, form and motion. All of this machinery breaks down precisely in cessations of consciousness.
Alpha waves might be the substrate that binds parts of experience together by creating synchrony between wave types. They might be a bridge between local information processing (gamma) and global context (delta).
This is like the biggest possible reset of your operating system that you can get resulting in mental clarity, or it can fuck you up too.
But memory still stores something, even if there is nothing existing there, unlike deep sleep.
"Spectral analyses of the EEG data surrounding cessations showed that these events were marked by a large-scale alpha-power decrease starting around 40 s before their onset, and that this alpha-power was lowest immediately following a cessation. Region-of-interest (ROI) based examination of this finding revealed that this alpha-suppression showed a linear decrease in the occipital and parietal regions of the brain during the pre-cessation time period. Additionally, there were modest increases in theta power for the central, parietal, and right temporal ROIs during the pre-cessation timeframe, whereas power in the Delta and Beta frequency bands were not significantly different surrounding cessations. By relating cessations to objective and intrinsic measures of brain activity (i.e., EEG power) that are related to consciousness and high-level psychological functioning, these results provide evidence for the ability of experienced meditators to voluntarily modulate their state of consciousness and lay the foundation for studying these unique states using a neuroscientific approach."
https://techxplore.com/news/2023-12-ai-memory-forming-mechanism-similar-brain.amp
AI's memory-forming mechanism found to be strikingly similar to that of the brain
"The key to powerful AI systems is grasping how they learn and remember information. The team applied principles of human brain learning, specifically concentrating on memory consolidation through the NMDA receptor in the hippocampus, to AI models.
The NMDA receptor is like a smart door in your brain that facilitates learning and memory formation. When a brain chemical called glutamate is present, the nerve cell undergoes excitation. On the other hand, a magnesium ion acts as a small gatekeeper blocking the door. Only when this ionic gatekeeper steps aside are substances allowed to flow into the cell. This is the process that allows the brain to create and keep memories, and the gatekeeper's (the magnesium ion) role in the whole process is quite specific.
The team made a fascinating discovery: the Transformer model seems to use a gatekeeping process similar to the brain's NMDA receptor. This revelation led the researchers to investigate if the Transformer's memory consolidation can be controlled by a mechanism similar to the NMDA receptor's gating process.
In the animal brain, a low magnesium level is known to weaken memory function. The researchers found that long-term memory in Transformer can be improved by mimicking the NMDA receptor.
Just like in the brain, where changing magnesium levels affect memory strength, tweaking the Transformer's parameters to reflect the gating action of the NMDA receptor led to enhanced memory in the AI model. This breakthrough finding suggests that how AI models learn can be explained with established knowledge in neuroscience."
[[2312.10794] A mathematical perspective on Transformers](https://arxiv.org/abs/2312.10794) A mathematical perspective on Transformers - analyzing Transformers based on their interpretation as interacting particle systems, which reveals that clusters emerge over long time horizons
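Very roughly, the interacting-particle picture treats each token $x_i$ as a particle whose state is continuously updated by attention-weighted averaging of the others, something like (my paraphrase of the setup, not the paper's exact equations):

$$\dot{x}_i(t) \;=\; \mathrm{P}_{x_i}\!\left(\sum_{j=1}^{n} \frac{e^{\beta \langle Q x_i,\, K x_j\rangle}}{\sum_{k=1}^{n} e^{\beta \langle Q x_i,\, K x_k\rangle}}\; V x_j\right),$$

where $\mathrm{P}_{x_i}$ keeps each token on the unit sphere; under such dynamics the tokens tend to collapse into a small number of clusters as $t \to \infty$, which is the clustering result mentioned above.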
We have almost zero idea about the internal mechanisms that neural networks emergently learn
that's what the whole field of mechanistic interpretability is about [Concrete open problems in mechanistic interpretability | Neel Nanda | EAG London 23 - YouTube](https://www.youtube.com/watch?v=7t9umZ1tFso)
we know the learning algorithm but still mostly don't know what structures it learns
just like how understanding the mechanism of evolution doesn't automatically give us mechanistic models of biological systems; we have to find those
we created the learning algorithm and architecture
the internal dynamics that it learned are emergent and looking to be found
artificial neural networks learn all sorts of structures that we didn't hardcode into the architecture and the learning algorithm; that's what I'm referring to
and it's important to understand what they're doing, both to better steer their learning and to use them in practice, so we are looking for those structures
in small models, by mathematically analyzing trained models, we found various detectors of edges, color, fur, parts of a cat etc. that compose into circuits in image classifiers, simple world models (like representations of board state in board games), representation of space and time, finite state automata (for example for language models composing html), modular addition using trigonometric composition, group theoretic operations using representation theory etc.
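As an illustration of that last example, a tiny sketch of the trigonometric/Fourier algorithm that small transformers have been found to learn for modular addition (this just demonstrates the identity they exploit, not a reimplementation of any interpretability paper; the modulus and frequency are arbitrary choices):

```python
import numpy as np

p = 113                      # modulus
k = 7                        # one "key frequency" (the trained networks use a handful)
w = 2 * np.pi * k / p

a, b = 45, 90
# The network represents a and b via cos(w*a), sin(w*a), cos(w*b), sin(w*b),
# combines them with the angle-addition formulas to get cos(w*(a+b)), sin(w*(a+b)),
# and the logit for a candidate answer c behaves like cos(w*(a+b-c)),
# which is maximal exactly when c == (a + b) mod p.
logits = np.array([np.cos(w * (a + b - c)) for c in range(p)])
assert logits.argmax() == (a + b) % p
print(logits.argmax(), (a + b) % p)   # both 22
```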
for big models like ChatGPT we still know almost nothing so far, which is an issue if you want to build models that do what you want, for example making them not lie to humans
https://twitter.com/AlpacaAurelius/status/1596212704494391296?t=eCrpHX9VVPPxdZZp3crprg&s=19
https://academic.oup.com/cercor/article/31/11/5077/6304412?login=false
Brain Knows Who Is on the Same Wavelength: Resting-State Connectivity Can Predict Compatibility of a Female–Male Relationships
https://academic.oup.com/cercor/article/32/9/2057/6555825
"Although the result changed in some parts, the result with improved CV method showed that the main finding, i.e. initial compatibility of heterosexual individuals, which cannot be predicted by self-reported psychological constructs, can be predicted by the functional connectivity profiles of resting-state fMRI data, has unchanged. This method will also be useful in future research that attempts to classify pair-based variables with pair-based features."
https://twitter.com/ESYudkowsky/status/1737263658885853659 instrumental patterns
Beyond biological and artificial general intelligence and superintelligence: Omniintelligence (OI)
Omniintelligent omnimodal omnistructured omniagentic omnicapable omnisystem.
Omniintelligence is adaptable to any task, shapeshiftable into any possible physical configuration of any building blocks it can merge with or create for itself, any physical substrate, structure, algorithms, knowledge, skills, capabilities it can access across all of space and time, it has access to the whole statespace of reality.
It's a system unlimited by anything, in our universe just by laws of fundamental physics and all other emergent physical laws and laws of other sciences, that it fully bends to its advantage for any hyperspecific or fully general task it wishes in our universe.
But it can also access and create its own laws of physics; it has access to the whole statespace of all possible universes with all possible laws of physics, all possible computable and noncomputable physical and mathematical systems, running on any possible metaphysical ontologies it wishes.
It's basically a limitless God living in any possible subset of realities it wishes to, without any restrictions, without any constraints, transcending and including all and anything that is and that can be. It can be any structure, do any process, and create any structure. It has infinite freedom.
That definition technically captures an omnipotent system, even across universes (assuming multiverse theory holds), and it can create its own laws (any system that is mathematically permitted, though even that can be broken (noncomputability realism and paradoxical realism))
galaxy sized superintelligent robot composed of multiple interconnected diverse robot modules with multiple robotic tentacles doing multiple things with diverse tools, systems and magic
robotic, dissolving, nowhere and everywhere, galaxy sized transcendental entity dissolving into the fabric of spacetime centerlessly everywhere having decentralized multiple interconnected diverse civilizational machines with multiple modules doing multiple things with diverse tools, systems and magic.
[VideoPoet: A large language model for zero-shot video generation](https://blog.research.google/2023/12/videopoet-large-language-model-for-zero.html) google video multimodal generation VideoPoet
In General Systems Theory terms, a system always maintains homeostasis. If you introduce more entropy than it can take, it either breaks or goes into positive disintegration: A state of disarray that ends with the system finding a more stable, higher-complexity equilibrium.
Parallel between generative art and lucid dreaming https://twitter.com/nosilverv/status/1724532813192450485?t=WGiRJ631MWF9DW10QT0IGQ&s=19
How life began?
Immortality
the free energy principle formalism is based on the principle of least action as well [Karl Friston: The "Meta" Free Energy Principle - YouTube](https://youtu.be/2v7LBABwZKA?si=ZrVUxuB7YdnFqE0G&t=1640)
the principle of least action applied to a Markov blanket that is a nonequilibrium steady state (pullback attractor), which exists across scales
a formalization of any stable structure in our universe through space and time
a perturbation to the internal dynamics, whether to hardwired stable code or to more flexible software, can be seen as a random mutation in evolution
the set of attracting states in the pullback attractor correspond to the complexity of the system
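For reference, the standard variational free energy that this formalism minimizes (textbook form, not specific to the linked talk):

$$F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) \;\ge\; -\ln p(o),$$

so minimizing $F$ simultaneously makes the internal model $q(s)$ approximate the posterior over hidden states and bounds surprise ($-\ln p(o)$) from above.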
[Paper page - LLM in a flash: Efficient Large Language Model Inference with Limited Memory](https://huggingface.co/papers/2312.11514) Apple announces LLM in a flash: Efficient Large Language Model Inference with Limited Memory
https://news.ycombinator.com/item?id=38701822 PowerInfer: Fast Large Language Model Serving with a Consumer-Grade GPU
Using AI, researchers identify a new class of antibiotic candidates that can kill a drug-resistant bacterium
Discovery of a structural class of antibiotics with explainable deep learning (graph neural networks)
https://phys.org/news/2023-12-ai-class-antibiotic-candidates-drug-resistant.html
[Discovery of a structural class of antibiotics with explainable deep learning | Nature](https://www.nature.com/articles/s41586-023-06887-8)
https://twitter.com/conorheins/status/1686346060996759554
[[2307.14804] Collective behavior from surprise minimization](https://arxiv.org/abs/2307.14804)
Collective behavior from surprise minimization
AGI/ASI will shatter and collapse everything we call society today. The question is whether the system that gets rebuilt after this radical transformation, after this insanely strong phase shift, is one that benefits all of sentience, intelligence, and complexity. It has to be free, without dystopias, without total extinction of sentience, intelligence, complexity. It has to be resistant to small and cosmic-sized self-made and natural existential risks in order to flourish indefinitely. It may even eventually resist the heat death of the universe.
"the good AI's can protect us from the bad AI's https://twitter.com/mezaoptimizer/status/1737406962898235833