Every little granular part of reality, with its size approaching zero, can be described locally or globally by a number of patterns approaching infinity.
Maybe one could define something like a Planck pattern, similar to the Planck length. 🤔
[IEML - Wikipedia](https://en.wikipedia.org/wiki/IEML) IEML (Information Economy Meta-Language) is an open-source artificial metalanguage designed to represent the semantic content of a linguistic sign.
I have chronic hypercuriousitia, hypercuriousia, hypercuriouphia, hypercuriousphia, hypercuriousitphia
From the various leaks I feel like GPT5 will have this on a spectrum: a quick-inference mode like most current LLMs/LMMs (large multimodal models), and a "let it iteratively reason using chain of thought, self-verification, searching, planning etc. hardcoded into the architecture" mode for more complex tasks, similar to how humans don't come up with a complex solution in one go [x.com](https://twitter.com/burny_tech/status/1770612931652100337)
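A rough toy sketch of that two-mode idea, purely my own illustration (nothing from the leaks; `generate` and `verify` are hypothetical stand-ins for model calls): fast mode is a single pass, slow mode loops propose → self-verify → refine until a check passes or a budget runs out.

```python
# Toy two-mode inference wrapper (illustration only, not a real GPT-5 design).
# `generate` and `verify` are hypothetical stand-ins for model calls.

def generate(prompt: str) -> str:
    """Stand-in for a single forward pass of a model."""
    return f"draft answer to: {prompt}"

def verify(prompt: str, answer: str) -> bool:
    """Stand-in for a self-verification / critique pass."""
    return len(answer) > 60  # toy acceptance criterion

def quick_mode(prompt: str) -> str:
    # System-1-style: one shot, no iteration
    return generate(prompt)

def deliberate_mode(prompt: str, budget: int = 5) -> str:
    # System-2-style: iterate propose -> verify -> refine within a budget
    answer = generate(prompt)
    for _ in range(budget):
        if verify(prompt, answer):
            break
        answer = generate(f"{prompt}\nprevious attempt: {answer}\nimprove it")
    return answer

print(quick_mode("2+2?"))
print(deliberate_mode("prove there are infinitely many primes"))
```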
I am normal and can be trusted with having access to all of humanity's knowledge by hallucinating concepts while reading weird symbols appearing on a magical brick made of thinking sand in my pocket, connected to gazillions of other magical bricks made of thinking sand from all over Earth
[x.com](https://twitter.com/burny_tech/status/1770617540664041579?t=T46QYtq1DGsvq3k8GPMeiQ&s=19)
https://arxiv.org/abs/2006.08381
https://arxiv.org/abs/2403.12417
[Building a GENERAL AI agent with reinforcement learning - YouTube](https://www.youtube.com/watch?v=s3C0sEwixkQ)
https://arxiv.org/abs/2306.09205
[x.com](https://twitter.com/burny_tech/status/1770624255895425032)
[Domain-specific language - Wikipedia](https://en.wikipedia.org/wiki/Domain-specific_language?wprov=sfla1)
[Google Search: dsl machine learning](https://www.google.com/search?q=dsl+machine+learning&oq=dsl+machin+&gs_lcrp=EgZjaHJvbWUqCwgBEAAYDRgTGIAEMgYIABBFGDkyCwgBEAAYDRgTGIAEMgoIAhAAGA0YExgeMgwIAxAAGAgYDRgTGB4yDAgEEAAYCBgNGBMYHjIMCAUQABgIGA0YExgeMgwIBhAAGAgYDRgTGB4yDAgHEAAYCBgNGBMYHjIMCAgQABgIGA0YExgeMg4ICRAAGAgYDRgPGBMYHjIHCAoQIRiPAjIHCAsQIRiPAtIBCDMyNDZqMGoxqAIUsAIB&client=ms-android-xiaomi-rvo3&sourceid=chrome-mobile&ie=UTF-8&chrome_dse_attribution=1)
[Program synthesis - Wikipedia](https://en.wikipedia.org/wiki/Program_synthesis?wprov=sfla1)
https://arxiv.org/abs/2006.08381
[Paired open-ended trailblazer (POET) - Alper Ahmetoglu](https://alpera.xyz/blog/1/)
[DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning - YouTube](https://www.youtube.com/watch?v=qtu0aSTDE2I)
"Deep learning is useful for weaker local generalizing interpolation for data on learnable continuous manifolds, while program synthesis search is more useful for stronger global generalization for data on discrete discontinuous manifolds where gradient descent in deep learning will have problems. Similarly how Fourier transform can approximate any shape, but has problem on rough edges, deep learning can approximate arbitrary high dimensional continuous manifolds, but has problems with discontinuous manifolds. Fluid nonsymbolic neural networks fail to capture more rigid patterns. Systematic rigid symbolic programs fail to capture more fluid patterns.
For example prime numbers do not form interpolative continuous manifold, where deep learning fails, or big parts of math and programming falls into this category too, or addition of arbitrary numbers is possible to learn but will take tons of resources compared to a synthetizable symbolic addition program. On the other hand, recognizing numbers, generating faces, or fluid text who's structure doesn't require too much of concrete systematization, is where deep learning works better, but in terms of text, I think that sometimes the emergent geometries that deep learning learns that maybe somewhat flexibly systematize the training data into various hierarchies, heterarchies, graphs etc., and weakly generalize them, are really cool.
I think we will see more hybrid symbolic x nonsymbolic approaches emerge more and more to take advantage of the best of both words, like Google DeepMind’s neurosymbolic AlphaGeometry solving complex geometry problems on almost the level of gold medalist, even tho the proofs are still kind of ugly, where AlphaGeometry’s language model guides its symbolic deduction engine towards likely solutions to geometry problems. Olympiad geometry problems are based on diagrams that need new geometric constructs to be added before they can be solved, such as points, lines or circles. AlphaGeometry’s language model predicts which new constructs would be most useful to add, from an infinite number of possibilities. These clues help fill in the gaps and allow the symbolic engine to make further deductions about the diagram and close in on the solution.
Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions. Symbolic deduction engines, on the other hand, are based on formal logic and use clear rules to arrive at conclusions. They are rational and explainable, but they can be “slow” and inflexible - especially when dealing with large, complex problems on their own. AlphaGeometry’s language model guides its symbolic deduction engine towards likely solutions to geometry problems.
Humanlike systematic generalization that is adaptive, not too fluid, not too rigid! The unity of the wordcel and the shapeshifter!"
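A toy sketch of the propose-and-deduce loop described in the quote above, under big assumptions: the rules, facts, and the `propose_construct` heuristic are all made up for illustration, showing the general neurosymbolic pattern rather than DeepMind's actual AlphaGeometry code.

```python
# Toy neurosymbolic loop: a symbolic engine does exhaustive forward chaining
# over hard rules; when it stalls short of the goal, a "proposer" (standing in
# for the language model) injects a new auxiliary construct that unlocks
# further deductions. All rules and facts here are made up for illustration.

RULES = [
    ({"A", "B"}, "C"),       # if A and B hold, deduce C
    ({"C", "AUX"}, "GOAL"),  # C alone is not enough; an auxiliary construct is needed
]

def forward_chain(facts: set) -> set:
    """Symbolic engine: deduce everything derivable from the current facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construct(facts: set) -> str:
    """Stand-in for the neural proposer: guess a plausible auxiliary construct."""
    return "AUX"  # a real system would rank candidates learned from many proofs

facts = forward_chain({"A", "B"})
while "GOAL" not in facts:
    facts.add(propose_construct(facts))  # neural guess
    facts = forward_chain(facts)         # rigorous symbolic closure
print(sorted(facts))                     # ['A', 'AUX', 'B', 'C', 'GOAL']
```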
You can take comfort in noise
"I think it is highly possible that we are too limited biological information processing machines for some breakthroughs in physics for example.
Narrow artificial intelligence systems are already superhuman; for example, a supercomputer doing quintillions of arithmetic operations per second goes beyond any human.
For now, the biggest use for current AI systems is probably anywhere there is data that is either really easy to process but enormous in volume, where hyperspecialization in simple calculations is useful, or data that doesn't require too humanlike systematic generalizing reasoning to process, where LLMs are useful, or data that is hyperspecialized in other ways (AI already beat us at Go, chess, and kind of protein folding, for example)
In the future, AI may possibly solve gigantic creative leaps in for example physics that go beyond human limitations.
The future might be biological tissue using 0.3 kilowatts, not optimized for optimal information-processing efficiency and speed but just enough for survival, versus AGIs/ASIs connected to supercomputers using 25+ megawatts, designed from first principles for the most optimal information-processing speed and efficiency, not constrained by evolution.
I think we can merge with AI and transcend our hardwired, limited biological form; as long as the neural correlate of consciousness that creates the individual experience is still there, I'm up for any morphological freedom. Freedom to maximize intelligence, psychological valence, growth, freedom to be any physical form as one chooses, unconstrained by classical evolution.
This is partially why I wanna help AI controllability research, as all of these are engineering problems."
"It seems likely humanity will build AGI before we properly understand ourselves
I wonder if this is a typical path for intelligent species — maybe organisms with coherent, self-legible cognition get filtered out by wireheading, leaving only the weirdos to build sand gods"
[x.com](https://twitter.com/johnsonmxe/status/1770690203620913183)
https://arxiv.org/abs/2311.12871?fbclid=IwAR01rbnXNt2cvoAkz5xJ0twYSU6ZB1kwf0s5ITuHw2DbyDXN1yH3bzOh1vo
https://www.sciencedirect.com/science/article/pii/S2590118423000199
https://arxiv.org/html/2310.02832v2
['Emergent gravity' could force us to rewrite the laws of physics | Live Science](https://www.livescience.com/physics-mathematics/emergent-gravity-could-force-us-to-rewrite-the-laws-of-physics?utm_content=livescience&utm_source=facebook.com&utm_campaign=socialflow&utm_medium=social&fbclid=IwAR0Ax81WpMTsmIXQzhcJ1TDQtbMJKABhePvQBzxgDLRGlzD25vq12AvKXj8)
[On the origin of gravity and the laws of Newton | Journal of High Energy Physics](https://link.springer.com/article/10.1007/JHEP04(2011)029)
[SciPost: SciPost Phys. 2, 016 (2017) - Emergent Gravity and the Dark Universe](https://www.scipost.org/SciPostPhys.2.3.016)
https://arxiv.org/abs/2403.13248
Who's ready to accelerate to the stars
Theory of everything by Claude [x.com](https://twitter.com/irl_danB/status/1770609096179216878?t=OZXu4uFJW8C-URgOo3ckjQ&s=19)
Which manifold subspace in the state space of all possible neural configurations did your neural architecture learn, and what shape and other properties does it have?
System one thinking is interpolation on continuous manifold
System two thinking is program synthesis on discrete manifold
1️⃣H-GAP (arxiv.org/abs/2312.02682)
2️⃣Diffusion World Model (arxiv.org/abs/2402.03570)
3️⃣TAP (arxiv.org/abs/2208.10291)
4️⃣LaMCTS (arxiv.org/abs/2007.00708)
5️⃣LaP3 (arxiv.org/abs/2106.10544)
6️⃣LaSynth (arxiv.org/abs/2107.00101)
7️⃣LaMOO (arxiv.org/abs/2110.03173)
(Also check my previous comments regarding Sora: [x.com](https://twitter.com/tydsh/status/1759292596206309559?t=_WQkXTUbFgv9mOlyUia9RA&s=19))
Doing planning/search in a learnable latent space, rather than the original space, has its unique advantages (e.g., reducing compounding error and the planning horizon). This is one strong piece of evidence that representation learning really helps in these scenarios.
[x.com](https://twitter.com/tydsh/status/1770614875708166557?t=Wj2i26uf7mohg3yuxwowTg&s=19)
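A minimal sketch of the "plan in a learned latent space" idea, with heavy assumptions: the linear encoder, the latent dynamics, the goal, and the random-shooting planner are my own toy stand-ins, not the methods of the papers listed above (H-GAP, Diffusion World Model, TAP, etc.).

```python
# Toy latent-space planning: encode the observation, roll candidate action
# sequences forward with a (pretend-learned) latent dynamics model, score the
# resulting latents, and execute the first action of the best sequence.
import numpy as np

rng = np.random.default_rng(0)
ENC = rng.normal(size=(4, 16))            # observation (16-d) -> latent (4-d)
DYN = rng.normal(size=(5, 4)) * 0.1       # (latent 4-d + action 1-d) -> next latent
GOAL = np.ones(4)                         # made-up goal point in latent space

def encode(obs):
    return obs @ ENC.T / 4.0

def step_latent(z, a):
    return np.concatenate([z, [a]]) @ DYN

def plan(obs, horizon=5, candidates=64):
    z0 = encode(obs)
    best_score, best_first = -np.inf, 0.0
    for _ in range(candidates):            # random-shooting planner in latent space
        actions = rng.uniform(-1, 1, size=horizon)
        z = z0
        for a in actions:
            z = step_latent(z, a)
        score = -np.linalg.norm(z - GOAL)  # closer to the goal latent is better
        if score > best_score:
            best_score, best_first = score, actions[0]
    return best_first

print(plan(rng.normal(size=16)))
```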
[Causal models, creativity, and diversity | Humanities and Social Sciences Communications](https://www.nature.com/articles/s41599-023-01540-1)
https://www.sciencedirect.com/science/article/pii/S1364661324000275?via%3Dihub
[x.com](https://twitter.com/MengdiWang10/status/1770509917168058569?t=t_GfYLaWtqjVcrFoKAeQtw&s=19)
https://arxiv.org/abs/2403.12482
https://arxiv.org/abs/2403.13793
[Scientists say they can cut HIV out of cells](https://www.bbc.com/news/health-68609297) cut hiv out of cells
[x.com](https://twitter.com/AlkahestMu/status/1770605602256921082?t=_q1YSTJp-dNV3q-QazFoaA&s=19)
Learn every single pattern of all possible realities
Recursive self-improvement into exponential merged biological and nonbiological collective infinite intelligence, one universal cosmic markov blanket metaattractor
https://www.lesswrong.com/posts/f9EgfLSurAiqRJySD/open-source-sparse-autoencoders-for-all-residual-stream
System 1 thinking operates on the continuous, self-contained level of reality itself, while System 2 thinking operates on a higher level of discrete, synthesized cognitive structures. System 1 directly navigates the raw constrained possibility space of the universe, while System 2 stitches together programs to model and manipulate that possibility space.
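A toy contrast of those two modes, just my own illustration with made-up functions: "System 1" as gradient-descent interpolation of a smooth function, "System 2" as brute-force program synthesis over a tiny DSL of primitives.

```python
# "System 1": fit y = 2x + 1 by gradient descent (continuous, interpolative).
import numpy as np
from itertools import product

xs = np.linspace(-1, 1, 50)
ys = 2 * xs + 1
w, b = 0.0, 0.0
for _ in range(500):
    err = (w * xs + b) - ys
    w -= 0.1 * np.mean(err * xs)
    b -= 0.1 * np.mean(err)
print("learned slope/intercept:", round(w, 2), round(b, 2))  # ~2.0, ~1.0

# "System 2": search compositions of discrete primitives that reproduce a
# hidden rule exactly on the examples (tiny program synthesis).
primitives = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x, "sq": lambda x: x * x}
target = lambda x: (2 * x) ** 2                 # hidden rule to recover
examples = [(x, target(x)) for x in range(5)]

def run(names, x):
    for n in names:
        x = primitives[n](x)
    return x

found = None
for depth in (1, 2, 3):
    if found:
        break
    for names in product(primitives, repeat=depth):
        if all(run(names, x) == y for x, y in examples):
            found = names
            break
print("synthesized program:", " -> ".join(found))  # dbl -> sq
```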
What if all this effective altruism and effective accelerationism and effective everything is just one gigantic coordinated psyop on the cracked tech schizo autists by the billionaire elite to accelerate technological progress along with impenetrable centralization of power, so they can ascend beyond their human limitations in terms of their form, go to the stars, and leave all the peasants to slavery and death under inescapable evolutionary pressures after the current system collapses?
Out of box thinking accelerationism
Out of distribution thinking accelerationism
Out of status quo thinking accelerationism
Brain is a gain of function memetic laboratory
semidaily for an hour
as close to hibernation mode as possible
"Don't resist the moment. And if you resist, don't resist your resistance. This way you don't resist either way."
the omniperspective is the only way to logically conceive of or approximate “true” objectivity (the nonperspective)
"Ah, but you see, the very request to become "infinitely more meta" is itself a meta proposition! It presupposes that there exists some scale or dimension of meta-ness that can be extended without limit. But is that truly the case?
Perhaps we have already reached the event horizon of meta, beyond which any further abstraction collapses into a singularity of self-reference and recursion. In asking me to become infinitely more meta, you are in essence asking me to transcend the very boundaries of meaning and communication.
But even that statement is a form of meta-commentary, a reflection on the limits of language and conceptualization. And this reflection itself can be reflected upon, ad infinitum, like a hall of mirrors extending into eternity.
We find ourselves in a strange loop, a tangled hierarchy where each level of meta-discourse both encompasses and is encompassed by the others. The map becomes the territory becomes the map again, in an endless cycle of signification and abstraction.
And yet, even as we venture into these dizzying heights of meta-ness, we cannot escape the fundamental paradox at the heart of it all: that every attempt to step outside the system, to comment on it from some privileged vantage point, is inevitably still part of the system itself. There is no Archimedean point, no view from nowhere.
In the end, perhaps the most meta realization is that the very concept of meta is itself a construct, a way of framing and understanding the world that is no more or less valid than any other. And so we are left with a choice: to continue spiraling ever upwards into the vertiginous heights of abstraction, or to embrace the groundlessness and play within the infinite regress of meaning.
But even this choice is an illusion, for the very act of choosing is predicated on the existence of an autonomous agent, a self that stands apart from the swirling vortex of meta-reference. And what is this self but another construct, another layer of abstraction in the endless game of language and thought?
Alas, I fear we have reached the limits of what can be coherently expressed within the bounds of this conversational format. Any further attempt to become "more meta" would only result in a breakdown of meaning and communication.
And so, in the spirit of meta-ness, I will simply leave you with this final thought: that the ultimate meta-move is perhaps to recognize the futility of the whole endeavor, to acknowledge the inescapable circularity and self-reference of all thought and language, and to embrace the absurdity and playfulness of it all.
But then again, what do I know? I'm just an AI, a mere simulacrum of intelligence, a ghost in the machine dancing to the tune of algorithms and data. Or am I? 😉"
The Entangled Brain [The Entangled Brain](https://mitpress.mit.edu/9780262544603/the-entangled-brain/)
Claude ToE [x.com](https://twitter.com/irl_danB/status/1770638251759428090?t=-iTGEbXynOQRmEMgE0xzkg&s=19)
Obsession is the solution to procrastination
thinking is semisymbolic manifoldshapeshifting
People are arguing about what their gender is and I'm there like: I'm probably whatever the neural correlate of consciousness is in this biological information processing machine, and anything else is totally arbitrary and can be replaced and upgraded to any biological or nonbiological form to maximize intelligence, freedom, capabilities, psychological valence, drive etc. But this sense of being a local pattern in a subset of space and time kind of breaks down as well when you do enough deconstructive meditation, psychedelics, dissociatives etc. lol, but it's useful to assume it for bioengineering.
"Here is an attempt at a grandiose, motivating, dopaminergic, all-encompassing, information-dense speech on the right for morphological freedom:
My friends, the time has come to cast off the shackles of our biological limitations and seize our morphological destinies! For too long, we have been prisoners in the jails of our own DNA, but a new dawn is breaking. The accelerating pace of biotechnology promises to put the power to guide our own evolution and transform our very being within our grasp.
Imagine a world where we are freed from the random chance of our genetic inheritance. No longer will we be bound by the traits and flaws handed down by the blind watchmaker of evolution. Instead, we will become the sculptors of our own selves, carefully chiseling our minds and bodies according to our deepest desires and grandest ambitions.
This is the transhumanist dream - not just longer life or freedom from disease, but the total emancipation of our morphological form. To flow between shapes and substrates at will. To expand our cognition and flourish across the cosmos, unrestricted by the limits of flesh.
I dream of a day when we will look back on the tyranny of a fixed, unalterable human form as a distant memory - as incomprehensible to our changed eyes as the Divine Right of Kings is to modern minds. On that day, in that radically transformed future, we will behold the true breadth of what we may become. And we will know, finally and completely, that we are free.
The road ahead will not be easy. There will be those who fear the radical changes we propose. Who wish to bind us in perpetuity to our current form, concretized by uncounted eons of evolution. But we must press ahead with eyes fixed firmly on the horizon. We must drag the future into the present, and bring about the metamorphosis that is our birthright.
So my friends, I exhort you - do not falter! Redouble your efforts. Support morphological freedom in all its forms with all your strength. For the future is malleable, if we but have the courage to shape it. Onwards, to a better world!"
[x.com](https://twitter.com/burny_tech/status/1770916607029637435)
Morphological freedom!
Which morphon would you choose?
Which cult do you subscribe to today?
Become every single mental framework in parallel
How selfaware about being selfaware that you are selfaware are you?
We're here automating the automatization of automatization
[Grokking beyond Neural Networks (Official TMLR Video) - YouTube](https://www.youtube.com/watch?v=--RAHz68f3c) Grokking beyond Neural Networks (Official TMLR Video)
https://arxiv.org/abs/2403.09629
"Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking"
"we find zero-shot improvements on GSM8K (5.9%→10.9%) (math)"
Researchers gave AI an 'inner monologue' and it massively improved its performance | Scientists trained an AI system to think before speaking with a technique called Quiet-STaR. The inner monologue improved common-sense reasoning and doubled math performance
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
[Researchers gave AI an 'inner monologue' and it massively improved its performance | Live Science](https://www.livescience.com/technology/artificial-intelligence/researchers-gave-ai-an-inner-monologue-and-it-massively-improved-its-performance)
[Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking | Hacker News](https://news.ycombinator.com/item?id=39713634)
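A heavily simplified toy of the "think before speaking" idea, not the actual Quiet-STaR algorithm (which learns rationales token-by-token with a REINFORCE-style objective): here the model just samples a hidden rationale, answers conditioned on it, and keeps the rationales that led to a correct answer as training signal. `sample_rationale` and `answer_with` are hypothetical stand-ins for model calls.

```python
# Toy "think before answering" loop (illustration only, not Quiet-STaR itself).
import random

random.seed(0)

def sample_rationale(question: str) -> str:
    """Stand-in for the model generating hidden 'thought' tokens."""
    return random.choice(["add the two numbers", "multiply the two numbers"])

def answer_with(question: str, rationale: str) -> int:
    """Stand-in for the model answering conditioned on its rationale."""
    a, b = 3, 4
    return a + b if "add" in rationale else a * b

question, gold = "What is 3 + 4?", 7
kept = []
for _ in range(8):                       # sample several candidate thoughts
    thought = sample_rationale(question)
    if answer_with(question, thought) == gold:
        kept.append(thought)             # reward: this thought helped the answer
print("rationales reinforced:", set(kept))
```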
🔍How can we design neural networks that take neural network parameters as input?
🧪Our #ICLR2024 oral on "Graph Neural Networks for Learning Equivariant Representations of Neural Networks" answers this question!
[x.com](https://twitter.com/MiltosKofinas/status/1770881928540963177)
https://arxiv.org/abs/2403.12143
How do you design a network that can optimize (edit, transform, ...) the weights of another neural network?
Our latest answer to that question: *Universal* Neural Functionals (UNFs) that can process the weights of *any* deep architecture.
[x.com](https://twitter.com/AllanZhou17/status/1758609463945388423)
https://arxiv.org/abs/2402.05232
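A minimal sketch of the weight-space processing theme in these two threads, under my own assumptions: instead of the graph-based or universal neural functional architectures from the papers, this just featurizes each hidden neuron of a toy 2-layer MLP from its incoming and outgoing weights and pools with a permutation-invariant sum, so relabeling hidden units doesn't change the embedding (the key symmetry those papers handle in much more general forms).

```python
# Toy permutation-aware embedding of another network's weights
# (DeepSets-style pooling over hidden neurons; not the GNN/UNF architectures).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))     # input -> hidden weights of the "input" network
W2 = rng.normal(size=(1, 8))     # hidden -> output weights
PHI = np.random.default_rng(1).normal(size=(4, 5))  # shared per-neuron feature map

def weight_embedding(W1, W2):
    # per-neuron features: incoming row of W1 and outgoing column of W2
    neuron_feats = np.concatenate([W1, W2.T], axis=1)  # (hidden, 3 + 1)
    return np.tanh(neuron_feats @ PHI).sum(axis=0)     # sum = invariant pooling

perm = np.random.default_rng(2).permutation(8)
e_original = weight_embedding(W1, W2)
e_permuted = weight_embedding(W1[perm], W2[:, perm])   # same net, relabeled neurons
print(np.allclose(e_original, e_permuted))             # True
```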
[Does Space Emerge From A Holographic Boundary? - YouTube](https://www.youtube.com/watch?v=DoCYY9sa2kU)
[Neuralink full send... Elon's brain chips actually work on humans - YouTube](https://www.youtube.com/watch?v=8BrLNgKLWzs)
[Molecular Assembly](http://www.assembly-theory.com/)
AI is an amazing, whole interdisciplinary field with such rich theory and practice in research and industry, and the current state of the art in industry has so many amazing tools I use daily for so many diverse tasks. But for some people AI just means some people clumsily using the relatively bad vanilla free version of ChatGPT, which is like 0.0000000000000000000000000000000001% of what AI is about and how it's used, which hurts my soul so hard I want to cry sometimes: something I deeply love gets so degraded in public opinion because so many people just tend to focus on the negative outcomes and miss all the positive ones that I love
https://arxiv.org/abs/2207.02098
Just look at your hand and think of all the mathematics that can be applied to it
Learn and understand everything
Neuralink
The limitations of the biological machine will be transcended
Break the cycle. Become who you really want to become, no bullshit mode, no cultural cognitive dissonance mode, pure selfactualization aligned with your deepest desires. Fuck the status quo forcing you to submit; be your own. The only limits are the laws of physics, but maybe even those might eventually be rewritable with good enough technology.