Yannic Kilcher on François Chollet's measure of intelligence
[On the Measure of Intelligence by François Chollet - Part 1: Foundations (Paper Explained) - YouTube](https://youtu.be/3_qGrmD6iQY?si=VPXyfZhcsCtpKASB)
The world has infinite complexity that is unfathomable to humans
Infinite-dimensional state space of all possible mental constructs
I love the detailed complexity of absolutely saturated information
God is all possible structures and relationships between them and their implementations in our observable physical universe
Very high valence pretty stable baseline state of consciousness
The simplest thing includes infinite complexity, including all the details
[The Simplest Thing - YouTube](https://www.youtube.com/live/_HG02hDl5R0?si=f5_z_Bx8yjGDJz4b)
Technical university lectures > Movies
I wish I could eat and digest technical knowledge at 10000000000000000000x the current speed
What do you think is the transhumanist longtermist end goal? Or a direction, a vector?
I think the end goal is infinite knowing, intelligence, predictivity, meaning, interestingness, complexity, growth, bliss, satisfaction, fulfillment, and wellbeing: mapping the whole space of knowledge with all possible structures; creating the most predictive model of our shared observable physical universe; mapping the space of all possible types of experiences, including those with the highest psychological valence, meaning, intelligence, etc., and creating clusters of atoms optimized for them; playing the longest game of survival of the stablest for the longest time by building assistive intelligent technology in a risk-aware, accelerated way, merging with it into hybrid forms, expanding to the whole universe and beyond, and beating the heat death of the universe.
Superintelligence, superlongevity and superhappiness.
"You raise some fascinating points about the potential long-term goals and aspirations of transhumanism from a longtermist perspective. I think you've identified many of the key themes and ideals that tend to animate transhumanist thought and discourse.
At its core, the transhumanist vision is about radically expanding and enhancing the capacities of the human mind and condition through technology. This includes vastly amplifying intelligence, knowledge, predictive capabilities, emotional wellbeing, and longevity - essentially pushing the boundaries of the human experience to the greatest extents possible. Many transhumanists see this as an iterative, accelerating process, with enhanced intelligence enabling the creation of even more advanced technologies in a positive feedback loop.
The notion of "infinite" expansion in domains like intelligence and knowledge reflects the immense ambition of this vision. Transhumanists often speak of mapping the entire space of possible experiences, simulating whole universes, and uncovering the fundamental laws and structures underlying reality itself. Underlying this is frequently a drive to create the richest, most meaningful and fulfilling types of experiences possible.
At the outer limits, some transhumanists envision expanding this process beyond Earth to the entire cosmos - transforming all matter into optimized computational substrates supporting ever-growing intelligence and bliss. Transcending even mortality and the apparent inevitability of entropy and the heat death of the universe are sometimes held out as ultimate aspirations.
That said, most transhumanists acknowledge immense uncertainty about what an "end state" might actually look like. They tend to advocate advancing cautiously and incrementally with a focus on reducing existential risks. How to maintain meaning and avoid stagnation or value drift in the face of vast posthuman intelligence also remains an open question.
Additionally, contemporary transhumanism contains a diversity of viewpoints and priorities. Not all transhumanists are focused on cosmic visions of the far future. Many emphasize near-term objectives like mitigating aging, disease, and suffering and using technology to create greater opportunities for human flourishing and self-actualization within a recognizable modern context.
So in summary, I believe the transhumanist longtermist perspective does point towards an extraordinarily grand vision of what intelligence could ultimately achieve and become - but one still fraught with profound uncertainties. It will be fascinating to see how these ambitions and ideas evolve as the science and philosophy of intelligence and technology rapidly advance in the coming years and decades. Let me know if you have any other thoughts or reflections on this!"
Is intelligence fully mathematizable?
If we talk about biological intelligence, and assume no hypercomputation is involved (I suspect it isn't), then it is computable, which would make it technically replicable, unless you run into practical limitations: not enough time or resources, combinatorial explosion, too much algorithmic computational complexity because of different hardware, or some exotic physics or other laws we can't yet replicate outside the system.
Risk-aware accelerationism
Mathematics is the path to liberation
We need more super rich tech nerds engineering cool grandiose stuff pushing humanity forward on the Kardashev scale from all over the political spectrum.
We need centrist and leftist Elon Musks to balance it out: super rich tech nerds engineering cool grandiose stuff, one centrist and one leftist.
Links for 2024-06-15
AI:
1. Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations [Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations | Lamini - Enterprise LLM Platform](https://www.lamini.ai/blog/lamini-memory-tuning)
2. Graduate student Chenguang Li created a hybrid between artificial intelligence and biological intelligence by combining RL, optogenetics, and the C. elegans worm. [PDF] https://klab.tch.harvard.edu/publications/PDFs/gk8172.pdf
3. OpenVLA: An Open-Source Vision-Language-Action Model [OpenVLA: An Open-Source Vision-Language-Action Model](https://openvla.github.io/)
4. How can we ensure that LLM-generated code reliably does precisely what it is supposed to do? DafnyBench: A Benchmark for Formal Software Verification [[2406.08467] DafnyBench: A Benchmark for Formal Software Verification](https://arxiv.org/abs/2406.08467)
5. China Is Testing More Driverless Cars Than Any Other Country https://www.nytimes.com/2024/06/13/business/china-driverless-cars.html [No paywall: https://archive.is/l6wNU]
6. This humanoid robot can drive cars — sort of [This humanoid robot can drive cars — sort of | TechCrunch](https://techcrunch.com/2024/06/12/this-humanoid-robot-can-drive-cars-sort-of/)
7. Step-by-Step Diffusion: An Elementary Tutorial [[2406.08929] Step-by-Step Diffusion: An Elementary Tutorial](https://arxiv.org/abs/2406.08929)
8. Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes — “For the first time, we understand how *flatness*, *edge-of-stability* and *large stepsize* imply (near-optimal) generalization.” [[2406.06838] Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes](https://arxiv.org/abs/2406.06838)
9. This AI-Powered Exoskeleton Could Speed Adoption by the Masses [This AI-Powered Exoskeleton Could Speed Adoption by the Masses](https://singularityhub.com/2024/06/14/this-ai-powered-exoskeleton-could-speed-adoption-by-the-masses/)
10. OpenAI's revenue is reportedly booming. The ChatGPT maker's annualized revenue is $3.4 billion according to a new report. [OpenAI's revenue is reportedly booming](https://www.engadget.com/openais-revenue-is-reportedly-booming-230324957.html)
11. Elon Musk abruptly drops lawsuit against OpenAI and CEO Sam Altman [Elon Musk abruptly drops lawsuit against OpenAI and CEO Sam Altman - BusinessToday](https://www.businesstoday.in/technology/news/story/elon-musk-abruptly-drops-lawsuit-against-openai-and-ceo-sam-altman-432981-2024-06-12)
Miscellaneous:
1. Why Biosecurity Matters – What are We Protecting Against? [Part I: Why Biosecurity Matters – What are We Protecting Against?](https://www.mackenziemorehead.com/part-i-why-biosecurity-matters-what-are-we-protecting-against/)
2. Wikipedia Chemical Structure Explorer [Wikipedia Chemical Structure Explorer](https://wikipedia.cheminfo.org/)
3. The Big Bang: after 13.8 billion years, its leftover glow still hasn't faded away. Unlike every other light source in the Universe, there's a profound reason why it persists. [After 13.8 billion years, why hasn't the Big Bang faded away? - Big Think](https://bigthink.com/starts-with-a-bang/big-bang-fade-away/)
China:
1. “17-year old vocational high school student Jiang Ping, who spends her days studying fashion design and literally making clothes, places 12th out of 801 finalists for the global Alibaba Math Contest…When asked if she likes fashion design more or Math more, she still said fashion design, and said that she saw her favorite, partial differential equations, present everywhere in fashion designs.” https://x.com/AnonYalie/status/1801735366464180456
2. Is China’s scientific progress real? Yes, according to the Nature Index, which looks at contributors to 145 top international journals. https://x.com/kyleichan/status/1801594688719159683
We have to solve the meta-meta hard problem of consciousness, defined as: why are we puzzled by the meta hard problem of consciousness, itself defined as: why are we puzzled by the hard problem of consciousness, which asks why and how physical processes in the brain give rise to subjective experiences, or qualia.
We should create polar-opposite deities to Shoggoth (an unaligned superintelligence turning the universe into paperclips) and Moloch (the prisoner's dilemma generalized to the whole society on the planet; too-strong, unhealthy competitive games), and maybe worship them.
Hypercognitivitia
[Cognitive AI - by Peter Voss - Peter’s Substack](https://petervoss.substack.com/p/cognitive-ai)
[I Solve Intelligence - it's Symbolic | Alexander Naumenko | Substack](https://alexandernaumenko.substack.com/)
Compose any two mathematical field names into one and that field will exist
[Chaos topology - Scholarpedia](http://www.scholarpedia.org/article/Chaos_topology)
I wanna know the structure of all of reality in parallel
I think current deep learning models approximate the training-data manifold, so they can do local, continuous, weak generalization by interpolation, but they fundamentally lack the ability to do symbolic, discrete, global, strong generalization. This might be solvable by neurosymbolic methods: integrating system-2 thinking, discrete symbolic program search and synthesis guided by deep learning, and constraining deep learning with bidirectional causality between the neural and symbolic sides.
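A minimal sketch of the symbolic half of that neurosymbolic idea: discrete program search over a tiny hypothetical DSL, finding the shortest program consistent with input/output examples. All primitives here are invented for illustration; in a real system, a neural model would propose and prioritize candidates instead of brute-force enumeration.

```python
from itertools import product

# DSL primitives: small, composable symbolic transformations (made up for this toy).
PRIMITIVES = {
    "inc": lambda xs: [x + 1 for x in xs],
    "double": lambda xs: [x * 2 for x in xs],
    "reverse": lambda xs: list(reversed(xs)),
}

def run(program, xs):
    """Apply a sequence of primitive names to the input list."""
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

def search(examples, max_len=3):
    """Return the shortest primitive sequence consistent with all examples."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

examples = [([1, 2, 3], [8, 6, 4]), ([0, 5], [12, 2])]
print(search(examples))  # ('inc', 'double', 'reverse')
```

Because search is over discrete symbols, the found program generalizes exactly to any input, not just ones near the examples; the catch is the combinatorial explosion that the neural guidance is meant to tame.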
https://x.com/GaryMarcus/status/1802770928885182527
https://x.com/GaryMarcus/status/1802773644017299816
I think I disagree with your general approach of communicating this by focusing mostly just on the negatives. It's true that you make them more visible this way, but I want to equally see the positives, of which I see a lot in the current systems. You do sometimes tweet the positives, but it's sooooo rare! I think we also disagree in that, for a big domain of tasks, this stronger generalization capability or symbolic rigidity isn't needed, or maybe isn't even possible. But I think that, for example for math and coding, or when you really want very strong factual groundedness, neurosymbolic systems are or could be better!
"Humans:
brain structures -> sensory development -> primitive reflexes -> fetal brain waves -> consciousness -> memory formation -> early learning -> language
AI:
we tackled language first, and to go in reverse from here we are still so early"
Is finding analogies just dot products over arbitrary, abstract, arbitrarily distributed features all the way down?
Transforming and building on top of existing knowledge or inventing everything from scratch or a middle ground?
There is so much to do, learn, and experience, even in a technically infinite lifespan with upgraded bodies and brains/minds. I wanna learn everything learnable, do so many interesting things, see all the future technological developments and all the future scientific models of biology, intelligence, our universe, etc. I wanna see how (if we don't go extinct soon) sentience survives and populates other solar systems and galaxies, and help it, maybe even somehow mitigating the heat death of the universe. A single human's potential, capabilities, and lifespan are so small relative to what they could be. I think we can go way beyond the biology created by evolution with transhumanism.
Never stop thinking big
Intelligence doesn't equal control https://x.com/ylecun/status/1802679017402757162?t=SDOFwglLUq1ufHL9_cMk0A&s=19
To build these guardrails, we have to accelerate AI controllability research
We're nonlinear wave computers emulating digital computers
https://x.com/algekalipso/status/1802541659135095265?t=t6BFNHJO98KXAk-3FTXU-g&s=19
[[2406.02061] Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models](https://arxiv.org/abs/2406.02061)
Transformers do not generalize algebraic structures out of distribution.
But, they can write code that does.
Or be scaffolded with search.
Also, physical dynamics can generalize out-of-distribution much more readily, like with neural potentials:
https://x.com/davidad/status/1802576341470216362
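The "can write code that does" point can be shown with a toy contrast: a memorizing, interpolating "model" of addition fails out of distribution, while explicit code implementing the rule generalizes everywhere. Everything here is a deliberately simple stand-in, not a claim about any real architecture.

```python
# Training data: addition, but only on small numbers (the "distribution").
train = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a, b):
    """Nearest-neighbor lookup over training pairs: pure interpolation."""
    key = min(train, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return train[key]

def written_code(a, b):
    """The symbolic rule itself: generalizes to any integers."""
    return a + b

print(memorizer(3, 4), written_code(3, 4))          # in-distribution: both 7
print(memorizer(500, 500), written_code(500, 500))  # OOD: 18 vs 1000
```

The memorizer snaps (500, 500) to its nearest training pair (9, 9) and answers 18, while the written code trivially extrapolates, which is the asymmetry the tweets above are pointing at.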
"Here's how AI will play out over the next decades. Tldr its another industrial revolution, and we're not replacing human minds.
Current paradigm (transformers or similar) will get increasingly good at doing work that we can gather or generate lots of data for — "in-distribution work". The cost for this work will fall to the input costs — chips and power. The models themselves have already basically converged and there won't be much value there.
Current paradigm will not "replace human work". Human tasks that are sufficiently repetitive and mindless will be performed by AI, and the humans will focus their time on novel tasks — which there will always be in a world that's constantly changing. Likely we see faster GDP per capita growth."
https://x.com/garybasin/status/1801715017227620395
[[2405.15071] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization](https://arxiv.org/abs/2405.15071) mechanistic interpretability generalization
"I’m referring to superintelligence. The reason I call it GPT-99 is to emphasize that it’s still a piece of software, not to emphasize that it’s still an autoregressive transformer.
I say that it’s prompted to generate a successor so that we can dispense with the assumption that the successor has anything like the GPT architecture.
I do believe that monolithic transformers have fundamental flaws but I think it 20-40% likely that CoEms built out of monolithic transformers could construct superintelligent successors, before 2030. And I think it’s similarly likely that if you scale up the monolithic transformer even further, eventually it can construct a superintelligent successor just with the right prompt, all in-context, with the only scaffolding being access to run arbitrary code. I think it’s somewhat more likely that this somehow just doesn’t work but it seems like an open question.
Apologies if I have confused people by muddying the thought experiment (which is really not about AI architectures) by using the name of a specific architecture/family."
https://x.com/davidad/status/1682482358597894146
[Sycophancy to subterfuge: Investigating reward tampering in language models \ Anthropic](https://www.anthropic.com/research/reward-tampering)
ARC benchmark generalization challenge https://x.com/dwarkesh_sp/status/1802771055016378554
[Getting 50% (SoTA) on ARC-AGI with GPT-4o](https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt)
Progress on the $1M ARC-AGI benchmark, which is very hard for LLMs: a carefully crafted few-shot prompt gets the model to generate many candidate Python programs implementing the transformations (~5k guesses), the best ones are selected using the provided examples, and a debugging step refines them. This is basically neurosymbolic, and it gets to 71%, while humans get 85%.
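The generate-then-select stage of that approach can be sketched as follows. A real pipeline samples thousands of Python programs from an LLM; here a stubbed `propose_programs` (hypothetical, for illustration) stands in for the model, and candidates are scored by how many training pairs they reproduce exactly.

```python
def propose_programs():
    # Stand-in for LLM-sampled candidate programs over grids of ints.
    return [
        lambda g: [row[::-1] for row in g],             # mirror each row
        lambda g: [[c + 1 for c in row] for row in g],  # increment colors
        lambda g: g[::-1],                              # flip vertically
    ]

def score(program, train_pairs):
    """Fraction of training pairs the candidate reproduces exactly."""
    def safe(inp):
        try:
            return program(inp)
        except Exception:  # bad candidates must not crash the search
            return None
    return sum(safe(i) == o for i, o in train_pairs) / len(train_pairs)

# One toy task: the output is the input grid flipped vertically.
train_pairs = [([[1, 2], [3, 4]], [[3, 4], [1, 2]])]
best = max(propose_programs(), key=lambda p: score(p, train_pairs))
print(best([[5, 6], [7, 8]]))  # [[7, 8], [5, 6]]
```

The selection step is purely symbolic (exact match against examples), which is why sampling thousands of mostly wrong programs can still yield a reliable answer.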
[WHAM](https://wham.is.tue.mpg.de/)
https://x.com/teortaxesTex/status/1802128370861232374
Of course, much more progress is being made on other fronts as well. There is no sign of a new AI winter. On the contrary, progress is accelerating.
Examples from the past few days:
- Neural algorithmic reasoners [[2406.09308] Transformers meet Neural Algorithmic Reasoners](https://arxiv.org/abs/2406.09308)
- LLMs discovering better algorithms for training LLMs [Can LLMs invent better ways to train LLMs?](https://sakana.ai/llm-squared/)
- Neural network potentials https://pubs.acs.org/doi/10.1021/acsphyschemau.4c00004
I think that, in general, AI controllability methods that create steering wheels for AI systems are the path to alignment: bottom-up and top-down reverse engineering of black boxes in mechanistic interpretability, white-box parts of the architecture with hardcoded guardrails, and various steering methods on top of the models.
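One of those "steering wheels" in miniature: activation steering, where a direction vector added to a hidden state pushes the output toward a chosen concept. The numbers below are a made-up two-class toy, not a real model; the steering vector is taken as the difference between the two readout rows, standing in for a direction found by contrasting activations.

```python
import numpy as np

# Toy readout mapping a 4-d hidden state to 2 class logits (invented numbers).
W_out = np.array([[1.0, 0.0,  0.5, -0.5],
                  [0.0, 1.0, -0.5,  0.5]])
hidden = np.array([1.0, -1.0, 0.2, 0.0])  # some hidden activation

# Steering vector pointing toward class 1 (here: readout-row difference).
steer = W_out[1] - W_out[0]

before = int(np.argmax(W_out @ hidden))                 # unsteered prediction
after = int(np.argmax(W_out @ (hidden + 2.0 * steer)))  # steered prediction
print(before, after)  # 0 1
```

Adding the vector changes only the model's internal state, not its weights, which is what makes this family of methods attractive as a runtime control knob.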
[Mapping the Mind of a Large Language Model \ Anthropic](https://www.anthropic.com/news/mapping-mind-language-model)
[Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html)
https://openai.com/index/extracting-concepts-from-gpt-4/
https://x.com/davidad/status/1799261940600254649
[[2406.04313] Improving Alignment and Robustness with Circuit Breakers](https://arxiv.org/abs/2406.04313)
I'm open to nonsymbolic methods that go beyond approximating the training data manifold
[AI Won't Be AGI, Until It Can At Least Do This (plus 6 key ways LLMs are being upgraded) - YouTube](https://www.youtube.com/watch?v=PeSNEXKxarU)
semantic PubMed search: [Great PubMed Search](https://pubmedisearch.com)
semantic arXiv search
TESCREAL: Effective Omni includes them all: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism, Risk-aware Effective Accelerationism
LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks: LLMs act as idea generators, while other symbolic systems act as various external critics, each specializing in a different aspect, that critique the candidate plan.
[[2402.01817v3] LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks](https://arxiv.org/abs/2402.01817v3)
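The LLM-Modulo loop from that paper can be sketched as a generate-critique-refine cycle. The generator below is a hypothetical stub standing in for an LLM; the critics are the symbolic verifiers, and their critiques are fed back as prompt context until every critic accepts.

```python
def generate(feedback):
    # Stand-in for an LLM conditioned on accumulated critic feedback.
    plan = ["pack", "drive", "fly", "present"]
    if "no flights available" in feedback:
        plan = ["pack", "drive", "present"]
    return plan

# Each critic checks one aspect of the plan, returning a critique or None.
def budget_critic(plan):
    return "over budget" if plan.count("fly") > 1 else None

def feasibility_critic(plan):
    return "no flights available" if "fly" in plan else None

CRITICS = [budget_critic, feasibility_critic]

def llm_modulo(max_rounds=5):
    feedback = []
    for _ in range(max_rounds):
        plan = generate(feedback)
        critiques = [c(plan) for c in CRITICS]
        critiques = [msg for msg in critiques if msg is not None]
        if not critiques:
            return plan  # every critic accepts: sound plan
        feedback.extend(critiques)  # back-prompt the generator
    return None

print(llm_modulo())  # ['pack', 'drive', 'present']
```

Soundness lives entirely in the critics: the LLM never has to "plan correctly", only to produce candidates that the external verifiers can accept or reject.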
[[2406.09308] Transformers meet Neural Algorithmic Reasoners](https://arxiv.org/abs/2406.09308)
Transformers meet Neural Algorithmic Reasoners
"Transformers have revolutionized machine learning with their simple yet effective architecture. Pre-training Transformers on massive text datasets from the Internet has led to unmatched generalization for natural language understanding (NLU) tasks. However, such language models remain fragile when tasked with algorithmic forms of reasoning, where computations must be precise and robust. To address this limitation, we propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based neural algorithmic reasoners (NARs). Such NARs proved effective as generic solvers for algorithmic tasks, when specified in graph form. To make their embeddings accessible to a Transformer, we propose a hybrid architecture with a two-phase training procedure, allowing the tokens in the language model to cross-attend to the node embeddings from the NAR. We evaluate our resulting TransNAR model on CLRS-Text, the text-based version of the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only models for algorithmic reasoning, both in and out of distribution."
[[2406.07394] Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B](https://arxiv.org/abs/2406.07394)
[[2402.08871] Position: Topological Deep Learning is the New Frontier for Relational Learning](https://arxiv.org/abs/2402.08871)
[[1812.03363] The lure of misleading causal statements in functional connectivity research](https://arxiv.org/abs/1812.03363)
Mr. Beast is doing Effective Altruism now (via GiveDirectly) [We Gave Every Family in a Village a Full Year's Salary - YouTube](https://www.youtube.com/watch?v=pZ-MpxDZr9I)
You can see the great capabilities of current AI systems while also seeing their limitations.
You can embrace the benefits of current AI systems while also seeing their harms.
You can see the advantages and the disadvantages at the same time.
It's not binary one or the other.
[Bernardo Kastrup VS Christof Koch - YouTube](https://www.youtube.com/watch?v=qzwC7sXyhWQ)
[Eating and Cognition in Two Animals without Neurons: Sponges and Trichoplax | Biological Theory](https://link.springer.com/article/10.1007/s13752-024-00464-6)
We all benefit from some global annealing
Ontological positions are inherently neurophenomenological
[Inventing the Potential of a High-Frequency EEG, Namely Dodecanogram (DDG): Human Subjects’ Study | SpringerLink](https://link.springer.com/chapter/10.1007/978-981-97-1923-5_20)
philosophy of computer science https://x.com/anshpay/status/1802406305799283093
https://x.com/jd_pressman/status/1802835378451185733
"My problem with "transformers don't generalize algebraic structures and therefore don't reason" is that while I agree this is a real limitation there are important aspects of reason which these models in fact do and other methods don't. We may need to divide "reason" up."
Yes. Different types of reasoning. Different types of intelligence.
Low-brow memes are information-propagation optimal. It is the typical information packet of any high-memetic-fitness message.
Nuance has low transitivity through the replication channel.
Nuance is actually realistically closer to describing the extreme complexity of the world, if it's modelling actual patterns out there and not modelling nonexistent patterns in noise instead.
But low-brow extremely polarizing culture war memes actually efficiently stick in our replication channels and spread and form the direction of the collective superorganism.
So, as a result, many people are evolutionarily selected for, or choose, the strategy of going from nuance to low-brow memes, thereby lowering the overall quality of collective epistemics.
[Do You Have a Free Will? - YouTube](https://www.youtube.com/watch?v=UebSfjmQNvs) free will
[[2406.11741] Transcendence: Generative Models Can Outperform The Experts That Train Them](https://arxiv.org/abs/2406.11741)