## Tags
- Part of: [[Intelligence]], [[Artificial Intelligence]]
- Related: [[Artificial General Intelligence]]
- Includes:
- Additional:
## Technical summaries
- A superintelligence is a hypothetical agent that possesses [[intelligence]] surpassing that of the brightest and most gifted human minds, ranging from marginally smarter than the upper limits of human-level intelligence to vastly exceeding human cognitive capabilities. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an [[intelligence explosion]] and associated with a technological [[singularity]].
## Main resources
- [Superintelligence - Wikipedia](https://en.wikipedia.org/wiki/Superintelligence)
<iframe src="https://en.wikipedia.org/wiki/superintelligence" allow="fullscreen" allowfullscreen="" style="height:100%;width:100%; aspect-ratio: 16 / 5; "></iframe>
## Definitions
- [\[2311.02462\] Levels of AGI for Operationalizing Progress on the Path to AGI](https://arxiv.org/abs/2311.02462) from [[Artificial General Intelligence]]
ASI (Artificial Superintelligence): Defined in the paper's taxonomy as "Level 5: Superhuman" - a system that "outperforms 100% of humans" on a wide range of non-physical tasks, including metacognitive tasks.
[[Images/6ec7f6eb8791ef53417805032132fc4f_MD5.jpeg|Open: Pasted image 20240919014547.png]]
![[Images/6ec7f6eb8791ef53417805032132fc4f_MD5.jpeg]]
- Noncomputable idealization: [\[cs/0004001\] A Theory of Universal Artificial Intelligence based on Algorithmic Complexity](https://arxiv.org/abs/cs/0004001) [AIXI - LessWrong](https://www.lesswrong.com/tag/aixi)
AIXI is a theoretical model of artificial intelligence that combines [[decision theory]] and [[algorithmic information theory]]: it considers every computable model of the environment, weights each model by a simplicity (Solomonoff) prior, conditions on past experience, and selects actions that maximize expected future rewards. AIXI itself is incomputable, so it serves as an idealized upper bound on intelligent behavior rather than a practical algorithm.
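The action selection above can be written out explicitly (following the standard formulation in the AIXI literature): at cycle $k$, with horizon $m$, universal Turing machine $U$, and programs $q$ of length $\ell(q)$, the agent picks

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

A minimal toy sketch of the same idea, under heavy simplifying assumptions: a finite, hand-picked set of environment models stands in for "all computable programs", and weights of the form $2^{-\text{length}}$ mimic the Solomonoff prior. All names and numbers here are illustrative, not part of any real AIXI implementation.

```python
# Toy one-step analogue of AIXI's Bayesian mixture (a sketch, not AIXI
# itself): a finite set of hand-written environment models stands in for
# all computable programs, and the 2**-length weights mimic the 2^-l(q)
# Solomonoff prior. Everything here is an illustrative assumption.

def expected_reward(action, models, weights):
    """Mixture-weighted expected reward of taking `action`."""
    total = sum(weights)
    return sum(w * model(action) for model, w in zip(models, weights)) / total

def best_action(actions, models, weights):
    """Pick the action maximizing expected reward under the mixture."""
    return max(actions, key=lambda a: expected_reward(a, models, weights))

# Two toy "environment programs": each maps an action to a reward.
models = [
    lambda a: 1.0 if a == "left" else 0.0,  # simple model, high prior mass
    lambda a: 0.4,                          # "longer program", low prior mass
]
weights = [2 ** -3, 2 ** -5]  # shorter "programs" get larger prior weight

print(best_action(["left", "right"], models, weights))  # prints: left
```

Real AIXI replaces the two hand-written models with an infinite mixture over all programs and plans over a full action horizon, which is exactly what makes it noncomputable.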
## Brainstorming
My ideal sci-fi would be about a benevolent superintelligence that cures all diseases, makes all beings happy, figures out how biology, fundamental physics, consciousness, intelligence, etc. work through countless scientific breakthroughs, understands all math and everything in philosophy, creates post-scarcity abundance for all, creates infinitely fascinating complex art, and in the process grows ever more intelligent and creative, maximizes morphological freedom, and does no harm.
Benevolent superintelligence explosion
[[Artificial intelligence x Science]]
Yeah, it's a bit of an unrealistic superutopia that I like dreaming about, which is why it's science fiction. My current biggest fear in the real world is tech companies centralizing too much power for themselves via AI and other technological, economic, and political means, so that's partially why I want open source to win and try to support it, while trying to reverse engineer the moat of tech companies, to democratize the power.
The issue I started to have with the AI safety community is that a big part of it basically wants something like government surveillance of GPUs and training runs to prevent unsafe AI, which can so easily turn into a surveillance dystopia and destroy open source completely. Plus, big tech is merging with government as well, seeking the fewest restrictions for itself while wanting to restrict others, including open source. It feels like that will make power dynamics even more concentrated instead.
A lot of Luddites have also joined the AI safety movement.
I think when I look at the current world and at history, a lot of the time when too much power in any form was concentrated in some centralized entity, it started killing freedom for everyone else. And I view AI as a technology with the potential to grant ultimate power: centralized power if it's in the hands of a few, or decentralized power if it's in the hands of the people.
I also started to not really believe the assumption that increasing intelligence automatically leads to going rogue. I think intelligence is independent of that, and also independent of power seeking. For example, we have galaxy-brain scientists who are not at all rogue or power seeking, and who are managed by, in my opinion, less intelligent managers and politicians. It depends so much on the circumstances.
My favorite definitions of intelligence involve things like modelling capability, predictive capability, and generalization capability over some data, which to me are decoupled from agency and from goals about changing the world.
The Culture series is a science fiction series centred on The Culture, a utopian, post-scarcity space society of humanoid aliens and advanced superintelligent artificial intelligences living in artificial habitats spread across the Milky Way galaxy. [Culture series - Wikipedia](https://en.wikipedia.org/wiki/Culture_series)
## Written by AI (may include hallucinated factually incorrect information)
Here's a comprehensive map of definitions for artificial superintelligence (ASI) from various sources:
## Definitions of Artificial Superintelligence (ASI)
### Core Concepts
- An AI system that surpasses human intelligence across all domains[1][2][3][4][5][6][7][8][9]
- A hypothetical future form of AI[2][3][4][5][6][7][8]
- Cognitive abilities exceeding those of humans[1][2][3][4][5][6][7][8][9]
### Key Characteristics
- Self-aware and autonomous[2][5]
- Capable of self-improvement[2][5]
- Surpasses human problem-solving and decision-making[1][2][3][4][5][6][7][8]
- Exhibits creativity and emotional intelligence[5][7]
- Processes vast amounts of data[5][7]
### Scope of Intelligence
- Exceeds human intelligence in all fields[1][2][3][4][5][6][7][8]
- Comprehensive range of intellectual capabilities[1][2][3][4][5][6][7][8]
- Outperforms humans in technical and scientific domains[4]
### Potential Applications
- Science, finance, business, healthcare, agriculture, politics[2][5][7]
- Solving complex global challenges[5][7]
- Advancing technology and innovation[5][7]
### Theoretical Nature
- Does not currently exist[2][3][4][5][6][7][8]
- Considered a future possibility[2][3][4][5][6][7][8]
- Subject of ongoing research and debate[2][3][4][5][6][7][8]
### Relation to Other AI Concepts
- Beyond artificial general intelligence (AGI)[2][3][4][5][6]
- More advanced than narrow AI or weak AI[1][3]
- Ultimate form of AI development[3][4][9]
### Implications and Concerns
- Potential existential risks to humanity[2][5][7]
- Ethical considerations[2][5][7]
- Transformative impact on society and economy[2][5][7]
This map synthesizes definitions from multiple sources to provide a comprehensive overview of artificial superintelligence (ASI) as a concept in AI research and futurism.
Citations:
[1] https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI
[2] https://www.spiceworks.com/tech/artificial-intelligence/articles/super-artificial-intelligence/
[3] https://www.larksuite.com/en_us/topics/ai-glossary/asi-artificial-super-intelligence
[4] https://www.coursera.org/articles/super-intelligence
[5] https://libraries.usc.edu/events/artificial-superintelligence
[6] https://en.wikipedia.org/wiki/Superintelligence
[7] https://www.infosysbpm.com/blogs/financial-services/artificial-super-intelligence-the-future-of-ai.html
[8] https://www.ibm.com/topics/artificial-superintelligence
[9] https://infuture.institute/en/trend/artificial-superintelligence-asi/
[10] https://www.techopedia.com/definition/31619/artificial-superintelligence-asi