"no cloning theorem The no-cloning theorem is a fundamental result in quantum mechanics that states it is impossible to create an identical copy of an arbitrary unknown quantum state[3]. In other words, there is no universal process that can perfectly duplicate any given quantum state. The theorem was first stated by Wootters, Zurek, and Dieks in 1982[3][4]. It has profound implications in fields like quantum computing and quantum cryptography. The proof of the no-cloning theorem relies on the linearity of quantum operations. It shows that if a hypothetical quantum cloning machine could copy two orthogonal quantum states (like |0⟩ and |1⟩), then by linearity it would not correctly clone a superposition of those states (like 1/√2(|0⟩+|1⟩))[1][2]. This leads to a contradiction, proving that a universal quantum cloning machine cannot exist. Some key consequences of the no-cloning theorem are: 1. It prevents using classical error correction techniques on quantum states, as backup copies cannot be made. However, quantum error correction is still possible via other means[3]. 2. It protects the uncertainty principle. If cloning were possible, one could make many copies of a state and measure each copy to arbitrary precision, violating the uncertainty principle[4]. 3. It prevents superluminal communication via quantum entanglement, which would violate causality[4]. While perfect cloning of quantum states is ruled out, it is possible to create imperfect copies that are close to the original state[4]. But arbitrary unknown quantum states fundamentally cannot be duplicated exactly, a profound departure from the classical world where information can be freely copied. Citations: [1] [PDF] CSE 599d - Quantum Computing The No-Cloning Theorem, Classical Teleportation and Quantum Teleportation, Superdense Coding - Washington https://courses.cs.washington.edu/courses/cse599d/06wi/lecturenotes4.pdf [2] The No Cloning Theorem - YouTube [The No Cloning Theorem - YouTube](https://www.youtube.com/watch?v=owPC60Ue0BE) [3] No-cloning theorem - Wikipedia [No-cloning theorem - Wikipedia](https://en.wikipedia.org/wiki/No-cloning_theorem) [4] The no-cloning theorem - Quantiki [The no-cloning theorem | Quantiki](https://www.quantiki.org/wiki/no-cloning-theorem) [5] 13.7: No Cloning Theorem - Engineering LibreTexts [13.7: No Cloning Theorem - Engineering LibreTexts](https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Signal_Processing_and_Modeling/Information_and_Entropy_%28Penfield%29/13:_Quantum_Information/13.07:_No_Cloning_Theorem) " If no cloning theorem is true assuming quantum mechanics is true and if consciousness/individual experience/identity is inherently quantum mechanical, then we cannot copy minds Autonomy Recursive self-improvement Selfreplication AI outcomes [x.com](https://twitter.com/burny_tech/status/1782666851044000178?t=MxHW87WLpYpYDKUnpIyEgw&s=19) Russia attacked because expanding nato in Ukraine near it's boarders scared it? 
From all the recent papers and leaks from various leading AGI labs, I think the next step will be general math, coding, and reasoning reaching superhuman domains via self-play/search/self-correction mechanisms, similar to how chess and Go reached superhuman level with AlphaZero and how coding recently progressed with AlphaCode 2, combined with neurosymbolic methods like AlphaGeometry in geometry or TacticAI in football tactics (a toy sketch of the generic search-and-verify loop follows at the end of this batch of links).

[x.com](https://twitter.com/SpencrGreenberg/status/1781702814500016486?t=5rkC08Y7MjEoabKNEBx7Ig&s=19)
[AlphaGo - Wikipedia](https://en.wikipedia.org/wiki/AlphaGo?wprov=sfla1)
[GPT-5: Everything You Need to Know So Far - YouTube](https://youtu.be/Zc03IYnnuIA?si=xN9GhhxlD9kchoJ1)
[Gemini Full Breakdown + AlphaCode 2 Bombshell - YouTube](https://youtu.be/toShbNUGAyo?si=C4S0QE1GP2oTtpy4)
[Alpha Everywhere: AlphaGeometry, AlphaCodium and the Future of LLMs - YouTube](https://youtu.be/dOplrIJEYBo?si=K02lW8tkjciVdZPe)
[This is what DeepMind just did to Football with AI... - YouTube](https://youtu.be/I7J67JOkIbI?si=8tUY1xotvtUU_m-x)
[Principles of Riemannian Geometry in Neural Networks | TDLS - YouTube](https://m.youtube.com/watch?v=IPrNIjA4AWE&fbclid=IwZXh0bgNhZW0CMTEAAR1Yyam6KoMDM68iBDXAjBAXXvpS3s-1y0IPng0CNWQw7qRhh7Pr0YXYCjU_aem_AfNmxStbdF7_dg6TIUB2M2onlCVmDE4hSpsO6FC_FwQhsR1motXlWZGsmgVxP09rCXQ3tLK5dk7L5rQcNcl6o8nv)

>Our latest experiments show that our Lean co-pilot can automate >80% of the mathematical proof steps (2.3 times better than the previous rule-based baseline aesop).
https://arxiv.org/abs/2404.12534 [x.com](https://twitter.com/AnimaAnandkumar/status/1782518528098353535?t=WweTdkLQwVhUQCmYTH4Feg&s=19)

LLM jailbreaking strategies: [x.com](https://twitter.com/NannaInie/status/1782358650641600762?t=7JjYczfC3NbK2E_MM-7pSQ&s=19)

Genetically engineered soybeans with animal protein: [x.com](https://twitter.com/simonmaechling/status/1782512994997436840?t=WUsd25xNsKC8uTkb9Vq-ZQ&s=19)

https://www.psypost.org/ai-connects-gut-bacteria-metabolites-to-alzheimers-disease-progression/

[Scientists create 'toxic AI' that is rewarded for thinking up the worst possible questions we could imagine | Live Science](https://www.livescience.com/technology/artificial-intelligence/scientists-create-toxic-ai-that-is-rewarded-for-thinking-up-the-worst-possible-questions-we-could-imagine)

I've got a pretty good idea of what's happening behind closed doors. Regrouping of links I've been posting; all of the below is in the name of human-level generality for LLMs:
- navigating latent space: <https://github.com/brendenlake/MLC>
- reasoning structure: <https://arxiv.org/abs/2402.03620>
- meta thinking: <https://arxiv.org/abs/2403.09629>
- learning without prior training: <https://www.nature.com/articles/s41593-024-01607-5>
- imagining: <https://arxiv.org/abs/2306.09205>
- a way to grow knowledge flexibly: <https://arxiv.org/abs/2006.08381>
- self improvement: <https://arxiv.org/abs/2404.12253>

[AI can predict political orientations from blank faces – and researchers fear 'serious' privacy challenges | Fox News](https://www.foxnews.com/politics/ai-can-predict-political-orientations-blank-faces-researchers-fear-serious-privacy-challenges)

What fine-tuning to use: [Understanding RLHF](https://understanding-rlhf.github.io/)

List of smarts: [x.com](https://twitter.com/tsarnick/status/1782615380470722940?t=BB3sb4wsgMkwXp4mpgax2w&s=19)

[The (Simple) Theory That Explains Everything | Neil Turok - YouTube](https://youtu.be/ZUp9x44N3uE?si=s2q1fvUAjfLyd9Wf)

Was OthelloGPT's world-model measurement partially baked into the measurement tool?
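As promised above, a toy Python sketch of the generic "propose with search, keep what a verifier accepts" loop that AlphaZero-style and AlphaCode-style systems build on. Everything here (the cubic, the function names, best-of-N random proposal standing in for a learned policy) is my own illustrative assumption, not how any of the linked systems actually works:

```python
import random

def propose(rng: random.Random) -> int:
    """Proposer: a stand-in for a learned policy that samples candidate solutions."""
    return rng.randint(-10, 10)

def verify(x: int) -> bool:
    """Verifier: a cheap, exact check of a candidate (here: is x a root of the cubic?)."""
    return x**3 - 6 * x**2 + 11 * x - 6 == 0

def search(n_samples: int = 1000, seed: int = 0) -> set[int]:
    """Best-of-N search: sample many candidates, keep only the ones that verify.
    The verified set could then be fed back as training data (self-improvement)."""
    rng = random.Random(seed)
    return {x for x in (propose(rng) for _ in range(n_samples)) if verify(x)}

if __name__ == "__main__":
    print(search())  # expected: {1, 2, 3}
```

In the real systems the proposer is a learned policy or LLM, the verifier is a game outcome, unit tests, or a proof checker (as with the Lean co-pilot quoted above), and verified outputs are recycled into training.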
[AI Starts to Sift Through String Theory’s Near-Endless Possibilities | Quanta Magazine](https://www.quantamagazine.org/ai-starts-to-sift-through-string-theorys-near-endless-possibilities-20240423/?fbclid=IwZXh0bgNhZW0CMTEAAR1UyAH_xPRturepxopZR-AET7UrbJhjNRcUeNzKBBTcsQ8tIzBiwDZ8oEA_aem_Ac2sXrB9n8HVcGoUDreRkWJkKk1eNvImhkIOwo1jbT9qmf8I4SX2AUpevuzQxnrsfYDWqHZlrUJOcuvPV8bF5LBG)

"kernel methods equations

Kernel methods are a class of machine learning algorithms that use a kernel function to map data into a higher-dimensional feature space, where it becomes easier to find linear patterns or decision boundaries[1][2]. Some key equations and concepts in kernel methods include:

1. The kernel function $k(x, z)$ computes the inner product between two data points $x$ and $z$ in the high-dimensional feature space, without explicitly computing the coordinates in that space[1]: $k(x, z) = \langle \varphi(x), \varphi(z) \rangle$, where $\varphi$ is the feature map from input space to feature space.

2. The kernel matrix or Gram matrix $K$ contains the kernel function evaluated for all pairs of training examples[1][2]: $K_{ij} = k(x_i, x_j)$. $K$ must be symmetric positive semi-definite for a valid kernel.

3. The representer theorem states that the solution to certain optimization problems involving regularized empirical risk can be expressed as a linear combination of kernel functions evaluated at the training points[2]: $f(x) = \sum_i \alpha_i k(x_i, x)$.

4. In support vector machines, the dual optimization problem for finding the optimal margin classifier depends only on the kernel matrix[1][5]: $\max_\alpha \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j k(x_i, x_j)$ subject to $\sum_i \alpha_i y_i = 0$ and $0 \le \alpha_i \le C$.

Popular kernel functions include the linear, polynomial, and Gaussian radial basis function (RBF) kernels[3]. The choice of kernel allows complex nonlinear patterns to be learned using efficient linear methods in the implicit high-dimensional space induced by the kernel[1][2].

Citations:
[1] [Kernel methods: an overview (PDF) - People @EECS](https://people.eecs.berkeley.edu/~jordan/kernels/0521813972c02_p25-46.pdf)
[2] [Kernel methods in machine learning (PDF) - arXiv](https://arxiv.org/pdf/math/0701907.pdf)
[3] [Kernel method | Engati](https://www.engati.com/glossary/kernel-method)
[4] [Kernel method - Wikipedia](https://en.wikipedia.org/wiki/Kernel_method)
[5] [Chapter 7: An Introduction to Kernel Methods (PDF) - University of Missouri](https://calla.rnet.missouri.edu/cheng_courses/com/Quadratic_programming_svmintro.pdf)
"
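A minimal numpy sketch of the equations above: the RBF kernel, the Gram matrix with its symmetry/PSD checks, and a representer-theorem-style predictor. The fit uses kernel ridge regression with regularizer lam, my choice for a closed-form illustration rather than something the quoted sources prescribe:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2),
    an inner product in an implicit high-dimensional feature space."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                      # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)   # noisy regression targets

# Gram matrix K_ij = k(x_i, x_j): must be symmetric positive semi-definite.
K = rbf_kernel(X, X)
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-10       # PSD up to round-off

# Representer theorem: the fitted function is f(x) = sum_i alpha_i k(x_i, x).
# Kernel ridge regression gives alpha in closed form: (K + lam*I) alpha = y.
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def f(x_new):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x) at a new point."""
    return rbf_kernel(np.atleast_2d(x_new), X) @ alpha

print(f(X[0])[0], y[0])  # fitted value vs. (noisy) training target
```

An SVM would instead solve the quoted dual QP over alpha; either way the data enters only through the Gram matrix, which is the "kernel trick".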
https://www.techrxiv.org/users/684323/articles/848527-meat-meets-machine-multiscale-competency-enables-causal-learning

[Generalization in diffusion models from geometry-adaptive harmonic representation | Zahra Kadkhodaie - YouTube](https://youtu.be/V_t6QppPbwQ?si=uAYhUEsM2I2yEupm)

"<@703734994777931807> it seems to me what Zahra means by "strong generalization" is both of the following:

1) The learned models are "independent" of the datasets. Taken literally in isolation this would be nonsense, because obviously the models depend on the dataset. But in context she meant that if you take two samples of data from the *same distribution* (80x80 center-framed grayscale human faces in this case), and the sample counts are large enough to force the machine into the generalization regime, then it learns the "same" model (in technical fact it learns very similar, not identical, models) regardless of the specific sample. One can intuit this by understanding that as the sample set gets far too large for memorization given the degrees of freedom in the machine's parameter space, the machine is forced to learn "mean" features.

2) The learned models are closer to each other than the samples are to each other. This is demonstrated by showing that the same random seed, once denoised by the models, results in images more similar to each other than their closest match in the respective training sets. One can intuit this result by recognizing that here the concept of "close" depends on the insane dimensionality of the problem, i.e. the curse of dimensionality. The dimensionality of the models is far lower than the dimensionality of the inputs (because we are in the generalization regime, see above), and therefore two random points in the lower-dimensional model space are much more likely to be close to one another than two random points chosen in the higher-dimensional input space.

It's also important to keep in mind that Zahra defines "generalization" as "continuous interpolation between training images"."

Diffusion models:
[How does Stable Diffusion work? – Latent Diffusion Models EXPLAINED - YouTube](https://youtu.be/J87hffSMB60?si=1xBNVo4nvFBsT_bf)
[What are Diffusion Models? - YouTube](https://youtu.be/fbLgFrlTnGU?si=D7ifNZwpCFUocE_Q)
[Diffusion Models | Paper Explanation | Math Explained - YouTube](https://youtu.be/HoKDTa5jHvg?si=DoXwb1tPqnCfJsk9)
[Denoising Diffusion Probabilistic Models | DDPM Explained - YouTube](https://youtu.be/H45lF4sUgiE?si=-Lmpp5QnXQTgRwnU)

ChatGPT Nigeria: https://arxiv.org/abs/2209.06862

[Generative AI Research Spotlight: Demystifying Diffusion-Based Models | NVIDIA Technical Blog](https://developer.nvidia.com/blog/generative-ai-research-spotlight-demystifying-diffusion-based-models/)

AI decoding and talking to animals:
[Earth Species Project](https://www.earthspecies.org)
[Project CETI - Home](https://www.projectceti.org/)
https://www.science.org/doi/10.1126/science.adg7314
https://www.scientificamerican.com/article/artificial-intelligence-could-finally-let-us-talk-with-animals/

Robot world models: [Introducing RFM-1: Giving robots human-like reasoning capabilities](https://covariant.ai/insights/introducing-rfm-1-giving-robots-human-like-reasoning-capabilities/)

[Demystifying the Default Mode Network - YouTube](https://youtu.be/SdfF-_Dfw7I?si=fQhvm7GN4wr5PLRi)

"Schrödinger equation

The Schrödinger equation is the fundamental equation of quantum mechanics that describes how the wavefunction of a quantum-mechanical system evolves over time. It plays a central role similar to Newton's laws in classical mechanics[3].

The time-dependent Schrödinger equation in one dimension is[1][2]:

$i\hbar\frac{\partial}{\partial t}\Psi(x,t) = \left[-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}+V(x)\right]\Psi(x,t)$

where:
- $\Psi(x,t)$ is the wavefunction (a complex-valued probability amplitude)
- $\hbar$ is the reduced Planck constant
- $m$ is the mass of the particle
- $V(x)$ is the potential energy
- the first term in brackets represents the kinetic energy and the second the potential energy[5]

The time-independent Schrödinger equation is[2][4]:

$-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\psi(x)+V(x)\psi(x)=E\psi(x)$

where $E$ is the total energy of the system. The solutions $\psi(x)$ to this equation are the stationary states of the system. The Schrödinger equation allows determining the allowed energies and wavefunctions of a quantum system. It has been used with great success to describe the hydrogen atom and is extensively applied in atomic, molecular, nuclear and condensed matter physics[3].

Citations:
[1] [Schrödinger Equation | Brilliant Math & Science Wiki](https://brilliant.org/wiki/schrodinger-equation/)
[2] [3.1: The Schrödinger Equation - Chemistry LibreTexts](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_%28LibreTexts%29/03:_The_Schrodinger_Equation_and_a_Particle_in_a_Box/3.01:_The_Schrodinger_Equation)
[3] [Schrodinger equation | Explanation & Facts | Britannica](https://www.britannica.com/science/Schrodinger-equation)
[4] [Properties of the solutions to the Schrödinger equation - Chemistry LibreTexts](https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_%28Physical_and_Theoretical_Chemistry%29/Quantum_Mechanics/03._The_Tools_of_Quantum_Mechanics/Properties_of_the_solutions_to_the_Schrodinger_equation)
[5] [The Schrödinger Equation Explained in 60 Seconds - YouTube](https://www.youtube.com/watch?v=AR23uxZruhE)
"
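A minimal numerical sketch of the time-independent equation from the quote: discretize the second derivative with finite differences and diagonalize the resulting Hamiltonian matrix. The units ($\hbar = m = 1$) and the harmonic potential $V(x) = x^2/2$ are my illustrative choices; that potential's exact eigenvalues are $0.5, 1.5, 2.5, \dots$, which the code should approximately reproduce:

```python
import numpy as np

# Grid for x in [-10, 10]; illustrative units with hbar = m = 1.
n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Second-derivative matrix: psi''_i ~ (psi_{i-1} - 2*psi_i + psi_{i+1}) / dx^2
# (Dirichlet boundaries, i.e. psi = 0 at the grid edges).
lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2

V = 0.5 * x**2                  # harmonic oscillator potential (my assumption)
H = -0.5 * lap + np.diag(V)     # H = -(hbar^2 / 2m) d^2/dx^2 + V(x)

# Eigenvalues of H are the allowed energies E; eigenvectors are the psi(x).
E = np.linalg.eigvalsh(H)[:4]
print(E)  # expected: approximately [0.5, 1.5, 2.5, 3.5]
```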
The operating system of the universe is linear algebra: [The World as a Neural Network. Frequently Asked Questions. - YouTube](https://m.youtube.com/watch?v=VDX4-CUg1yA&fbclid=IwZXh0bgNhZW0CMTEAAR1iKV18k2siqhe39sKk3VtLL4SbZWPmVAfGi1ynRR7AB4u5t6zJsRbiQjU_aem_AVd3e8HYHcN34UeZRwCHzWYZLFTd7T-E7c1Olbn43z59HMTbc4l9ZfsDFL_WEjaUaE3Q6FOM7kWIaR8v6KCXcVap)