The landscape of neuro-symbolic AI

[x.com](https://twitter.com/burny_tech/status/1772832439167050193)

*(Image: DALL·E illustration of "a vast, sprawling landscape that symbolizes the complex and interconnected world of neuro-symbolic AI")*

1. Neural Networks
  1.1. Deep Learning
    1.1.1. Feedforward Neural Networks
      1.1.1.1. Multi-layer Perceptron
      1.1.1.2. Deep Belief Networks
      1.1.1.3. Restricted Boltzmann Machines
    1.1.2. Autoencoders
      1.1.2.1. Undercomplete Autoencoders
      1.1.2.2. Sparse Autoencoders
      1.1.2.3. Denoising Autoencoders
      1.1.2.4. Contractive Autoencoders
    1.1.3. Generative Adversarial Networks
      1.1.3.1. Conditional GANs
      1.1.3.2. Cycle GANs
      1.1.3.3. Wasserstein GANs
      1.1.3.4. Progressive Growing of GANs
    1.1.4. Variational Autoencoders
      1.1.4.1. Conditional VAEs
      1.1.4.2. Hierarchical VAEs
      1.1.4.3. Disentangled VAEs
  1.2. Convolutional Neural Networks
    1.2.1. Image Classification
      1.2.1.1. AlexNet
      1.2.1.2. VGGNet
      1.2.1.3. ResNet
      1.2.1.4. Inception
      1.2.1.5. MobileNet
      1.2.1.6. EfficientNet
    1.2.2. Object Detection
      1.2.2.1. R-CNN
      1.2.2.2. Fast R-CNN
      1.2.2.3. Faster R-CNN
      1.2.2.4. YOLO
      1.2.2.5. SSD
      1.2.2.6. RetinaNet
    1.2.3. Semantic Segmentation
      1.2.3.1. Fully Convolutional Networks
      1.2.3.2. U-Net
      1.2.3.3. DeepLab
      1.2.3.4. Mask R-CNN
  1.3. Recurrent Neural Networks
    1.3.1. Language Modeling
      1.3.1.1. LSTM
      1.3.1.2. GRU
      1.3.1.3. Bidirectional RNNs
      1.3.1.4. Attention Mechanisms
    1.3.2. Machine Translation
      1.3.2.1. Seq2Seq
      1.3.2.2. Transformer
      1.3.2.3. Convolutional Seq2Seq
    1.3.3. Speech Recognition
      1.3.3.1. Connectionist Temporal Classification
      1.3.3.2. Attention-based Models
      1.3.3.3. Listen, Attend and Spell
      1.3.3.4. DeepSpeech
  1.4. Transformers
    1.4.1. Natural Language Processing
      1.4.1.1. BERT
      1.4.1.2. GPT
      1.4.1.3. XLNet
      1.4.1.4. RoBERTa
      1.4.1.5. ELECTRA
    1.4.2. Language Translation
      1.4.2.1. Transformer
      1.4.2.2. Convolutional Seq2Seq
      1.4.2.3. Unsupervised Machine Translation
    1.4.3. Text Summarization
      1.4.3.1. Extractive Summarization
      1.4.3.2. Abstractive Summarization
      1.4.3.3. Pointer-Generator Networks
      1.4.3.4. BART
      1.4.3.5. T5
  1.5. Graph Neural Networks
    1.5.1. Node Classification
      1.5.1.1. Graph Convolutional Networks
      1.5.1.2. GraphSAGE
      1.5.1.3. Graph Attention Networks
      1.5.1.4. Gated Graph Neural Networks
    1.5.2. Link Prediction
      1.5.2.1. Matrix Factorization
      1.5.2.2. Neural Tensor Networks
      1.5.2.3. TransE
      1.5.2.4. RotatE
      1.5.2.5. ComplEx
    1.5.3. Graph Generation
      1.5.3.1. GraphRNN
      1.5.3.2. GraphVAE
      1.5.3.3. MolGAN
      1.5.3.4. GCPN
  1.6. Spiking Neural Networks
    1.6.1. Neuromorphic Computing
      1.6.1.1. TrueNorth
      1.6.1.2. SpiNNaker
      1.6.1.3. Loihi
      1.6.1.4. BrainScaleS
    1.6.2. Brain-inspired Computing
      1.6.2.1. Hierarchical Temporal Memory
      1.6.2.2. Liquid State Machines
      1.6.2.3. Echo State Networks
      1.6.2.4. Neural Engineering Framework
2. Symbolic AI
  2.1. Logic Programming
    2.1.1. Prolog
      2.1.1.1. SWI-Prolog
      2.1.1.2. YAP
      2.1.1.3. XSB
    2.1.2. Answer Set Programming
      2.1.2.1. Clingo
      2.1.2.2. DLV
      2.1.2.3. WASP
  2.2. Rule-based Systems
    2.2.1. Expert Systems
      2.2.1.1. MYCIN
      2.2.1.2. DENDRAL
      2.2.1.3. CLIPS
      2.2.1.4. Jess
    2.2.2. Decision Trees
      2.2.2.1. ID3
      2.2.2.2. C4.5
      2.2.2.3. CART
      2.2.2.4. Random Forest
  2.3. Knowledge Representation
    2.3.1. Ontologies
      2.3.1.1. RDF
      2.3.1.2. OWL
      2.3.1.3. SKOS
    2.3.2. Semantic Networks
      2.3.2.1. ConceptNet
      2.3.2.2. WordNet
      2.3.2.3. FrameNet
    2.3.3. Description Logics
      2.3.3.1. ALC
      2.3.3.2. SHIQ
      2.3.3.3. SROIQ
  2.4. Reasoning
    2.4.1. First-order Logic
      2.4.1.1. Resolution
      2.4.1.2. Tableau
      2.4.1.3. Natural Deduction
    2.4.2. Fuzzy Logic
      2.4.2.1. Fuzzy Sets
      2.4.2.2. Fuzzy Rules
      2.4.2.3. Fuzzy Inference Systems
    2.4.3. Probabilistic Reasoning
      2.4.3.1. Bayesian Networks
      2.4.3.2. Markov Networks
      2.4.3.3. Markov Logic Networks
      2.4.3.4. Probabilistic Soft Logic
  2.5. Planning
    2.5.1. Classical Planning
      2.5.1.1. STRIPS
      2.5.1.2. ADL
      2.5.1.3. PDDL
    2.5.2. Hierarchical Task Network Planning
      2.5.2.1. SHOP
      2.5.2.2. SHOP2
      2.5.2.3. O-Plan
    2.5.3. Partial-order Planning
      2.5.3.1. UCPOP
      2.5.3.2. VHPOP
      2.5.3.3. POPF
3. Neuro-symbolic Integration
  3.1. Neural-symbolic Learning
    3.1.1. Knowledge Distillation
      3.1.1.1. Teacher-Student Networks
      3.1.1.2. Hinton's Dark Knowledge
      3.1.1.3. Born-Again Networks
    3.1.2. Rule Extraction
      3.1.2.1. TREPAN
      3.1.2.2. DeepRED
      3.1.2.3. LIME
      3.1.2.4. Anchors
  3.2. Neural-guided Search
    3.2.1. Guided Exploration
      3.2.1.1. Monte Carlo Tree Search
      3.2.1.2. Guided Policy Search
      3.2.1.3. AlphaGo
    3.2.2. Heuristic Learning
      3.2.2.1. Imitation Learning
      3.2.2.2. Inverse Reinforcement Learning
      3.2.2.3. Meta-Learning
  3.3. Differentiable Reasoning
    3.3.1. Differentiable Theorem Proving
      3.3.1.1. ∂ILP
      3.3.1.2. NLProlog
      3.3.1.3. DiffLog
    3.3.2. Neural Theorem Provers
      3.3.2.1. NTP
      3.3.2.2. NSMN
      3.3.2.3. NLIL
  3.4. Neuro-symbolic Program Synthesis
    3.4.1. Neural Program Induction
      3.4.1.1. Neural Programmer
      3.4.1.2. Neural Programmer-Interpreters
      3.4.1.3. Neural Turing Machines
    3.4.2. Neural Program Synthesis
      3.4.2.1. Neural GPU
      3.4.2.2. Neural RAM
      3.4.2.3. Neural Program Synthesis with Reinforcement Learning
4. Applications
  4.1. Robotics
    4.1.1. Robotic Perception
    4.1.2. Robotic Manipulation
    4.1.3. Robotic Navigation
    4.1.4. Human-Robot Interaction
  4.2. Autonomous Systems
    4.2.1. Self-driving Cars
    4.2.2. Autonomous Drones
    4.2.3. Autonomous Underwater Vehicles
    4.2.4. Autonomous Spacecraft
  4.3. Explainable AI
    4.3.1. Local Interpretable Model-agnostic Explanations (LIME)
    4.3.2. Shapley Additive Explanations (SHAP)
    4.3.3. Layer-wise Relevance Propagation (LRP)
    4.3.4. Concept Activation Vectors (CAV)
  4.4. Interpretable Models
    4.4.1. Decision Trees
    4.4.2. Rule-based Models
    4.4.3. Linear Models
    4.4.4. Attention Mechanisms
  4.5. Commonsense Reasoning
    4.5.1. Qualitative Reasoning
    4.5.2. Analogical Reasoning
    4.5.3. Spatial Reasoning
    4.5.4. Temporal Reasoning
  4.6. Causal Inference
    4.6.1. Causal Discovery
    4.6.2. Causal Inference with Observational Data
    4.6.3. Counterfactual Reasoning
    4.6.4. Causal Reinforcement Learning
  4.7. Transfer Learning
    4.7.1. Domain Adaptation
    4.7.2. Multi-task Learning
    4.7.3. Zero-shot Learning
    4.7.4. Heterogeneous Transfer Learning
  4.8. Few-shot Learning
    4.8.1. One-shot Learning
    4.8.2. Meta-Learning
    4.8.3. Prototypical Networks
    4.8.4. Matching Networks
  4.9. Lifelong Learning
    4.9.1. Continual Learning
    4.9.2. Incremental Learning
    4.9.3. Progressive Learning
    4.9.4. Curriculum Learning
  4.10. Multimodal Learning
    4.10.1. Image-Text Matching
    4.10.2. Video-Text Alignment
    4.10.3. Audio-Visual Speech Recognition
    4.10.4. Multimodal Emotion Recognition
  4.11. Embodied AI
    4.11.1. Embodied Question Answering
    4.11.2. Embodied Visual Recognition
    4.11.3. Embodied Language Grounding
    4.11.4. Embodied Multimodal Learning
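The outline only names techniques, so here is a minimal, hypothetical sketch of how the neural and symbolic columns can meet in practice: a hand-written symbolic rule is relaxed into a fuzzy-logic penalty (in the spirit of 2.4.2 Fuzzy Logic and 3.3 Differentiable Reasoning) and added to the training loss of a tiny logistic model, so gradient descent optimizes data fit and rule satisfaction jointly. The data, the rule, and all names are invented for illustration; this is not any specific system from the list.

```python
# Hypothetical sketch: fuzzy-rule penalty added to a logistic model's loss.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two features; true label y = 1 iff feature 0 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)

def forward(w, b, x):
    # Neural component: a single logistic unit.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def rule_penalty(x, p):
    # Symbolic background knowledge, deliberately imperfect:
    # "IF feature 1 is large THEN the class is 1", relaxed with the
    # product t-norm: penalty = truth("A AND NOT B") = a * (1 - p).
    a = 1.0 / (1.0 + np.exp(-x[:, 1]))   # fuzzy truth of "feature 1 is large"
    return np.mean(a * (1.0 - p))

w, b, lr, lam = np.zeros(2), 0.0, 0.1, 0.5
for step in range(500):
    p = forward(w, b, X)
    a = 1.0 / (1.0 + np.exp(-X[:, 1]))
    # Gradient w.r.t. the logit of: mean cross-entropy + lam * rule penalty.
    grad_logit = (p - y) / len(y) - lam * a * p * (1.0 - p) / len(y)
    w -= lr * (X.T @ grad_logit)
    b -= lr * grad_logit.sum()

print("accuracy:", ((forward(w, b, X) > 0.5) == y.astype(bool)).mean())
print("rule penalty:", rule_penalty(X, forward(w, b, X)))
```

The same pattern, with richer logics and larger networks, is what systems under 2.4.3.4 (Probabilistic Soft Logic) and 3.3 (Differentiable Reasoning) develop much further.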