Research

Theoretical foundations and conceptual explorations in cognitive architectures, neuro-symbolic AI, and Artificial General Intelligence.

Topics: Cognitive Architectures · Neuro-Symbolic AI · OpenCog · Hyperon · MeTTa · AGI

Cognitive Architectures

A cognitive architecture is a unified computational framework that specifies the fixed structures and processes underlying intelligent behaviour. Rather than solving a single task, a cognitive architecture aims to capture the general mechanisms of cognition — perception, memory, reasoning, learning, and action — in a single integrated system.

Why They Matter

Pure deep learning excels at pattern recognition but lacks the structured reasoning, systematic generalisation, and sample efficiency that characterise human intelligence. Cognitive architectures provide the scaffolding that lets specialised subsystems — neural or symbolic — cooperate toward general problem-solving.

Key Examples

  • ACT-R — models human cognition as interacting modules (declarative memory, procedural rules, perceptual-motor buffers).
  • SOAR — uses a universal subgoaling mechanism and chunking to learn from problem-solving episodes.
  • OpenCog / Hyperon — a hypergraph-based architecture targeting AGI through the integration of multiple AI paradigms.
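To make the "fixed structures and processes" idea concrete, here is a deliberately minimal Python sketch of an ACT-R-style control loop. All names (`DeclarativeMemory`, `ProductionRule`, `Architecture`) are invented for illustration, not any real framework's API: the architecture supplies a fixed match-fire cycle, while the task-specific knowledge lives in pluggable chunks and rules.

```python
from dataclasses import dataclass, field

@dataclass
class DeclarativeMemory:
    """Fact store, loosely modelled on ACT-R's declarative module."""
    chunks: dict = field(default_factory=dict)

    def retrieve(self, **slots):
        """Return the first chunk whose slots match the request, or None."""
        for name, chunk in self.chunks.items():
            if all(chunk.get(k) == v for k, v in slots.items()):
                return name, chunk
        return None

@dataclass
class ProductionRule:
    """A condition-action pair, loosely modelled on a procedural production."""
    condition: object  # callable taking the memory
    action: object     # callable mutating the memory

class Architecture:
    """The fixed part: a match-fire control loop over pluggable modules."""
    def __init__(self, memory, rules):
        self.memory, self.rules = memory, rules

    def step(self):
        for rule in self.rules:
            if rule.condition(self.memory):
                rule.action(self.memory)
                return True
        return False  # quiescence: no rule matched

mem = DeclarativeMemory({"f1": {"isa": "inheritance", "child": "Human", "parent": "Mammal"}})
categorise = ProductionRule(
    condition=lambda m: m.retrieve(isa="inheritance", child="Human") is not None,
    action=lambda m: m.chunks.update(
        {"goal": {"isa": "category", "of": "Human", "value": "Mammal"}}),
)
arch = Architecture(mem, [categorise])
arch.step()  # fires the rule, writing a new chunk into declarative memory
```

The point of the sketch is the separation of concerns: swapping in different rules or memories changes the task, while the `step` loop — the architecture proper — stays fixed.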

Neuro-Symbolic AI

Neuro-symbolic AI combines the perceptual power of neural networks with the structured reasoning of symbolic systems. Neural networks handle the grounding problem — mapping raw sensory data to meaningful representations — while symbolic engines perform higher-level inference, planning, and knowledge manipulation.

Limitations of Each Paradigm Alone

Pure Deep Learning

  • Poor systematic generalisation
  • Low sample efficiency
  • Opaque reasoning traces
  • Weak causal reasoning

Pure Symbolic AI

  • Brittle under noisy data
  • Cannot handle raw perception
  • Expensive knowledge engineering
  • Poor scalability

The Hybrid Approach

The key insight is that neural networks can solve the symbol grounding problem — converting pixels, audio, or text into structured symbolic atoms — while a symbolic engine reasons over those atoms using logic, probabilistic inference, or program synthesis. Feedback from the symbolic layer can also guide neural training, creating a tightly coupled learning loop.
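The grounding-plus-reasoning split can be sketched in a few lines of Python. The "neural" layer below is just a confidence-scored lookup table standing in for a trained network, and the symbolic layer is a toy forward-chainer over Horn rules; all names are invented for illustration.

```python
# A toy neuro-symbolic pipeline: a stand-in "neural" scorer grounds raw
# tokens into symbolic atoms, and a symbolic rule base reasons over them.

def ground(raw_tokens, lexicon, threshold=0.5):
    """'Neural' layer stub: map raw input to symbolic atoms above a confidence threshold."""
    atoms = []
    for tok in raw_tokens:
        symbol, score = lexicon.get(tok, (None, 0.0))
        if symbol is not None and score >= threshold:
            atoms.append(symbol)
    return atoms

def infer(atoms, rules):
    """Symbolic layer: forward-chain simple (premises -> conclusion) rules to a fixpoint."""
    known = set(atoms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Grounding table a perception network might have learned: token -> (atom, confidence)
lexicon = {"cat": ("Cat", 0.9), "purring": ("Purring", 0.8)}
rules = [(("Cat",), "Mammal"), (("Mammal",), "Animal")]

atoms = ground(["cat", "purring"], lexicon)
conclusions = infer(atoms, rules)  # multi-hop: Cat -> Mammal -> Animal
```

In a real system, the feedback arrow would close the loop: conclusions the symbolic layer rejects (or confirms) become training signal that adjusts the network behind `ground`.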

Neuro-Symbolic Data Flow

flowchart LR
  subgraph Perception["Neural Layer (Perception)"]
    Raw[Raw Sensory Data]
    NN[Neural Network]
    Repr[Learned Representations]
    Raw --> NN --> Repr
  end

  subgraph Grounding["Symbol Grounding"]
    Sym[Symbolic Atoms]
  end

  subgraph Reasoning["Symbolic Layer (Reasoning)"]
    KB[Knowledge Base\nAtomSpace]
    Inf[Inference Engine\nPLN / MeTTa]
    Out[Reasoned Conclusions]
    KB --> Inf --> Out
  end

  Repr -->|extract symbols| Sym
  Sym --> KB

  Out -->|feedback| NN

OpenCog

OpenCog is an open-source AGI framework created by Ben Goertzel. Its central data structure is the AtomSpace — a typed hypergraph where both knowledge and programs are represented as atoms and links. This unified representation allows the system to reason about its own knowledge and, in principle, about its own reasoning processes.
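A minimal Python sketch conveys what "typed hypergraph" means in practice. The class and method names here are invented for illustration, not the real AtomSpace API: nodes are (type, name) pairs, links are typed tuples over other atoms, and crucially links can point at links — which is what lets the system represent knowledge about its own knowledge.

```python
# A toy typed-hypergraph store in the spirit of the AtomSpace (illustrative only).

class AtomSpace:
    def __init__(self):
        self.atoms = set()

    def node(self, atom_type, name):
        atom = (atom_type, name)
        self.atoms.add(atom)
        return atom

    def link(self, link_type, *outgoing):
        # The outgoing set may contain nodes OR other links: a hypergraph.
        atom = (link_type, outgoing)
        self.atoms.add(atom)
        return atom

    def query(self, link_type, position, value):
        """Find links of a given type with `value` at index `position`."""
        return [a for a in self.atoms
                if a[0] == link_type
                and isinstance(a[1], tuple)
                and a[1][position] == value]

atomspace = AtomSpace()
human = atomspace.node("Concept", "Human")
mammal = atomspace.node("Concept", "Mammal")
inh = atomspace.link("Inheritance", human, mammal)
# A link about a link: the system representing a fact about its own knowledge.
meta = atomspace.link("Evaluation", atomspace.node("Predicate", "learned"), inh)
```

Because `meta` refers to `inh` just as `inh` refers to its nodes, queries and rules can range over object-level and meta-level knowledge uniformly — the property the paragraph above describes.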

Core Components

  • AtomSpace — the shared working memory; a hypergraph of typed nodes and links that stores facts, rules, and programs.
  • PLN (Probabilistic Logic Networks) — a term-logic inference engine that handles uncertainty through truth values (strength + confidence).
  • MOSES (Meta-Optimizing Semantic Evolutionary Search) — a program-evolution algorithm that learns compact programs from data.
  • ECAN (Economic Attention Networks) — allocates limited cognitive resources (attention) across atoms using spreading activation.
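PLN's (strength, confidence) truth values can be illustrated with a simplified sketch. The deduction rule below uses a bare independence assumption; PLN's actual deduction formula is more elaborate and also accounts for the term probabilities P(A), P(B), P(C). The revision rule merges two estimates of the same statement, weighting by confidence.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # estimated probability of the statement
    confidence: float  # amount of evidence behind the estimate, in [0, 1]

def deduction(ab: TruthValue, bc: TruthValue) -> TruthValue:
    """Chain A->B and B->C into A->C (simplified independence-based rule)."""
    return TruthValue(
        strength=ab.strength * bc.strength,
        confidence=ab.confidence * bc.confidence,
    )

def revision(t1: TruthValue, t2: TruthValue) -> TruthValue:
    """Merge two estimates of the same statement, weighting by confidence."""
    total = t1.confidence + t2.confidence
    s = (t1.strength * t1.confidence + t2.strength * t2.confidence) / total
    return TruthValue(strength=s, confidence=min(1.0, total))

human_mammal = TruthValue(0.99, 0.9)   # Inheritance Human Mammal
mammal_animal = TruthValue(0.99, 0.9)  # Inheritance Mammal Animal
human_animal = deduction(human_mammal, mammal_animal)
```

Note how confidence decays under chained deduction while strength stays high: multi-hop conclusions are plausible but less certain, which is exactly the behaviour a probabilistic inference engine needs.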

Hyperon

Hyperon is the next-generation rewrite of OpenCog, designed for scalability, modularity, and formal correctness. Where the original OpenCog was a monolithic C++ system, Hyperon is built around a minimal, well-defined core — the MeTTa interpreter — with all higher-level cognitive algorithms expressed as MeTTa programs running inside the AtomSpace.

Key Improvements over OpenCog

  • Formal semantics: MeTTa has a well-defined operational semantics, making it easier to reason about program correctness.
  • Modularity: cognitive algorithms (PLN, MOSES, ECAN) are implemented as MeTTa libraries rather than hard-coded C++ modules.
  • Distributed AtomSpace: designed to scale across multiple nodes, enabling large-scale knowledge graphs.
  • Interoperability: clean Python and Rust bindings make it easier to integrate with modern ML tooling (PyTorch, HuggingFace).

OpenCog Hyperon Architecture

graph TD
  subgraph Hyperon["OpenCog Hyperon"]
    AS[AtomSpace\nHypergraph Store]
    MeTTa[MeTTa Language\nMeta Type Talk]
    ECAN[ECAN\nAttention Allocation]
    PLN[PLN\nProbabilistic Logic]
    MOSES[MOSES\nProgram Evolution]
  end

  subgraph Neural["Neural Subsystems"]
    LLM[Large Language Model]
    CV[Computer Vision]
    Embed[Embedding Models]
  end

  LLM -->|symbol grounding| AS
  CV -->|perception| AS
  Embed -->|semantic vectors| AS
  AS <-->|read/write| MeTTa
  MeTTa --> PLN
  MeTTa --> MOSES
  ECAN -->|resource allocation| AS

MeTTa — Meta Type Talk

MeTTa (Meta Type Talk) is the programming language at the heart of Hyperon. It is a functional, pattern-matching language where code and data share the same representation — both are atoms in the AtomSpace. This homoiconicity enables a form of reflective meta-learning: a MeTTa program can inspect, modify, and synthesise other MeTTa programs at runtime.

Core Concepts

  • Atoms — the universal building block; everything (symbols, variables, expressions, types) is an atom.
  • Pattern matching — the primary evaluation mechanism; expressions are reduced by matching against rewrite rules stored in the AtomSpace.
  • Dependent types — MeTTa supports a rich type system that can express constraints on values, enabling type-safe knowledge representation.
  • Non-determinism — a single expression can match multiple rules, returning a set of results; this naturally models uncertainty and search.
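The pattern-matching and non-determinism points can be mimicked in a short Python sketch (names invented for illustration; this is not MeTTa's implementation). Patterns are tuples with `$`-prefixed variables; evaluating an expression tries every rewrite rule and returns all results, just as a MeTTa query yields a set of answers.

```python
# Toy non-deterministic reduction: one expression may match several rules.

def unify(pattern, term, bindings):
    """Match a pattern (tuples + '$'-prefixed variables) against a ground term."""
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in bindings:
            return bindings if bindings[pattern] == term else None
        return {**bindings, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            bindings = unify(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(template, bindings):
    """Instantiate a template with the variable bindings from a match."""
    if isinstance(template, str):
        return bindings.get(template, template)
    return tuple(substitute(t, bindings) for t in template)

def evaluate(expr, rules):
    """Return ALL results: every rule whose head unifies contributes one answer."""
    results = []
    for head, body in rules:
        b = unify(head, expr, {})
        if b is not None:
            results.append(substitute(body, b))
    return results

rules = [
    (("parent", "Sam"), "Ann"),                 # two rules match the same query:
    (("parent", "Sam"), "Bob"),                 # evaluation is non-deterministic
    (("double", "$x"), ("pair", "$x", "$x")),   # a rule with a variable
]

print(evaluate(("parent", "Sam"), rules))   # ['Ann', 'Bob']
print(evaluate(("double", "Sam"), rules))   # [('pair', 'Sam', 'Sam')]
```

Returning a list of every successful reduction, rather than the first one, is what makes search and uncertainty fall out of the evaluator itself instead of requiring a separate mechanism.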

Example: Transitive Inheritance

; Facts
(Inheritance Human Mammal)
(Inheritance Mammal Animal)

; Transitivity rule: a two-hop ancestor of $x
(= (ancestor $x)
   (match &self (Inheritance $x $y)
     (match &self (Inheritance $y $z)
       $z)))

; Query: what is a transitive ancestor of Human?
!(ancestor Human)
; => Animal
This snippet illustrates how MeTTa unifies facts and rules in a single AtomSpace. The match primitive queries the AtomSpace for patterns, and rewrite rules define how expressions reduce — enabling multi-hop inference without a separate query engine.

My Work in This Space

During my internship at iCogLabs, I worked on integrating transformer-based language models with the OpenCog AtomSpace. A fine-tuned LLM extracted structured knowledge from text and populated the AtomSpace; MeTTa's reasoning capabilities then answered questions requiring multi-hop inference, outperforming pure retrieval-augmented generation on knowledge-intensive benchmarks.