3 changes: 3 additions & 0 deletions CMakeLists.txt
@@ -64,6 +64,9 @@ INCLUDE(TorchExports)
# Torch libraries
ADD_SUBDIRECTORY(lib)

# OpenCog Multi-Dimensional Tensor Inference Engine
ADD_SUBDIRECTORY(opencog)

CONFIGURE_FILE(paths.lua.in "${CMAKE_CURRENT_BINARY_DIR}/paths.lua")

INCLUDE_DIRECTORIES(BEFORE "${LUA_INCDIR}")
24 changes: 18 additions & 6 deletions README.md
@@ -18,14 +18,26 @@ Torch7 community support can be found at the following locations. As of 2019, th
<a name="torch.reference.dok"></a>
# Torch Package Reference Manual #

__Torch__ is the main package in [Torch7](http://torch.ch) where data
structures for multi-dimensional tensors and mathematical operations
over these are defined. Additionally, it provides many utilities for
accessing files, serializing objects of arbitrary types and other
useful utilities.
__TorCog__ is an enhanced version of [Torch7](http://torch.ch) that implements
**OpenCog as a Multi-Dimensional Tensor Inference Engine**. It provides the
original Torch tensor library enhanced with a complete OpenCog cognitive
architecture using tensors for knowledge representation, inference, and
attention allocation.

<a name="torch.overview.dok"></a>
## Torch Packages ##
## TorCog Packages ##

### OpenCog Multi-Dimensional Tensor Inference Engine

* **[OpenCog Tensor Engine](opencog/README.md)** - Complete OpenCog cognitive architecture implementation using multi-dimensional tensors
* [Tensor-based Atoms](opencog/atoms/) - Knowledge representation with high-dimensional embeddings
* [Inference Engine](opencog/inference/) - Rule-based reasoning with neural components
* [Attention Allocation](opencog/attention/) - Neural attention mechanism with tensor dynamics
* Pattern matching and similarity using tensor operations
* Probabilistic reasoning with uncertainty propagation
* Hebbian learning and neural adaptation

### Original Torch Components

* Tensor Library
* [Tensor](doc/tensor.md) defines the _all powerful_ tensor object that provides multi-dimensional numerical arrays with type templating.
72 changes: 72 additions & 0 deletions opencog/CMakeLists.txt
@@ -0,0 +1,72 @@
# OpenCog Multi-Dimensional Tensor Inference Engine
CMAKE_MINIMUM_REQUIRED(VERSION 2.8 FATAL_ERROR)

SET(CMAKE_MODULE_PATH
"${CMAKE_CURRENT_SOURCE_DIR}/../cmake"
"${CMAKE_MODULE_PATH}")

# Source files for OpenCog components
SET(OPENCOG_ATOMS_SRC
atoms/opencog_atoms.c
)

SET(OPENCOG_INFERENCE_SRC
inference/inference_engine.c
)

SET(OPENCOG_ATTENTION_SRC
attention/attention_allocation.c
)

SET(OPENCOG_CORE_SRC
opencog.c
)

SET(OPENCOG_TEST_SRC
test_opencog.c
)

# Tensor wrapper source
SET(OPENCOG_TENSOR_SRC
tensor_wrapper.c
)

# All OpenCog sources
SET(OPENCOG_ALL_SRC
${OPENCOG_TENSOR_SRC}
${OPENCOG_ATOMS_SRC}
${OPENCOG_INFERENCE_SRC}
${OPENCOG_ATTENTION_SRC}
${OPENCOG_CORE_SRC}
)

# Include directories
# Include current directory for tensor wrapper
INCLUDE_DIRECTORIES(BEFORE "${CMAKE_CURRENT_SOURCE_DIR}")

# Create OpenCog library
ADD_LIBRARY(opencog SHARED ${OPENCOG_ALL_SRC})

# Link with math library (no longer need TH)
# TARGET_LINK_LIBRARIES(opencog TH)

# Math library for mathematical functions
TARGET_LINK_LIBRARIES(opencog m)

# Create test executable
ADD_EXECUTABLE(test_opencog ${OPENCOG_TEST_SRC})
TARGET_LINK_LIBRARIES(test_opencog opencog m)

# Create demo executable
ADD_EXECUTABLE(example_opencog_demo example_opencog_demo.c)
TARGET_LINK_LIBRARIES(example_opencog_demo opencog m)

# Install OpenCog library and headers
INSTALL(TARGETS opencog DESTINATION lib)
INSTALL(FILES opencog.h DESTINATION include/opencog)
INSTALL(FILES atoms/opencog_atoms.h DESTINATION include/opencog/atoms)
INSTALL(FILES inference/inference_engine.h DESTINATION include/opencog/inference)
INSTALL(FILES attention/attention_allocation.h DESTINATION include/opencog/attention)

# Install test and demo executables
INSTALL(TARGETS test_opencog example_opencog_demo DESTINATION bin)
265 changes: 265 additions & 0 deletions opencog/README.md
@@ -0,0 +1,265 @@
# OpenCog Multi-Dimensional Tensor Inference Engine

This directory contains the implementation of OpenCog as a multi-dimensional tensor inference engine, built on top of the Torch tensor library. This implementation provides a tensor-based cognitive architecture that combines symbolic reasoning with neural computation.

## Overview

The OpenCog Tensor Inference Engine implements the core components of the OpenCog cognitive architecture using multi-dimensional tensors for efficient computation:

- **Tensor-based Atoms**: All knowledge is represented as atoms with high-dimensional tensor embeddings
- **Truth Value Computations**: Probabilistic truth values computed using tensor operations
- **Attention Allocation**: Neural attention mechanism with tensor-based dynamics
- **Pattern Matching**: Efficient pattern matching using tensor similarity computations
- **Inference Engine**: Rule-based reasoning with neural network components
- **Learning System**: Hebbian learning and neural adaptation using tensor updates

## Architecture

### Core Components

1. **Atoms and AtomSpace** (`atoms/`)
- `OCAtom`: Basic knowledge representation with tensor embeddings
- `OCTruthValue`: Probabilistic truth values using tensors
- `OCAtomSpace`: Container for all atoms with relationship matrices

2. **Inference Engine** (`inference/`)
- `OCInferenceEngine`: Main reasoning engine
- `OCInferenceRule`: Configurable inference rules
- `OCPattern`: Pattern matching system
- Built-in rules: deduction, induction, abduction, revision

3. **Attention Allocation** (`attention/`)
- `OCAttentionBank`: Central attention management system
- `OCAttentionAgent`: Specialized attention processing agents
- Attention dynamics: spreading, decay, competition, novelty detection

4. **Main API** (`opencog.h/c`)
- High-level interface for the complete system
- Configuration management
- Statistics and monitoring
- Error handling

### Key Features

#### Tensor-Based Representation
- Each atom has a high-dimensional embedding vector (default: 128 dimensions)
- Truth values are represented as tensors enabling batch operations
- Attention values (STI, LTI, VLTI) stored as tensor components
- Relationship matrices for efficient graph operations
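As a rough illustration of this layout, the record below packs an embedding vector, a probabilistic truth value, and the three attention components into one plain-C struct. This is a hypothetical sketch only; the field names and flat-array embedding are assumptions, not the actual `OCAtom` definition.

```c
#include <string.h>

#define OC_EMBED_DIM 128  /* default embedding size quoted above */

/* Hypothetical layout of a tensor-backed atom; illustrative only. */
typedef struct {
    float embedding[OC_EMBED_DIM]; /* high-dimensional embedding vector */
    float strength;                /* truth value: probability estimate */
    float confidence;              /* truth value: amount of evidence */
    float sti, lti, vlti;          /* attention values stored alongside */
} SketchAtom;

/* Zero-initialise an atom, then set its truth value. */
void sketch_atom_init(SketchAtom *a, float strength, float confidence) {
    memset(a, 0, sizeof *a);
    a->strength = strength;
    a->confidence = confidence;
}
```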

#### Neural-Symbolic Integration
- Symbolic atoms with neural embeddings
- Truth value propagation using tensor mathematics
- Neural attention mechanisms
- Hebbian learning for association strengthening
- Backpropagation for embedding optimization
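The Hebbian step listed above can be sketched as a single weight update: an association is strengthened in proportion to the co-activation of its two endpoints. The function below is an illustrative formula with an assumed [0, 1] weight range, not the engine's exact rule.

```c
/* Hebbian association update, a minimal sketch: strengthen the link
 * weight by lr * pre * post, then clamp. Constants are illustrative. */
float hebbian_update(float weight, float pre_act, float post_act, float lr) {
    weight += lr * pre_act * post_act;
    if (weight > 1.0f) weight = 1.0f;   /* keep weights in [0, 1] */
    if (weight < 0.0f) weight = 0.0f;
    return weight;
}
```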

#### Attention Allocation System
- Short-Term Importance (STI) for working memory
- Long-Term Importance (LTI) for episodic memory
- Very Long-Term Importance (VLTI) for semantic memory
- Attention spreading through atom connections
- Economic model with rent collection and resource allocation
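The rent-collection side of the economic model can be sketched as: each cycle, every atom pays rent out of its STI, and the collected rent flows back to a central fund for redistribution. Function and parameter names here are hypothetical, not the `OCAttentionBank` API.

```c
/* Minimal sketch of rent collection over an STI array: each atom pays
 * up to `rent` from its non-negative STI; the total is returned to a
 * central fund. Illustrative only. */
float collect_rent(float *sti, int n_atoms, float rent) {
    float fund = 0.0f;
    for (int i = 0; i < n_atoms; i++) {
        float paid = sti[i] > rent ? rent : (sti[i] > 0.0f ? sti[i] : 0.0f);
        sti[i] -= paid;   /* atom loses what it pays */
        fund += paid;     /* fund gains it for redistribution */
    }
    return fund;
}
```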

#### Inference Capabilities
- Forward chaining inference
- Backward chaining (goal-directed reasoning)
- Pattern matching with tensor similarity
- Probabilistic reasoning with uncertainty propagation
- Temporal reasoning support
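Probabilistic deduction of the kind listed above is commonly computed with the Probabilistic Logic Networks deduction strength formula; the sketch below follows that formula for chaining A→B and B→C into A→C, though the engine's built-in deduction rule may differ in its exact form.

```c
/* PLN-style deduction strength, a sketch: given strengths for A->B,
 * B->C, and the term probabilities of B and C, estimate A->C. */
float deduction_strength(float sAB, float sBC, float sB, float sC) {
    if (sB >= 0.9999f)   /* guard the division when P(B) ~ 1 */
        return sBC;
    return sAB * sBC + (1.0f - sAB) * (sC - sB * sBC) / (1.0f - sB);
}
```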

## Usage Examples

### Basic System Setup

```c
#include "opencog/opencog.h"

// Create OpenCog system with default configuration
OpenCog *opencog = OpenCog_new(NULL);

// Add some knowledge
OCAtom *cat = OpenCog_addConcept(opencog, "cat", 0.9f, 0.8f);
OCAtom *animal = OpenCog_addConcept(opencog, "animal", 0.95f, 0.9f);

// Create relationship
OpenCog_associateAtoms(opencog, cat, animal, 0.85f);

// Run inference
OpenCog_forwardChain(opencog, 10);

// Clean up
OpenCog_free(opencog);
```

### Attention and Learning

```c
// Enable attention and learning
OCConfig *config = OpenCog_getDefaultConfig();
config->enable_attention = 1;
config->enable_hebbian_learning = 1;

OpenCog *opencog = OpenCog_new(config);

// Add concepts
OCAtom *red = OpenCog_addConcept(opencog, "red", 0.8f, 0.7f);
OCAtom *apple = OpenCog_addConcept(opencog, "apple", 0.9f, 0.8f);

// Boost attention to activate learning
OpenCog_boostAttention(opencog, red, 20.0f);
OpenCog_boostAttention(opencog, apple, 15.0f);

// Run attention cycles
for (int i = 0; i < 10; i++) {
OpenCog_stepAttention(opencog);
}

// Check attentional focus
int focus_count;
OCAtom **focus = OpenCog_getAttentionalFocus(opencog, &focus_count);
```

### Pattern Matching and Similarity

```c
// Create atoms
OCAtom *dog = OpenCog_addConcept(opencog, "dog", 0.85f, 0.75f);
OCAtom *wolf = OpenCog_addConcept(opencog, "wolf", 0.8f, 0.7f);

// Compute similarity using tensor embeddings
float similarity = OpenCog_similarity(opencog, dog, wolf);
printf("Similarity between dog and wolf: %.3f\n", similarity);

// Find similar atoms
int similar_count;
OCAtom **similar = OpenCog_findSimilarAtoms(opencog, dog, 0.5f, &similar_count);
```

## Building

The OpenCog tensor engine is built as part of the main TorCog project:

```bash
mkdir build
cd build
cmake ..
make opencog
```

To run tests:
```bash
./opencog/test_opencog
```

## Configuration

The system can be configured using `OCConfig`:

```c
OCConfig *config = OpenCog_getDefaultConfig();

// Adjust system limits
config->max_atoms = 50000;
config->max_inference_steps = 200;
config->embedding_dimensions = 256;

// Control components
config->enable_attention = 1;
config->enable_forgetting = 1;
config->enable_hebbian_learning = 1;

// Learning parameters
config->learning_rate = 0.05f;
config->confidence_threshold = 0.6f;
config->attention_threshold = 0.2f;

OpenCog *opencog = OpenCog_new(config);
```

## Performance Considerations

### Memory Usage
- Each atom requires ~1KB base memory + embedding size
- Default embedding: 128 dimensions × 4 bytes = 512 bytes per atom
- Attention matrices: O(N²) where N is max_atoms
- Inference matrices: O(N²) space for relationship storage
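The per-atom figures above amount to a simple back-of-envelope calculation: roughly 1 KB of base storage plus 4 bytes per embedding dimension. A sketch of that estimate, with the 1 KB base as an assumption from the text rather than a measured value:

```c
#include <stddef.h>

/* Rough per-atom memory estimate: ~1KB base + 4 bytes per embedding
 * dimension (e.g. 128 dims -> 1024 + 512 = 1536 bytes). */
size_t atom_memory_bytes(int embedding_dim) {
    const size_t base = 1024;  /* assumed ~1KB base per atom */
    return base + (size_t)embedding_dim * sizeof(float);
}
```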

### Computational Complexity
- Atom similarity: O(D) where D is embedding dimension
- Attention spreading: O(N×E) where E is average edges per atom
- Forward chaining: O(R×N²) where R is number of rules
- Pattern matching: O(N×D) for tensor-based matching

### Optimization Tips
- Use smaller embedding dimensions for memory-constrained environments
- Limit max_atoms based on available RAM
- Adjust attention_threshold to control focus size
- Use sparse tensors for large, mostly-zero relationship matrices

## Advanced Features

### Custom Inference Rules
```c
OCInferenceRule *custom_rule = OCInferenceRule_new(OC_DEDUCTION_RULE, "my_rule");
// Set custom truth value function
custom_rule->tv_function = my_custom_tv_function;
OCInferenceEngine_addRule(opencog->inference, custom_rule);
```

### Neural Training
```c
// Train atom embeddings
THFloatTensor *target_embedding = THFloatTensor_newWithSize1d(128);
// ... set target values ...
OpenCog_trainEmbedding(opencog, atom, target_embedding);

// Train attention networks
OpenCog_trainNeuralComponent(opencog, "attention_network", inputs, targets);
```

### Monitoring and Analysis
```c
// Get system statistics
char stats[1000];
OpenCog_getStats(opencog, stats, sizeof(stats));

// Export embeddings for analysis
THTensor *embeddings = OpenCog_getAtomEmbeddings(opencog);

// Get attention distribution
THTensor *attention_dist = OCAttentionBank_getAttentionDistribution(opencog->attention);
```

## Integration with Other Systems

The OpenCog tensor engine can be integrated with:

- **Neural Networks**: Export embeddings for deep learning models
- **Symbolic Reasoners**: Import/export logical statements
- **Databases**: Serialize atomspace to persistent storage
- **Robotics**: Real-time sensor integration and motor control
- **NLP Systems**: Language understanding and generation

## Future Extensions

Planned enhancements include:

- GPU acceleration using CUDA tensors
- Distributed atomspace across multiple nodes
- Quantum-inspired tensor operations
- Evolutionary optimization of inference rules
- Integration with large language models
- Real-time streaming data processing

## References

- OpenCog Framework: https://opencog.org/
- Torch Tensor Library: https://github.com/torch/torch7
- "The OpenCog Cognitive Architecture" by Ben Goertzel
- "Probabilistic Logic Networks" by Ben Goertzel et al.
- "Attention Allocation in OpenCog" by Nil Geisweiller

## License

This implementation is released under the same license as the parent TorCog project.