I'm aware it's broken right now; I'll fix it! Ideally this just runs in yolo mode in Cursor (or Claude Desktop) without human intervention and creates a "brain" available independently of the LLM version.
A neural memory system for LLMs that can learn and predict sequences while maintaining state through a memory vector. This MCP (Model Context Protocol) server provides tools for Claude 3.7 Sonnet and other LLMs to maintain memory state across interactions.
- Perfect for Cursor: Now that Cursor automatically runs MCP in yolo mode, you can take your hands off the wheel with your LLM's new memory
- Neural Memory Architecture: Transformer-based memory system that can learn and predict sequences
- Memory Management: Efficient tensor operations with automatic memory cleanup
- MCP Integration: Fully compatible with Cursor and other MCP clients
- Text Encoding: Convert text inputs to tensor representations
- Memory Persistence: Save and load memory states between sessions
# Clone the repository
git clone https://github.com/yourusername/titan-memory.git
cd titan-memory
# Install dependencies
npm install
# Build the project
npm run build
# Start the server
npm start
The Titan Memory MCP server provides the following tools:
Get help about available tools.
Parameters:

- `tool` (optional): Specific tool name to get help for
- `category` (optional): Category of tools to explore
- `showExamples` (optional): Include usage examples
- `verbose` (optional): Include detailed descriptions
`init_model`: Initialize the Titan Memory model with a custom configuration.
Parameters:

- `inputDim`: Input dimension size (default: 768)
- `hiddenDim`: Hidden dimension size (default: 512)
- `memoryDim`: Memory dimension size (default: 1024)
- `transformerLayers`: Number of transformer layers (default: 6)
- `numHeads`: Number of attention heads (default: 8)
- `ffDimension`: Feed-forward dimension (default: 2048)
- `dropoutRate`: Dropout rate (default: 0.1)
- `maxSequenceLength`: Maximum sequence length (default: 512)
- `memorySlots`: Number of memory slots (default: 5000)
- `similarityThreshold`: Similarity threshold (default: 0.65)
- `surpriseDecay`: Surprise decay rate (default: 0.9)
- `pruningInterval`: Pruning interval (default: 1000)
- `gradientClip`: Gradient clipping value (default: 1.0)
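Taken together, the defaults above can be collected into a single configuration object. The following sketch shows how a caller might merge overrides onto those defaults before calling `init_model`; the `DEFAULT_CONFIG` object mirrors the documented parameters, but the `mergeConfig` helper is illustrative and not part of the server's API.

```javascript
// Default init_model configuration, mirroring the parameters listed above.
const DEFAULT_CONFIG = {
  inputDim: 768,
  hiddenDim: 512,
  memoryDim: 1024,
  transformerLayers: 6,
  numHeads: 8,
  ffDimension: 2048,
  dropoutRate: 0.1,
  maxSequenceLength: 512,
  memorySlots: 5000,
  similarityThreshold: 0.65,
  surpriseDecay: 0.9,
  pruningInterval: 1000,
  gradientClip: 1.0,
};

// Illustrative helper (not a server API): overlay user overrides on defaults.
function mergeConfig(overrides = {}) {
  return { ...DEFAULT_CONFIG, ...overrides };
}
```

A call like `mergeConfig({ inputDim: 512 })` keeps every other field at its documented default.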
`forward_pass`: Perform a forward pass through the model to get predictions.
Parameters:

- `x`: Input vector or text
- `memoryState` (optional): Memory state to use
`train_step`: Execute a training step to update the model.
Parameters:

- `x_t`: Current input vector or text
- `x_next`: Next input vector or text
`get_memory_state`: Get the current memory state and statistics.
Parameters:

- `type` (optional): Memory type filter
Update memory along a manifold direction.
Parameters:

- `base`: Base memory state
- `velocity`: Update direction
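The manifold update steps the memory state along the given velocity and then maps the result back onto the manifold. The model's actual geometry is internal to the server; the sketch below assumes a unit-sphere manifold with retraction by renormalization, and the `manifoldStep` name and learning-rate parameter are our own.

```javascript
// Illustrative manifold update (assumed geometry: unit sphere).
// Step from `base` along `velocity`, then retract by renormalizing.
function manifoldStep(base, velocity, lr = 0.1) {
  const stepped = base.map((b, i) => b + lr * velocity[i]);
  const norm = Math.sqrt(stepped.reduce((s, v) => s + v * v, 0)) || 1;
  return stepped.map((v) => v / norm);
}
```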
Remove less relevant memories to free up space.
Parameters:

- `threshold`: Pruning threshold (0-1)
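Conceptually, pruning keeps only the memory slots whose relevance meets the threshold. The server's internal slot representation and scoring are not specified here, so the `{ vector, relevance }` slot shape in this sketch is an assumption.

```javascript
// Hypothetical pruning pass: drop slots scored below the threshold.
// The { vector, relevance } slot shape is assumed, not the server's actual format.
function pruneMemories(slots, threshold) {
  return slots.filter((slot) => slot.relevance >= threshold);
}
```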
Save memory state to a file.
Parameters:

- `path`: Checkpoint file path
Load memory state from a file.
Parameters:

- `path`: Checkpoint file path
Reset accumulated gradients to recover from training issues.
Parameters: None
The Titan Memory MCP server is designed to work seamlessly with Claude 3.7 Sonnet in Cursor. Here's an example of how to use it:
// Initialize the model
const result = await callTool("init_model", {
inputDim: 768,
memorySlots: 10000,
transformerLayers: 8,
});
// Perform a forward pass
const { predicted, memoryUpdate } = await callTool("forward_pass", {
x: "const x = 5;", // or vector: [0.1, 0.2, ...]
memoryState: currentMemory,
});
// Train the model
const trainResult = await callTool("train_step", {
x_t: "function hello() {",
x_next: " console.log('world');",
});
// Get memory state
const state = await callTool("get_memory_state", {});
The Titan Memory MCP server includes sophisticated memory management to prevent memory leaks and ensure efficient tensor operations:
- Automatic Cleanup: Periodically cleans up unused tensors
- Memory Encryption: Securely stores memory states
- Tensor Validation: Ensures tensors have the correct shape
- Error Recovery: Handles tensor errors gracefully
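The automatic-cleanup idea can be illustrated with a scope-based disposal pattern in the spirit of TensorFlow.js's `tf.tidy()`: tensors created inside a scope are disposed when the scope ends, unless they are the scope's return value. The minimal `tidy` and `makeTensor` stand-ins below are our own sketch, not the server's implementation.

```javascript
// Simplified scope-based cleanup, in the spirit of tf.tidy():
// tensors tracked during `fn` are disposed when the scope ends,
// except the returned one.
function tidy(fn) {
  const scope = [];
  const track = (tensor) => { scope.push(tensor); return tensor; };
  const result = fn(track);
  for (const t of scope) {
    if (t !== result && !t.disposed) t.dispose();
  }
  return result;
}

// Minimal stand-in for a tensor with manual disposal.
function makeTensor(values) {
  return {
    values,
    disposed: false,
    dispose() { this.disposed = true; },
  };
}
```

Real code would register actual TensorFlow.js tensors with the scope; the pattern is the same.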
The Titan Memory MCP server is built with a modular architecture:
- TitanMemoryServer: Main server class that registers tools and handles requests
- TitanMemoryModel: Neural memory model implementation
- VectorProcessor: Handles input processing and text encoding
- MemoryManager: Manages tensor operations and memory cleanup
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.