This is a complete MLX rewrite of Neural Circuit Policies by Sydney Renee for The Solace Project. This implementation is specifically optimized for Apple Silicon using Apple's MLX framework, providing native GPU acceleration on M-series chips.
Neural Circuit Policies (NCPs) are designed sparse recurrent neural networks loosely inspired by the nervous system of the nematode C. elegans, introduced in the following papers:
- Neural Circuit Policies Enabling Auditable Autonomy (Open Access)
- Closed-form continuous-time neural networks (Open Access)
This project is based on the original ncps library by Mathias Lechner and contributors, which provided PyTorch and TensorFlow implementations. This MLX version is a ground-up rewrite specifically for Apple Silicon.
## Quick Start

```python
import mlx.core as mx
from ncps import CfC
from ncps.wirings import AutoNCP

# Create a CfC model with 20 input features and 50 hidden units
rnn = CfC(input_size=20, units=50)

# Or use structured NCP wiring
wiring = AutoNCP(28, 4)  # 28 neurons, 4 outputs
rnn = CfC(input_size=20, units=wiring)

# Forward pass
x = mx.random.normal((2, 3, 20))  # (batch, time, features)
output, state = rnn(x)
```

## Requirements

- Apple Silicon Mac (M1, M2, M3, M4, or later)
- Python 3.8+
- MLX 0.1.0+
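Once installation (next section) is complete, a one-line check confirms that MLX sees the Apple Silicon GPU; `mx.default_device()` is part of MLX's public API:

```python
import mlx.core as mx

# On Apple Silicon this should report the Metal GPU, e.g. Device(gpu, 0)
print(mx.default_device())
```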
## Installation

```bash
# Clone the repository
git clone https://github.com/SolaceHarmony/ncps-mlx.git
cd ncps-mlx

# Install in development mode
pip install -e .

# Or install with optional dependencies
pip install -e .[viz]  # Includes matplotlib and networkx for visualization
```

MLX itself is available from PyPI:

```bash
pip install mlx
```

## Documentation

Full documentation is available at ReadTheDocs (coming soon) or can be built locally:

```bash
cd docs
pip install -r ../.readthedocs-requirements.txt
make html
# Open docs/_build/html/index.html in your browser
```

The documentation includes:
- Quickstart Guide - Get started with NCPs and MLX
- API Reference - Complete API documentation
- Examples - Detailed usage examples and tutorials
## Examples

Check out the examples directory for complete working examples:
- `mlx_smnist_training.py` - Sequential MNIST classification with LTC
- `mlx_cfc_regression.py` - Time series regression with CfC
- `currency_predictor_mlx.py` - Currency prediction example
- `maze_train_mlx.py` - Maze navigation training
- `mlx_cell_comparison.py` - Compare different RNN cell types
```bash
# Train on Sequential MNIST
python examples/mlx_smnist_training.py --epochs 200 --hidden-size 64

# Time series regression
python examples/mlx_cfc_regression.py
```

## Usage

This package provides MLX implementations of liquid time-constant (LTC) and closed-form continuous-time (CfC) neural networks as `mlx.nn.Module` layers.

```python
from ncps import CfC, LTC, CTRNN, CTGRU
from ncps.wirings import AutoNCP, FullyConnected
# Fully-connected models
input_size = 20
units = 28 # 28 neurons
rnn = CfC(input_size, units)
rnn = LTC(input_size, units)
rnn = CTRNN(input_size, units)
rnn = CTGRU(input_size, units)
```

The key innovation of NCPs is their structured wiring diagrams, inspired by biological neural circuits. You can use predefined wiring patterns:

```python
from ncps import CfC, LTC
from ncps.wirings import AutoNCP, FullyConnected
# AutoNCP: Automatically generates a structured NCP wiring
wiring = AutoNCP(28, 4) # 28 neurons, 4 outputs
input_size = 20
rnn = CfC(input_size, wiring)
rnn = LTC(input_size, wiring)
```
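Wirings can be inspected before being handed to a model. A minimal sketch, assuming this port keeps the original ncps library's `Wiring` attributes (`units`, `output_dim`) and its `draw_graph` helper (the latter needs the `[viz]` extras, matplotlib and networkx):

```python
import matplotlib.pyplot as plt
from ncps.wirings import AutoNCP

wiring = AutoNCP(28, 4)      # 28 neurons, 4 motor (output) neurons
wiring.build(input_size=20)  # attach 20 sensory inputs

print(wiring.units)       # 28
print(wiring.output_dim)  # 4

# Draw the structured connectivity graph (assumed helper from the original ncps)
wiring.draw_graph(draw_labels=True)
plt.show()
```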
## Training

Training follows the standard MLX pattern: define a loss function, wrap it with `nn.value_and_grad`, and update with an optimizer:

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim
from ncps import LTC
from ncps.wirings import FullyConnected
# Create model
wiring = FullyConnected(units=64, output_dim=64)
wiring.build(input_size=20)
model = LTC(input_size=20, units=wiring, return_sequences=True)

# Dummy training data: (batch, time, features) in, (batch, time, outputs) out
x_train = mx.random.normal((8, 50, 20))
y_train = mx.random.normal((8, 50, 64))

# Define loss function
def loss_fn(model, x, y):
    outputs, _ = model(x)
    return mx.mean((outputs - y) ** 2)

# Training loop
optimizer = optim.Adam(learning_rate=1e-3)
value_and_grad_fn = nn.value_and_grad(model, loss_fn)

for epoch in range(100):
    loss, grads = value_and_grad_fn(model, x_train, y_train)
    optimizer.update(model, grads)
    mx.eval(model.parameters(), optimizer.state)
```

## Differences from the Original Implementation

This MLX implementation differs from the original PyTorch/TensorFlow version:
- MLX-Native Operations: All operations use MLX primitives for optimal Apple Silicon performance
- Unified API: Single, consistent API instead of separate torch/tf modules
- Direct Import: `from ncps import LTC, CfC` (no need for `ncps.mlx`)
- Batch-First by Default: Follows MLX conventions with `(batch, time, features)` ordering
- Apple ML Stack: Built directly on MLX, Apple's machine-learning framework designed for Apple Silicon
- Unified Memory: Takes advantage of Apple Silicon's unified memory architecture
- Metal Backend: Uses Metal for GPU acceleration on M-series chips
- Simpler Initialization: More intuitive parameter naming
- Better Type Hints: Full type annotations for better IDE support
- State Management: Cleaner hidden state handling
- Model Checkpointing: Native support for MLX weight serialization (both sketched below)
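A minimal sketch of those last two points, assuming NCP layers accept an explicit hidden state (as in the quickstart) and expose the standard `mlx.nn.Module` serialization methods `save_weights`/`load_weights`:

```python
import mlx.core as mx
from ncps import CfC

model = CfC(input_size=20, units=50)

# State management: carry the hidden state across sequence chunks
x1 = mx.random.normal((2, 3, 20))
x2 = mx.random.normal((2, 3, 20))
out1, state = model(x1)         # initial state defaults to zeros
out2, state = model(x2, state)  # resume from the previous hidden state

# Checkpointing: MLX-native weight serialization
model.save_weights("cfc.safetensors")
restored = CfC(input_size=20, units=50)  # must match the saved architecture
restored.load_weights("cfc.safetensors")
```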
## Building Custom Models

NCP layers compose with other `mlx.nn` modules like any other MLX layer:

```python
import mlx.core as mx
import mlx.nn as nn
from ncps import CfC
from ncps.wirings import AutoNCP
class SequenceClassifier(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        wiring = AutoNCP(hidden_dim, num_classes)
        self.rnn = CfC(input_size=input_dim, units=wiring, return_sequences=False)
        self.output = nn.Linear(self.rnn.output_size, num_classes)

    def __call__(self, x):
        features, _ = self.rnn(x)
        return self.output(features)

# Create and use model
model = SequenceClassifier(input_dim=20, hidden_dim=64, num_classes=10)
x = mx.random.normal((32, 100, 20))  # (batch, time, features)
logits = model(x)
```
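Training this classifier follows the same pattern as the loop above; a short sketch continuing the example with dummy labels (`nn.losses.cross_entropy` and `nn.value_and_grad` are standard MLX APIs):

```python
import mlx.optimizers as optim

def loss_fn(model, x, y):
    # cross_entropy returns per-example losses; reduce to a scalar
    return mx.mean(nn.losses.cross_entropy(model(x), y))

optimizer = optim.Adam(learning_rate=1e-3)
step_fn = nn.value_and_grad(model, loss_fn)

y = mx.random.randint(0, 10, (32,))  # dummy integer class labels
loss, grads = step_fn(model, x, y)
optimizer.update(model, grads)
mx.eval(model.parameters(), optimizer.state)
```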
## Features

- Optimized for Apple Silicon: Native MLX implementation for M1/M2/M3/M4 chips
- Multiple RNN Architectures: LTC, CfC, CTRNN, CTGRU, and more
- Flexible Wiring: Support for structured NCP wirings and fully-connected layers
- Production Ready: Includes model checkpointing, state management, and profiling
- Type Safe: Full type annotations for better development experience
- Easy Training: Compatible with standard MLX training patterns
- Extensible: Easy to customize and extend for research
## Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## Citation

If you use this software in your research, please cite both the original NCP papers and this implementation:

```bibtex
@article{lechner2020neural,
title={Neural circuit policies enabling auditable autonomy},
author={Lechner, Mathias and Hasani, Ramin and Amini, Alexander and Henzinger, Thomas A and Rus, Daniela and Grosu, Radu},
journal={Nature Machine Intelligence},
volume={2},
number={10},
pages={642--652},
year={2020},
publisher={Nature Publishing Group}
}
@article{hasani2021closed,
title={Closed-form continuous-time neural networks},
author={Hasani, Ramin and Lechner, Mathias and Amini, Alexander and Liebenwein, Lucas and Ray, Aaron and Tschaikowski, Max and Teschl, Gerald and Rus, Daniela},
journal={Nature Machine Intelligence},
volume={4},
number={11},
pages={992--1003},
year={2022},
publisher={Nature Publishing Group}
}

@software{ncps_mlx_2025,
title={ncps-mlx: Neural Circuit Policies for Apple MLX},
author={Renee, Sydney},
year={2025},
url={https://github.com/SolaceHarmony/ncps-mlx},
note={MLX implementation for Apple Silicon}
}
```

## Links

- GitHub Repository: https://github.com/SolaceHarmony/ncps-mlx
- Original NCP Repository: https://github.com/mlech26l/ncps
- Apple MLX: https://github.com/ml-explore/mlx
## Acknowledgments

This project builds upon the groundbreaking work of Mathias Lechner, Ramin Hasani, and their colleagues on Neural Circuit Policies. We are grateful for their research and the original open-source implementation that made this MLX port possible.
Developed by Sydney Renee for The Solace Project.

