This project generates and visualizes beautiful chaotic attractors using simple feedforward neural networks with feedback, based on the paper "Artificial Neural Net Attractors" by J.C. Sprott. This is a revamped version of the project developed during the Nonlinear Dynamics course at NTU SPMS; an attractor generated by it won the CoSScienceArt competition.
The neural network architecture consists of:
- An input layer with D elements
- A hidden layer with N neurons
- A single output, which is fed back to the input
Because of the feedback loop, the network acts as a nonlinear map capable of chaotic behavior. The parameters (N, D, and the scaling factor s) determine the time evolution of the system, and some parameter sets produce trajectories that settle onto an attractor. These attractors can be fixed points, limit cycles, tori, or strange (chaotic) attractors.
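The map can be written compactly. Below is a minimal sketch of one way to implement the iteration, under assumed conventions (uniform random weights, the scaling factor s applied to the output weights, and a delay line holding the last D outputs); the exact initialization used in `main.py` may differ:

```python
import numpy as np

def iterate_network(N=4, D=16, s=0.75, tmax=100000, discard=1000, seed=None):
    """Sketch of the feedback network as a scalar map: the single output
    is pushed into a delay line of the last D outputs, which forms the
    next input vector. Weight distributions and the placement of the
    scaling factor s are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-1.0, 1.0, size=(N, D))   # input-to-hidden weights
    b = s * rng.uniform(-1.0, 1.0, size=N)    # scaled hidden-to-output weights
    x = np.empty(tmax + D)
    x[:D] = rng.uniform(-1.0, 1.0, size=D)    # random initial delay line
    for t in range(D, tmax + D):
        x[t] = b @ np.tanh(a @ x[t - D:t][::-1])  # newest sample first
    return x[D + discard:]                    # drop the warm-up transient
```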
- Clone the repository:
```bash
git clone https://github.com/anyakors/neural_attractors.git
cd neural_attractors
```
Generate and visualize an attractor with default parameters:
```bash
python main.py
```

Or specify the parameters explicitly:

```bash
python main.py --N 4 --D 16 --s 0.75 --tmax 100000 --seed 3160697950
```
- `--N`: Number of neurons in the hidden layer (default: 4)
- `--D`: Dimension of the input vector (default: 16)
- `--s`: Scaling factor for the output (default: 0.75)
- `--tmax`: Number of iterations (default: 100000)
- `--discard`: Number of initial warm-up iterations to discard (default: 1000)
- `--seed`: Random seed for reproducibility
Customize the visualization:

```bash
python main.py --skip-value 8 --figsize 16 16 --cmap viridis --linewidth 0.2 --alpha 0.15
```
- `--skip-value`: Number of points to skip for visualization (default: 16)
- `--figsize`: Figure size in inches (default: 12 12)
- `--dpi`: DPI for saved figures (default: 300)
- `--cmap`: Colormap for visualization (default: 'Spectral')
- `--linewidth`: Line width for visualization (default: 0.1)
- `--alpha`: Alpha (transparency) for visualization (default: 0.1)
- `--interpolate-steps`: Interpolation steps for visualization (default: 3)
Select which plots to produce:

```bash
python main.py --plot-time-series --plot-scatter --plot-lyapunov --show
```
- `--plot-trajectory`: Plot the attractor trajectory (default: True)
- `--plot-time-series`: Plot the time series
- `--plot-scatter`: Plot a scatter plot of the attractor
- `--plot-lyapunov`: Calculate and plot the Lyapunov exponent
- `--lyapunov-iterations`: Number of iterations for the Lyapunov exponent calculation (default: 5000)
- `--no-plot`: Don't create any plots
- `--show`: Display plots instead of saving them
Control where output is written:

```bash
python main.py --output-dir images --prefix my_attractor --save-data
```
- `--output-dir`: Output directory (default: 'output')
- `--prefix`: Prefix for output files (default: 'attractor')
- `--save-data`: Save trajectory data to files
Generate multiple attractors:
```bash
python main.py --count 10 --auto-filter --save-data
```
- `--count`: Number of attractors to generate (default: 1)
- `--auto-filter`: Filter out uninteresting attractors
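The exact `--auto-filter` criterion is not spelled out here; one plausible heuristic, shown purely as an assumption and not necessarily what the repository implements, is to reject trajectories that diverge or collapse to a fixed point:

```python
import numpy as np

def is_interesting(x, tol=1e-6):
    """Hypothetical filter: keep only bounded, non-static trajectories.
    An assumed heuristic, not necessarily what --auto-filter does."""
    tail = np.asarray(x)[-1000:]
    if not np.all(np.isfinite(tail)):  # trajectory blew up
        return False
    if np.std(tail) < tol:             # settled onto a fixed point
        return False
    return True
```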
Plot previously saved trajectory data:

```bash
python main.py --load-data output/attractor_x1.npy output/attractor_x2.npy
```
Generate a long, high-resolution render:

```bash
python main.py --N 4 --D 32 --s 0.5 --tmax 1000000 --discard 10000 --skip-value 8 --figsize 16 16 --dpi 600
```
Compute the Lyapunov exponent over more iterations:

```bash
python main.py --plot-lyapunov --lyapunov-iterations 10000
```
Batch-generate attractors and keep only the interesting ones:

```bash
python main.py --count 20 --auto-filter --save-data
```
Sweep over the scaling factor `s`:

```bash
for s in 0.2 0.3 0.4 0.5 0.6 0.7 0.8; do
    python main.py --s $s --prefix "attractor_s${s}" --save-data
done
```
See also the helper script `lyapunov_multi.sh`.
The Lyapunov exponent measures the rate of separation of two trajectories of a dynamical system that start infinitesimally close to each other. Its sign indicates whether the system is chaotic:
- Positive Lyapunov exponent: Indicates chaos. Nearby trajectories diverge exponentially over time.
- Zero Lyapunov exponent: Indicates stability or a bifurcation point.
- Negative Lyapunov exponent: Indicates stability. Nearby trajectories converge over time.
For neural network attractors, the Lyapunov exponent calculation (sketched below):
- Takes an initial state and a slightly perturbed copy of it, and runs both forward
- Measures the divergence between the two trajectories
- Computes the logarithm of this divergence rate, averaged over the run
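A minimal sketch of this two-trajectory method follows; the `step` callable (one network iteration acting on the delay-line state) is an assumed interface, not the actual function in this repository:

```python
import numpy as np

def lyapunov_exponent(step, x0, n_iter=5000, eps=1e-8):
    """Estimate the largest Lyapunov exponent by evolving a reference
    orbit and a perturbed orbit, renormalizing their separation back
    to eps each step and averaging the log of the stretch factor."""
    x = np.asarray(x0, dtype=float)
    y = x + eps * np.random.default_rng(0).standard_normal(x.shape)
    total = 0.0
    for _ in range(n_iter):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        if d == 0.0:                    # orbits merged: strongly stable
            return -np.inf
        total += np.log(d / eps)
        y = x + (eps / d) * (y - x)     # renormalize the perturbation
    return total / n_iter               # positive => chaotic
```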
Example usage:
```bash
python main.py --N 4 --D 32 --s 0.8 --plot-lyapunov --lyapunov-iterations 8000
```
This project is licensed under the MIT License - see the LICENSE file for details.
Inspired by the original work of J.C. Sprott, "Artificial Neural Net Attractors", Computers & Graphics, Vol. 22, No. 1, pp. 143-149, 1998 (https://doi.org/10.1016/S0097-8493(97)00089-7).