A novel approach to neural network architecture that incorporates harmonic frequencies into activation functions, inspired by Fourier analysis principles.
An experimental implementation of a Multi-Layer Perceptron using frequency-modulated activation functions, where each neuron operates at a different harmonic frequency. This approach draws parallels between neural network components and Fourier series, treating weights as amplitudes and introducing frequency as a structural parameter.
# Harmonic MLP: Neural Networks with Frequency-Based Activation
This repository presents an experimental neural network architecture that incorporates harmonic frequencies into its activation functions. The implementation treats neural network components through the lens of Fourier analysis, where activation functions act as wave components with different frequencies following a harmonic series.
## Key Features

- **Fourier-Inspired Architecture**: Each neuron in a layer operates at a different frequency following the harmonic series
- **Frequency-Scaled Initialization**: Weight initialization is scaled according to the neuron's frequency
- **Frequency-Aware Optimization**: Gradient updates are scaled based on each neuron's frequency
- **Smooth Activation**: Uses a modified activation function that combines sinusoidal behavior with residual connections
The network architecture implements:
- Frequency-based activation functions where each neuron operates at a harmonic of the base frequency (sketched below)
- Batch normalization and dropout for training stability
- Frequency-aware weight initialization
- Residual connections to improve gradient flow
- A custom optimizer that scales updates based on neuron frequencies
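The activation idea can be illustrated with a minimal sketch (not the repository's exact code): each neuron's pre-activation passes through a sinusoid whose frequency is its position in the harmonic series, and the identity term plays the role of the residual connection. The function name and `base_freq` parameter here are illustrative.

```python
import torch

def harmonic_activation(z, base_freq=1.0):
    # z: pre-activations of shape (batch, n_neurons).
    # Neuron i is assigned the i-th harmonic of base_freq; adding z back
    # acts as the residual connection described above.
    n_neurons = z.shape[-1]
    freqs = base_freq * torch.arange(1, n_neurons + 1, dtype=z.dtype, device=z.device)
    return torch.sin(freqs * z) + z
```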
## Requirements

```
torch
torchvision
numpy
```

## Usage

Basic usage example:

```python
import torch.optim as optim

# Initialize model
model = HarmonicMLP()

base_optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
freq_optimizer = FrequencyAwareOptimizer(base_optimizer, model, freq_scale_factor=0.1)

model.train()
```
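A hypothetical training loop on top of this setup might look like the sketch below. It assumes that `FrequencyAwareOptimizer` forwards `zero_grad()` and `step()` to the wrapped optimizer, and that `train_loader` is a standard torchvision MNIST `DataLoader`; neither is shown in the snippet above.

```python
import torch.nn.functional as F

for images, labels in train_loader:                  # assumed MNIST DataLoader
    freq_optimizer.zero_grad()                       # assumes the wrapper forwards zero_grad()
    logits = model(images.view(images.size(0), -1))  # flatten 28x28 images to 784 features
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    freq_optimizer.step()                            # frequency-scaled parameter update
```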
## Model Architecture

The current implementation includes:
- Input layer: 784 neurons (for MNIST)
- Hidden layers: [512, 256] neurons (see the constructor sketch below)
- Output layer: 10 neurons
- Batch normalization after each hidden layer
- Dropout (0.2) for regularization
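A sketch of how this stack could be assembled from the `HarmonicLayer` blocks described under Implementation Details below is shown here. The block ordering (harmonic layer, then batch norm, then dropout) and the plain linear output head are assumptions; the repository's actual constructor may differ.

```python
import torch.nn as nn

class HarmonicMLP(nn.Module):
    def __init__(self, input_size=784, hidden_sizes=(512, 256), num_classes=10, dropout=0.2):
        super().__init__()
        sizes = [input_size, *hidden_sizes]
        blocks = []
        for in_size, out_size in zip(sizes[:-1], sizes[1:]):
            blocks += [
                HarmonicLayer(in_size, out_size),  # frequency-based activation inside
                nn.BatchNorm1d(out_size),          # batch normalization after each hidden layer
                nn.Dropout(dropout),               # dropout (0.2) for regularization
            ]
        self.hidden = nn.Sequential(*blocks)
        self.head = nn.Linear(sizes[-1], num_classes)

    def forward(self, x):
        return self.head(self.hidden(x))
```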
## Results

On the MNIST dataset, the model typically achieves:

- 95% test accuracy within the first few epochs
- Fast initial convergence
- Stable training behavior
## Fourier Analogy

The implementation draws parallels between neural networks and Fourier analysis:
- Weights ≈ amplitude
- Bias ≈ phase
- Activation functions ≈ wave components
- Each neuron operates at a different harmonic frequency (illustrated below)
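Read literally, the k-th neuron of a layer contributes something like one term of a Fourier series. The formula below is an illustrative interpretation of the analogy, not the repository's exact math.

```python
import torch

def fourier_like_term(x, w_k, b_k, k, base_freq=1.0):
    # w_k plays the role of the amplitude, b_k the phase, and k selects the harmonic.
    return w_k * torch.sin(k * base_freq * x + b_k)
```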
## Contributing

Contributions are welcome! Some interesting areas to explore:
- Different frequency distributions
- Alternative activation functions
- Applications to other datasets
- Performance optimizations
## Citation

If you use this code in your research, please cite:

```bibtex
@software{harmonic_mlp,
  title  = {Harmonic MLP: Neural Networks with Frequency-Based Activation},
  year   = {2024},
  author = {[Your Name]},
  url    = {[Repository URL]}
}
```
## License

[Choose appropriate license - MIT suggested for open collaboration]
## Implementation Details

The key components of the implementation are:
### HarmonicLayer
```python
class HarmonicLayer(nn.Module):
    def __init__(self, input_size, output_size, base_freq=1.0, max_freq=10.0):
        # Layer implementation with frequency-based activation
        ...
```
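A fuller, self-contained sketch of what this layer might look like follows. The frequency assignment (harmonics of `base_freq` clipped at `max_freq`), the 1/frequency initialization scale, the `freqs` buffer and `linear` attribute names, and the `sin(f·z) + z` activation are assumptions consistent with the description above, not a copy of the repository's code.

```python
import math
import torch
import torch.nn as nn

class HarmonicLayer(nn.Module):
    def __init__(self, input_size, output_size, base_freq=1.0, max_freq=10.0):
        super().__init__()
        self.linear = nn.Linear(input_size, output_size)
        # Assign neuron j the j-th harmonic of base_freq, clipped at max_freq (assumption).
        freqs = torch.clamp(
            base_freq * torch.arange(1, output_size + 1, dtype=torch.float32), max=max_freq
        )
        self.register_buffer("freqs", freqs)
        # Frequency-scaled initialization: higher-frequency neurons start with smaller weights.
        with torch.no_grad():
            std = 1.0 / math.sqrt(input_size)
            self.linear.weight.copy_(torch.randn_like(self.linear.weight) * std / freqs.unsqueeze(1))
            self.linear.bias.zero_()

    def forward(self, x):
        z = self.linear(x)
        # Sinusoidal activation at each neuron's frequency plus a residual term.
        return torch.sin(self.freqs * z) + z
```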
### FrequencyAwareOptimizer
```python
class FrequencyAwareOptimizer:
    def __init__(self, optimizer, model, freq_scale_factor=0.1):
        # Optimizer that scales updates based on frequencies
        ...
```
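As with the layer above, the following is only a sketch under stated assumptions: it dampens each harmonic layer's gradients by `1 / (1 + freq_scale_factor · f)` before delegating to the wrapped optimizer, and it assumes layers expose the `freqs` buffer and `linear` submodule used in the layer sketch. The repository's actual scaling rule may differ.

```python
class FrequencyAwareOptimizer:
    def __init__(self, optimizer, model, freq_scale_factor=0.1):
        self.optimizer = optimizer
        self.model = model
        self.freq_scale_factor = freq_scale_factor

    def zero_grad(self):
        self.optimizer.zero_grad()

    def step(self):
        # Dampen gradients of high-frequency neurons before the base optimizer runs
        # (assumed scaling rule; see the source for the definitive version).
        for module in self.model.modules():
            if hasattr(module, "freqs") and hasattr(module, "linear"):
                scale = 1.0 / (1.0 + self.freq_scale_factor * module.freqs)
                if module.linear.weight.grad is not None:
                    module.linear.weight.grad.mul_(scale.unsqueeze(1))
                if module.linear.bias.grad is not None:
                    module.linear.bias.grad.mul_(scale)
        self.optimizer.step()
```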
See the source code for complete implementation details.
## Planned Experiments

- Learning-rate sweep
- Runs with batch_size = 64
- A smaller hidden-layer configuration ([32, 16])