This project implements a basic neural network from scratch in Python. The network can be trained to classify or make predictions from input data without relying on external machine learning libraries. It supports:
- Forward propagation
- Backpropagation
- Adjustable learning rate
- Multi-layer architecture
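As a rough sketch of how these pieces might fit together (the class name, constructor signature, and use of NumPy are assumptions for illustration, not taken from the repository), a multi-layer network with an adjustable learning rate could be set up like this:

```python
import numpy as np

class NeuralNetwork:
    """Hypothetical skeleton, not the repository's actual class."""

    def __init__(self, layer_sizes, learning_rate=0.1):
        # e.g. layer_sizes=[2, 4, 1]: 2 inputs, one hidden layer of 4 units, 1 output
        self.learning_rate = learning_rate
        # One weight matrix and one bias row per pair of adjacent layers
        self.weights = [np.random.randn(n_in, n_out) * 0.1
                        for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros((1, n_out)) for n_out in layer_sizes[1:]]
```

With that kind of interface, `NeuralNetwork([2, 4, 1], learning_rate=0.5)` would describe a 2-4-1 architecture with a learning rate of 0.5.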
- Clone the repository:

  ```bash
  git clone https://github.com/darwin-luque/neural-network.git
  cd neural-network
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the script:

  ```bash
  python testing.py
  ```
Note: The repository does not ship with a dataset, so generate your own. The XOR problem is a simple way to test the network.
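For reference, the full XOR truth table is small enough to hard-code; the variable names and use of NumPy here are just a suggestion:

```python
import numpy as np

# XOR truth table: the output is 1 only when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs, shape (4, 2)
y = np.array([[0], [1], [1], [0]])              # targets, shape (4, 1)
```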
- Forward Propagation: The network computes its output by passing the inputs through each layer in turn, applying that layer's weights, biases, and activation function.
- Backpropagation: The output error is computed and propagated backwards through the layers, and the weights are updated in proportion to their contribution to that error; both passes are sketched below.
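The following is a minimal sketch of a single forward and backward step, assuming one hidden layer, sigmoid activations, and a squared-error loss; the activation, loss, and variable names in the repository may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer: 2 inputs -> 3 hidden units -> 1 output
W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros((1, 3))
W2, b2 = rng.normal(scale=0.5, size=(3, 1)), np.zeros((1, 1))
learning_rate = 0.5

# Forward propagation: push the inputs through the layers one at a time
hidden = sigmoid(X @ W1 + b1)
output = sigmoid(hidden @ W2 + b2)

# Backpropagation: error at the output, pushed back through the hidden layer
output_delta = (output - y) * output * (1 - output)           # gradient at the output layer
hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)  # gradient at the hidden layer

# Gradient-descent weight and bias updates
W2 -= learning_rate * hidden.T @ output_delta
b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
W1 -= learning_rate * X.T @ hidden_delta
b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)
```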
To showcase how the neural network works, you can train it on a simple dataset (e.g., the XOR problem) and evaluate the model’s accuracy.
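One way such a demo could look, repeating the forward/backward step from the sketch above inside a training loop and then measuring accuracy on the four XOR inputs (this is an illustration of the workflow, not the repository's testing.py):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 4 hidden units -> 1 output
W1, b1 = rng.normal(scale=1.0, size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(scale=1.0, size=(4, 1)), np.zeros((1, 1))
learning_rate = 0.5

for epoch in range(5000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass and gradient-descent update
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ output_delta
    b2 -= learning_rate * output_delta.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ hidden_delta
    b1 -= learning_rate * hidden_delta.sum(axis=0, keepdims=True)

# Evaluate: threshold the final outputs at 0.5 and compare with the targets
predictions = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print("accuracy:", (predictions == y).mean())
```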
- Adding more complex activation functions such as ReLU or Tanh.
- Implementing optimization techniques such as momentum or learning rate decay (both ideas are sketched briefly after this list).
- Expanding the network to include convolutional neural networks (CNNs) or recurrent neural networks (RNNs).
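None of these exist in the project yet; as a rough indication of what the first two might involve, here are illustrative definitions of ReLU, Tanh, a momentum update, and a simple learning-rate decay schedule (names and hyperparameters are placeholders):

```python
import numpy as np

def relu(z):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return np.maximum(0.0, z)

def tanh(z):
    # Hyperbolic tangent squashes values into (-1, 1)
    return np.tanh(z)

def momentum_step(weights, gradient, velocity, learning_rate=0.1, beta=0.9):
    # Keep a running "velocity" of past gradients and blend it into each update
    velocity = beta * velocity - learning_rate * gradient
    return weights + velocity, velocity

def decayed_learning_rate(initial_rate, epoch, decay=0.001):
    # Shrink the step size as training progresses
    return initial_rate / (1.0 + decay * epoch)
```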