Pynet is a Python library that provides the basic building blocks of deep learning.
Tensor operations are included in the pynet.functional namespace. Pynet provides a couple of basic tensor operations, such as matrix multiplication and tensor addition. New tensor operations can be created by implementing the Function abstract class (see the sketch after this list). The tensor functions included in the library are:
- Tensor addition [link]
- Matrix multiplication [link]
- Element-wise maximum [link]
- Element-wise sigmoid [link]
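For example, a custom element-wise exponential operation could look roughly like the sketch below. This is a minimal sketch: the Function import path and the forward/backward method names are assumptions based on the conventions visible elsewhere in this README, not the library's confirmed API.

import numpy as np

from pynet.tensor import Tensor
from pynet.functional.function import Function  # assumed module path

class Exp(Function):
    # Hypothetical element-wise exponential, e^x
    def forward(self, x):
        # Cache the output for the backward pass, since d/dx e^x = e^x
        self.out = Tensor(np.exp(x.ndarray))
        return self.out

    def backward(self, grad):
        # Chain rule: upstream gradient times the local derivative
        return Tensor(grad.ndarray * self.out.ndarray)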
Implementations of neural network layers and activation functions are included in the pynet.nn namespace. The namespace contains basic layer and activation function implementations and can be extended by implementing the Module abstract class (a sketch follows the list below). The layers and activation functions included in the Pynet library are:
- Fully connected layer (Linear) [link]
- Sequential module (container for other modules) [link]
- Rectified linear unit (ReLU) activation function [link]
- Sigmoid activation function [link]
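As an illustration, a custom hyperbolic tangent activation might be sketched as follows. The Module import path and the forward/backward method names are assumptions; consult the ReLU and Sigmoid sources for the actual interface.

import numpy as np

from pynet.tensor import Tensor
from pynet.nn.module import Module  # assumed module path

class Tanh(Module):
    # Hypothetical tanh activation
    def forward(self, x):
        self.out = Tensor(np.tanh(x.ndarray))
        return self.out

    def backward(self, grad):
        # d/dx tanh(x) = 1 - tanh(x)^2
        return Tensor(grad.ndarray * (1.0 - self.out.ndarray ** 2))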
Loss function implementations are included in the pynet.loss namespace. Custom loss functions can be created by implementing the Loss abstract class (see the sketch after this list). The loss functions included in the Pynet library are:
- Binary cross-entropy (BinaryCrossEntropy) [link]
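For instance, a mean squared error loss could be sketched as below. The Loss import path and the method names are assumptions; see the BinaryCrossEntropy source for the actual interface.

import numpy as np

from pynet.tensor import Tensor
from pynet.loss.loss import Loss  # assumed module path

class MeanSquaredError(Loss):
    # Hypothetical mean squared error loss
    def forward(self, y_pred, y_true):
        return float(np.mean((y_pred.ndarray - y_true.ndarray) ** 2))

    def backward(self, y_pred, y_true):
        # Gradient of the MSE with respect to the predictions
        n = y_pred.ndarray.size
        return Tensor(2.0 * (y_pred.ndarray - y_true.ndarray) / n)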
Optimization algorithm implementations are included in the pynet.optimizers namespace. New optimization algorithms can be created by implementing the Optimizer abstract class (see the sketch after this list). The optimization algorithms included in the library are:
- Stochastic Gradient Descent (SGD) [link]
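For example, plain gradient descent without momentum could be sketched as follows. The Optimizer import path and the step signature are assumptions; see the SGD source for the actual interface.

from pynet.optimizers.optimizer import Optimizer  # assumed module path

class PlainGradientDescent(Optimizer):
    # Hypothetical vanilla gradient descent (SGD without momentum)
    def __init__(self, learning_rate=0.01):
        self.learning_rate = learning_rate

    def step(self, param, grad):
        # In-place parameter update: theta <- theta - lr * grad
        param.ndarray -= self.learning_rate * grad.ndarray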
The pynet.data namespace provides an abstraction over the datasets used for training the neural network. An in-memory dataset implementation is already included in the library; one can also create their own dataset implementation by extending the Dataset abstract class, as in the sketch below.
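For example, a dataset backed by a .npz file on disk might look roughly like this. The Dataset import path and the exact methods to override are assumptions; see the InMemoryDataset source for the actual interface.

import numpy as np

from pynet.data.dataset import Dataset  # assumed module path

class NpzDataset(Dataset):
    # Hypothetical dataset reading samples from a .npz file
    def __init__(self, path):
        data = np.load(path)
        self.x = data["x"]
        self.y = data["y"]

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        # Return a single (input, target) pair
        return self.x[i], self.y[i]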
For the neural network's training/testing procedure, one can either use the default implementation found in the DefaultTrainer class or implement their own training/testing logic by extending the Trainer abstract class. One can also skip both options and implement the logic from scratch (for reference, see the DefaultTrainer train and test function implementations).
When using one of the Trainer implementations, Callback instances can be passed in to perform additional actions at various stages of the training/testing procedure. A couple of callbacks are implemented and ready to use, such as:
- PrintCallback, which prints all of the model's measured metrics to the console
- LrSchedule, which schedules the optimizer's learning rate
One can also easily create their own callback by extending the Callback abstract class, as in the sketch below.
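For instance, a callback that logs metrics to a CSV file might be sketched as follows. The Callback import path and the on_epoch_end hook name are assumptions; check the PrintCallback source for the hooks that actually exist.

from pynet.training.callbacks.callback import Callback  # assumed module path

class CsvLogger(Callback):
    # Hypothetical callback appending per-epoch metrics to a CSV file
    def __init__(self, path):
        self.path = path

    def on_epoch_end(self, epoch, metrics):
        # Hook name and signature are assumptions
        with open(self.path, "a") as f:
            values = ",".join(str(v) for v in metrics.values())
            f.write(f"{epoch},{values}\n")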
The table below summarizes this feature section. Each row contains the name of a namespace, a description of the functionality it provides, and the abstract class to implement in order to extend that functionality.
| Namespace | Functionality | Abstract class |
|---|---|---|
| pynet.functional | Tensor operations | Function |
| pynet.nn | Neural network layers and activation functions | Module |
| pynet.loss | Loss functions | Loss |
| pynet.optimizers | Optimization algorithms | Optimizer |
| pynet.data | Data manipulation | Dataset |
| pynet.training.trainer | Neural network training/testing procedure | Trainer |
| pynet.training.callbacks | Callbacks during training/testing | Callback |
Use the package manager pip to install Pynet:
pip install pynet-dl
Here is a simple example of how to use the Pynet library. In this example, we train a very small neural network on the make_circles dataset from the sklearn.datasets package.
First, we need to import all the necessary components:
import numpy as np
from sklearn.datasets import make_circles
# Tensor class
from pynet.tensor import Tensor
# Neural network modules
from pynet.nn.sequential import Sequential
from pynet.nn.linear import Linear
from pynet.nn.relu import ReLU
from pynet.nn.sigmoid import Sigmoid
# Datasets
from pynet.data.in_memory import InMemoryDataset
# Loss functions
from pynet.loss.bce import BinaryCrossEntropy
# Optimizers
from pynet.optimizers.sgd import SGD
# Weight initializers
from pynet.initializers.he_normal import HeNormal
# Trainer and training/testing callbacks
from pynet.training.trainer.default import DefaultTrainer
from pynet.training.callbacks.print import PrintCallback
Then, load the dataset and preprocess it. Since a single input to the neural network (i.e. a sample x_i) is expected to be of shape [n_features x 1] (i.e. a column vector), we need to add one extra dimension to the X array:
X, y = make_circles(n_samples=1000, noise=0.025)
# Inputs to the neural net must be of shape [n_features, 1]
X = np.expand_dims(X, axis=2)
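With n_samples=1000 and the two features produced by make_circles, X now has the expected shape:

print(X.shape)  # (1000, 2, 1): 1000 samples, each a 2x1 column vector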
Then, create the network, dataset, loss function, optimizer, and trainer, and optionally specify a list of callbacks:
epochs = 20
model = Sequential([
Linear(inputs=2, neurons=16, initializer=HeNormal()),
ReLU(),
Linear(inputs=16, neurons=1, initializer=HeNormal()),
Sigmoid()
])
dataset = InMemoryDataset(X, y)
loss_f = BinaryCrossEntropy()
sgd = SGD(learning_rate=0.01, momentum=0.9)
callbacks = [PrintCallback()]
trainer = DefaultTrainer()
And then run the training:
trainer.train(
model=model,
train_dataset=dataset,
val_dataset=None,
loss_f=loss_f,
optimizer=sgd,
epochs=epochs,
callbacks=callbacks
)
After training the neural network, we can use it for inference:
# Make prediction for the i-th sample in the dataset
i = 0
y_pred = model.forward(Tensor(X[i])).ndarray.item()
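Since the network ends with a Sigmoid, y_pred is the predicted probability of the positive class; a hard 0/1 label can be obtained by thresholding at 0.5:

# Convert the predicted probability into a class label
label = int(y_pred > 0.5)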
More examples of the Pynet library usage can be found in the notebooks directory.