
Feature/documentation GitHub Actions #29

Merged · 11 commits · Jul 18, 2023
47 changes: 47 additions & 0 deletions .github/workflows/docs.yaml
@@ -0,0 +1,47 @@
name: Build and Deploy Documentation

on:
  push:
    branches:
      - main
      - dev

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.9" # quoted so YAML treats the version as a string

      - name: Install Dependencies
        run: |
          python -m pip install -U pip
          python -m pip install -e ".[docs]" --no-cache-dir

      - name: Determine Version
        id: determine_version
        run: |
          # Docs are versioned after the package on main and published as "dev" on dev
          if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            VERSION="v$(python -c "import quadra; print(quadra.__version__)")"
          elif [[ "${{ github.ref }}" == "refs/heads/dev" ]]; then
            VERSION="dev"
          fi
          # Write the step output via $GITHUB_OUTPUT (the older ::set-output syntax is deprecated)
          echo "version=$VERSION" >> "$GITHUB_OUTPUT"

      - name: Build Documentation
        run: |
          git config user.name "${GITHUB_ACTOR}"
          git config user.email "${GITHUB_ACTOR}@users.noreply.github.com"
          git fetch origin gh-pages --depth=1
          VERSION="${{ steps.determine_version.outputs.version }}"
          if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            # Release builds move the "latest" alias to the new version
            mike deploy --push --update-aliases $VERSION latest
          else
            mike deploy --push $VERSION
          fi
2 changes: 1 addition & 1 deletion README.md
@@ -222,7 +222,7 @@ quadra experiment=custom/<config_name> logger=csv

It will run the experiment using the configuration file you have just created, applying the default parameters from the classification configuration file, and log the metrics to a CSV file. You can add or customize the parameters in the configuration file to fit your needs.

-For more information about advanced usage, please check [tutorials](/tutorials/configurations.html) and [task specific examples](/tutorials/examples/classification.html).
+For more information about advanced usage, please check [tutorials](https://orobix.github.io/quadra/tutorials/configurations.html) and [task specific examples](https://orobix.github.io/quadra/tutorials/examples/classification.html).
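As an illustration, a custom experiment file of the kind referred to above might look like the following sketch, where the base config path and field names are assumptions rather than quadra's actual layout:

```yaml
# @package _global_
# Hypothetical custom experiment: inherit a default classification experiment
# and override only what differs. All names below are illustrative.
defaults:
  - base/classification # assumed path of the default classification experiment

core:
  name: my_custom_experiment # run name picked up by the logger
```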

## Development

4 changes: 2 additions & 2 deletions docs/tutorials/configurations.md
@@ -79,7 +79,7 @@ For example we can set the seed, decide for a run name, or set the log level.

### Datamodule

-Datamodule setting files are used to configure the datamodule which manages the datasets and dataloaders used in experiment. For a detailed explanatation on how to implement `DataModule` classes, please refer to the [datamodule documentation](/tutorials/datamodules.html).
+Datamodule setting files are used to configure the datamodule which manages the datasets and dataloaders used in an experiment. For a detailed explanation on how to implement `DataModule` classes, please refer to the [datamodule documentation](../tutorials/datamodules.md).
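For orientation, a datamodule config of this kind typically instantiates a `DataModule` class through hydra's `_target_` mechanism. A minimal sketch, in which the class path and all fields are assumptions rather than quadra's actual API:

```yaml
# Hypothetical datamodule config; _target_ and all fields are illustrative.
_target_: quadra.datamodules.ClassificationDataModule
data_path: /path/to/dataset
batch_size: 32
num_workers: 8
```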

Here is the structure of the folder:

@@ -327,7 +327,7 @@ Each scheduler file defines how we initialize the learning rate schedulers with

### Task

-The tasks are the building blocks containing the actual training and evaluation logic. They are discussed in more details in the [tasks](/tutorials/tasks.html) section.
+The tasks are the building blocks containing the actual training and evaluation logic. They are discussed in more detail in the [tasks](../tutorials/tasks.md) section.

### Trainer

2 changes: 1 addition & 1 deletion docs/tutorials/contribution.md
@@ -9,7 +9,7 @@ In this guide, we'll cover the steps you should take to ensure your code meets t

## Setting up the Environment

-Before contributing to the repository, you'll need to set up your development environment. Please check the [Getting Started Guide](/getting_started.html) for instructions on how to set up your environment.
+Before contributing to the repository, you'll need to set up your development environment. Please check the [Getting Started Guide](../getting_started.md) for instructions on how to set up your environment.

After setting up your environment you can install `Quadra` Library in different ways:

2 changes: 1 addition & 1 deletion docs/tutorials/examples/anomaly_detection.md
@@ -10,7 +10,7 @@ This example will demonstrate how to create custom experiments starting from def

Let's start with the dataset that we are going to use. Since we are using the base anomaly datamodule, images
and masks must be arranged in a folder structure that follows the anomaly datamodule guidelines defined in the
-[anomaly datamodule documentation](/tutorials/datamodules.html#anomaly-detection).
+[anomaly datamodule documentation](../../tutorials/datamodules.md#anomaly-detection).
For this example, we will use the `mnist` dataset (using 9 as good and all the other numbers as anomalies); the dataset will be automatically downloaded by the generic experiment described next.

6 changes: 3 additions & 3 deletions docs/tutorials/examples/segmentation.md
@@ -7,7 +7,7 @@ In this page, we will show you how to run a segmentation experiment (either bina
This example will demonstrate how to create a custom experiment starting from default settings.
### Dataset

-Let's start with the dataset we are going to use. Since we are using base segmentation datamodule, we must arrange our images and masks in a folder structure that follows the segmentation datamodule guideline defined in the [segmentation datamodule documentation](/tutorials/datamodules.html#segmentation).
+Let's start with the dataset we are going to use. Since we are using the base segmentation datamodule, we must arrange our images and masks in a folder structure that follows the segmentation datamodule guidelines defined in the [segmentation datamodule documentation](../../tutorials/datamodules.md#segmentation).
Imagine that we have a dataset with the following structure:

@@ -197,9 +197,9 @@ core:
When defining the `idx_to_class` dictionary, the keys should be the class indices and the values should be the class names. Class indices start from 1.
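For instance, a mapping following that rule might look like the sketch below; where `idx_to_class` sits in the config tree is an assumption:

```yaml
# Hypothetical placement; only the 1-based index/name convention comes from the note above.
datamodule:
  idx_to_class:
    1: crack
    2: scratch
```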


-In the final configuration experiment we have specified the path to the dataset, batch size, split files, gpu device, experiment name and toggled some evaluation options.
+In the final configuration experiment we have specified the path to the dataset, batch size, split files, GPU device, experiment name and toggled some evaluation options.

-By default data will be logged to mlflow if it is [configured properly](/tutorials/install.html#mlflow-credentials). If mlflow is not available it's possible to configure a simple csv logger by adding an override to the file above:
+By default data will be logged to `Mlflow`. If `Mlflow` is not available it's possible to configure a simple csv logger by adding an override to the file above:

```yaml
# @package _global_
```
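The block above is truncated in the diff view. A minimal sketch of what such an override could contain, assuming quadra exposes the logger as a standard hydra config group (the CLI form `logger=csv` in the README points the same way):

```yaml
# @package _global_
defaults:
  - override /logger: csv # assumed config group; swaps the default logger for the csv one
```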
2 changes: 1 addition & 1 deletion docs/tutorials/tasks.md
@@ -43,7 +43,7 @@ graph LR;
- **[`Segmentation Task`][quadra.tasks.Segmentation]:** It has the same functionality as the lightning task but it will also generate segmentation reports on demand.
- **[`Classification Task`][quadra.tasks.Classification]:** This task is designed to train from scratch or finetune a classification model using the `pytorch-lightning` library.
- **[`SklearnClassification Task`][quadra.tasks.classification.SklearnClassification]:** This task is designed to train an `sklearn` classifier on top of a torch feature extractor.
-- **[`PatchSklearnClassification Task`][quadra.tasks.classification.PatchSklearnClassification]:** This task is designed to train an `sklearn` patch classifier on top of a torch feature extractor.
+- **[`PatchSklearnClassification Task`][quadra.tasks.patch.PatchSklearnClassification]:** This task is designed to train an `sklearn` patch classifier on top of a torch feature extractor.
- **[`Anomalib Detection Task`][quadra.tasks.AnomalibDetection]:** This task is designed to train an anomaly detection model using the `anomalib` library.
- **[`SSL (Self Supervised Learning) Task`][quadra.tasks.SSL]:** This task is designed to train a torch module with a given SSL algorithm.
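Since each task is a class under `quadra.tasks`, a configuration would typically select one through hydra instantiation. A minimal sketch, in which only the class path comes from the list above and the surrounding structure is assumed:

```yaml
# Hypothetical task config; `quadra.tasks.Classification` is taken from the list above.
task:
  _target_: quadra.tasks.Classification
```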

14 changes: 7 additions & 7 deletions pyproject.toml
@@ -90,13 +90,13 @@ dev = [
]

docs = [
-    "mkdocs==1.4.*",
-    "mkdocs-material==9.1.*",
-    "mkdocstrings-python==0.8.*",
-    "mkdocs-gen-files==0.4.*",
-    "mkdocs-literate-nav==0.6.*",
-    "mkdocs-section-index==0.3.*",
-    "mike==1.1.*",
+    "mkdocs==1.4.3",
+    "mkdocs-material==9.1.18",
+    "mkdocstrings-python==1.2.0",
+    "mkdocs-gen-files==0.5.0",
+    "mkdocs-literate-nav==0.6.0",
+    "mkdocs-section-index==0.3.5",
+    "mike==1.1.2",
    "cairosvg==2.7.0"
]

1 change: 0 additions & 1 deletion quadra/utils/vit_explainability.py
@@ -299,7 +299,6 @@ class LinearModelPytorchWrapper(torch.nn.Module):

    Args:
        backbone: Backbone
-       num_classes: Number of classes
        linear_classifier: ScikitLearn linear classifier model
        device: The device to use. Defaults to "cpu"
        example_input: Input example needed to obtain output shape