This repository is for the Kaggle competition Child Mind Institute - Detect Sleep States.
1. Install rye

MacOS

```sh
curl -sSf https://rye-up.com/get | bash
echo 'source "$HOME/.rye/env"' >> ~/.zshrc
source ~/.zshrc
```

Linux

```sh
curl -sSf https://rye-up.com/get | bash
echo 'source "$HOME/.rye/env"' >> ~/.bashrc
source ~/.bashrc
```

Windows

See the rye installation documentation.

2. Sync dependencies and activate the virtual environment

```sh
rye sync
. .venv/bin/activate
```
3. Rewrite run/conf/dir/local.yaml to match your environment

```yaml
data_dir:
processed_dir:
output_dir:
model_dir:
sub_dir: ./
```
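Optionally, here is a minimal sketch for sanity-checking those paths. The helper is not part of the repository; it only assumes omegaconf (which Hydra already depends on) is available in the synced environment.

```python
from pathlib import Path

from omegaconf import OmegaConf

# Load the directory config you just edited and flag any paths that do not exist yet.
cfg = OmegaConf.load("run/conf/dir/local.yaml")
for key, value in cfg.items():
    if value is not None and not Path(str(value)).exists():
        print(f"{key}: {value} does not exist")
```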
4. Download the competition data

```sh
cd data
kaggle competitions download -c child-mind-institute-detect-sleep-states
unzip child-mind-institute-detect-sleep-states.zip
```

5. Preprocess the data

```sh
rye run python run/prepare_data.py -m phase=train,test
```
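If you want to peek at the raw data before preprocessing, a minimal sketch is below. It assumes pandas with a parquet engine such as pyarrow; train_series.parquet and train_events.csv are the competition's file names, not something this repository creates.

```python
import pandas as pd

# Quick look at the raw competition files downloaded into ./data.
series = pd.read_parquet(
    "data/train_series.parquet",
    columns=["series_id", "step", "anglez", "enmo"],
)
events = pd.read_csv("data/train_events.csv")

print(series.head())
print(events["event"].value_counts())  # counts of onset / wakeup labels
```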
The following command trains the model that scored LB 0.714:

```sh
rye run python run/train.py downsample_rate=2 duration=5760 exp_name=exp001 dataset.batch_size=32
```

Because Hydra is used for configuration, you can easily run experiments by overriding parameters on the command line. For example, the following command runs experiments with a downsample_rate of 2, 4, 6, and 8:

```sh
rye run python run/train.py -m downsample_rate=2,4,6,8
```
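These overrides work because run/train.py is a Hydra entry point. Below is a minimal sketch of such a script; the config_path/config_name values are illustrative, and the repository's actual entry point will differ.

```python
import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(config_path="conf", config_name="train", version_base=None)
def main(cfg: DictConfig) -> None:
    # Command-line overrides such as `downsample_rate=2 duration=5760` are merged
    # into cfg before this function is called. With `-m` (multirun), Hydra calls
    # this function once per combination of swept values.
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    main()
```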
Then upload the training outputs as a Kaggle dataset:

```sh
rye run python tools/upload_dataset.py
```
The following command runs inference for the LB 0.714 model:

```sh
rye run python run/inference.py dir=kaggle exp_name=exp001 weight.run_name=single downsample_rate=2 duration=5760 model.params.encoder_weights=null pp.score_th=0.005 pp.distance=40 phase=test
```
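The pp.score_th and pp.distance overrides tune the post-processing of the per-step event scores. Below is a hedged sketch of the usual peak-picking step, using scipy.signal.find_peaks; the actual logic in run/inference.py may differ.

```python
import numpy as np
from scipy.signal import find_peaks


def extract_event_steps(scores: np.ndarray, score_th: float, distance: int) -> np.ndarray:
    """Keep local maxima above score_th that are at least `distance` steps apart."""
    peaks, _ = find_peaks(scores, height=score_th, distance=distance)
    return peaks


# Example with random scores standing in for model output.
rng = np.random.default_rng(0)
print(extract_event_steps(rng.random(1_000), score_th=0.9, distance=40))
```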
The model is built from two components: a feature_extractor and a decoder.

The feature_extractor and decoder options that can be specified are as follows.

feature_extractor
- CNNSpectrogram
- LSTMFeatureExtractor
- PANNsFeatureExtractor
- SpecFeatureExtractor

decoder
- MLPDecoder
- LSTMDecoder
- TransformerDecoder
- TransformerCNNDecoder
- UNet1DDecoder

On top of these, the following models can be selected; a rough sketch of how the pieces fit together follows this list.

- Spec2DCNN: Segmentation through a UNet.
- Spec1D: Segmentation without a UNet.
- DETR2DCNN: Uses a UNet to detect sleep as in DETR. This model is still under development.
- CenterNet: Detects onset and offset separately, like CenterNet, using a UNet.
- TransformerAutoModel:
  - Segmentation using Hugging Face's AutoModel; the feature_extractor and decoder are not used.
  - Since the internet is not available during inference, you need to create a config dataset and specify its path in model_name.
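For intuition, here is a hedged sketch of how a Spec2DCNN-style model chains the pieces; the class below is illustrative only, and the real modules in this repository differ in detail.

```python
import torch
from torch import nn


class TwoStageModelSketch(nn.Module):
    """Illustrative composition: feature_extractor -> UNet -> decoder."""

    def __init__(self, feature_extractor: nn.Module, unet: nn.Module, decoder: nn.Module) -> None:
        super().__init__()
        self.feature_extractor = feature_extractor  # raw series -> spectrogram-like image
        self.unet = unet                            # 2D segmentation backbone
        self.decoder = decoder                      # features -> per-step event logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features, n_timesteps), e.g. anglez/enmo channels
        img = self.feature_extractor(x)         # (batch, channels, height, n_frames)
        img = self.unet(img)                    # same layout after 2D segmentation
        logits = self.decoder(img.mean(dim=2))  # collapse height, predict events per frame
        return logits
```

In the repository these pieces are selected through the model, feature_extractor, and decoder Hydra config groups, as in the CenterNet command at the end of this README.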
The correspondence table between each model and dataset is as follows.
| model | dataset |
| --- | --- |
| Spec1D | seg |
| Spec2DCNN | seg |
| DETR2DCNN | detr |
| CenterNet | centernet |
| TransformerAutoModel | seg |
The command to train CenterNet with feature_extractor=CNNSpectrogram and decoder=UNet1DDecoder is as follows:

```sh
rye run python run/train.py model=CenterNet dataset=centernet feature_extractor=CNNSpectrogram decoder=UNet1DDecoder
```