
Training error on demo data #61

Open
manojneuro opened this issue Aug 3, 2023 · 3 comments

@manojneuro

Hi, I get the following error from Keras while running training on the demo data. Please let me know if the code needs to be modified or if I need to downgrade to a lower version of Keras.

Traceback (most recent call last):
  File "/mnt/home/mkumar/miniconda3/envs/sleep/bin/ut", line 8, in <module>
    sys.exit(entry_func())
             ^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/utime/bin/ut.py", line 103, in entry_func
    mod.entry_func(script_args + help_agrs)
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/utime/bin/train.py", line 305, in entry_func
    run(args=args)
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/utime/bin/train.py", line 272, in run
    trainer.compile_model(n_classes=hparams["build"].get("n_classes"),
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/utime/train/trainer.py", line 59, in compile_model
    optimizer = init_optimizer(optimizer, **optimizer_kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/utime/train/utils.py", line 176, in init_optimizer
    return optimizer[0](**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/keras/optimizers/adam.py", line 104, in __init__
    super().__init__(
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/keras/optimizers/optimizer.py", line 1087, in __init__
    super().__init__(
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/keras/optimizers/optimizer.py", line 105, in __init__
    self._process_kwargs(kwargs)
  File "/mnt/home/mkumar/miniconda3/envs/sleep/lib/python3.11/site-packages/keras/optimizers/optimizer.py", line 134, in _process_kwargs
    raise ValueError(
ValueError: decay is deprecated in the new Keras optimizer, pleasecheck the docstring for valid arguments, or use the legacy optimizer, e.g., tf.keras.optimizers.legacy.Adam.
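The traceback shows `utime.train.utils.init_optimizer` forwarding a `decay` keyword that optimizers in newer Keras releases reject. Besides pinning an older TensorFlow/Keras, one possible local workaround is to filter out the deprecated keys before the optimizer is constructed. This is a minimal sketch, not utime's actual API: the function name and the set of filtered keys are assumptions for illustration.

```python
# Hypothetical helper: drop kwargs that the reworked Keras optimizers
# (Keras >= 2.11) no longer accept. 'decay' was removed and 'lr' was
# renamed to 'learning_rate'; the set below is an assumption, not an
# exhaustive list from the Keras changelog.
DEPRECATED_OPTIMIZER_KWARGS = {"decay", "lr"}

def filter_optimizer_kwargs(kwargs):
    """Return a copy of kwargs without keys the new Keras optimizers reject."""
    return {k: v for k, v in kwargs.items() if k not in DEPRECATED_OPTIMIZER_KWARGS}
```

Such a filter could be applied to `optimizer_kwargs` just before the `optimizer[0](**kwargs)` call in `utime/train/utils.py`; alternatively, Keras itself suggests falling back to `tf.keras.optimizers.legacy.Adam`, which still accepts `decay`.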
@manojneuro

manojneuro commented Aug 3, 2023

I also tried reinstalling the latest version of the package (version 1.1.8) and running the same code. Now I get a numpy error:

ut train --num_gpus=0 --preprocessed --overwrite --seed 123

2023/08/03 13:20:03 | INFO | Entry script args dump: {'script': 'train', 'project_dir': './', 'log_dir': 'logs', 'log_level': 'INFO', 'seed': 123}

2023/08/03 13:20:03 | INFO | Project directory set: /mnt/home/mkumar/code/demo (initialized project: True)

2023/08/03 13:20:09 | INFO | Seeding TensorFlow, numpy and random modules with seed: 123
/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/tensorflow_addons/utils/tfa_eol_msg.py:23: UserWarning: 

TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP). 

For more information see: https://github.com/tensorflow/addons/issues/2807 

  warnings.warn(

2023/08/03 13:20:12 | INFO | Args dump: {'num_gpus': 0, 'force_gpus': '', 'continue_training': False, 'initialize_from': None, 'overwrite': True, 'log_file': 'training_log', 'datasets': None, 'just': None, 'no_val': False, 'max_val_studies_per_dataset': 20, 'max_train_samples_per_epoch': 500000.0, 'n_epochs': None, 'channels': None, 'train_queue_type': 'eager', 'val_queue_type': 'lazy', 'max_loaded_per_dataset': 40, 'num_access_before_reload': 32, 'preprocessed': True, 'final_weights_file_name': 'model_weights.h5', 'train_on_val': False}
Parent directory of the 'utime' package is not a git repository or Git is not installed. Git information will not be added to this hyperparameter file.

2023/08/03 13:20:12 | INFO | SingleH5Dataset(identifier=processed_data, path=/mnt/home/mkumar/code/demo/data/processed_data.h5)

2023/08/03 13:20:12 | INFO | [Dataset: sedf_sc/TRAIN] H5Dataset(identifier=sedf_sc/TRAIN, members=2, loaded=2)

2023/08/03 13:20:12 | INFO | [Dataset: sedf_sc/VAL] H5Dataset(identifier=sedf_sc/VAL, members=2, loaded=2)

2023/08/03 13:20:12 | INFO | [Dataset: sedf_sc/TRAIN] Setting access-time random channel selector: RandomChannelSelector(group_0: ['EEG_FPZ-CZ', 'EEG_PZ-OZ'], group_1: ['EOG_HORIZONTAL-None'])

2023/08/03 13:20:12 | INFO | [Dataset: sedf_sc/VAL] Setting access-time random channel selector: RandomChannelSelector(group_0: ['EEG_FPZ-CZ', 'EEG_PZ-OZ'], group_1: ['EOG_HORIZONTAL-None'])

2023/08/03 13:20:12 | INFO | [Dataset: dcsm/TRAIN] H5Dataset(identifier=dcsm/TRAIN, members=3, loaded=3)

2023/08/03 13:20:12 | INFO | [Dataset: dcsm/VAL] H5Dataset(identifier=dcsm/VAL, members=1, loaded=1)

2023/08/03 13:20:12 | INFO | [Dataset: dcsm/TRAIN] Setting access-time random channel selector: RandomChannelSelector(group_0: ['F3-M2', 'F4-M1', 'C3-M2', 'C4-M1', 'O1-M2', 'O2-M1'], group_1: ['E1-M2', 'E2-M2'])

2023/08/03 13:20:12 | INFO | [Dataset: dcsm/VAL] Setting access-time random channel selector: RandomChannelSelector(group_0: ['F3-M2', 'F4-M1', 'C3-M2', 'C4-M1', 'O1-M2', 'O2-M1'], group_1: ['E1-M2', 'E2-M2'])

2023/08/03 13:20:12 | INFO | Using data queue type: EagerQueue

2023/08/03 13:20:12 | INFO | Using data queue type: EagerQueue

2023/08/03 13:20:12 | INFO | Inferred DPE: 3840, n_channels=2 for dataset queue '<psg_utils.dataset.queue.eager_queue.EagerQueue object at 0x15541d317d50>'

2023/08/03 13:20:12 | INFO | Creating sequence class '<class 'utime.sequences.balanced_random_batch_sequence.BalancedRandomBatchSequence'>'

2023/08/03 13:20:12 | INFO | Setting augmenter: RegionalErase(ordereddict([('min_region_fraction', 0.001), ('max_region_fraction', 0.33), ('log_sample', True), ('apply_prob', 0.1)]))

2023/08/03 13:20:12 | INFO | Setting augmenter: ChannelDropout(ordereddict([('drop_fraction', 0.5), ('apply_prob', 0.1)]))

2023/08/03 13:20:12 | INFO | 
[*] BalancedRandomBatchSequence initialized (sedf_sc/TRAIN):
    Data queue type: <class 'psg_utils.dataset.queue.eager_queue.EagerQueue'>
    Batch shape:     [64, 35, 3840, 2]
    Sample prob.:    [0.2, 0.2, 0.2, 0.2, 0.2]
    N pairs:         2
    Margin:          17
    Augmenters:      [<RegionalErase>, <ChannelDropout>]
    Aug enabled:     True
    Batch scaling:   False
    All loaded:      True
    N classes:       5

2023/08/03 13:20:12 | INFO | Inferred DPE: 3840, n_channels=2 for dataset queue '<psg_utils.dataset.queue.eager_queue.EagerQueue object at 0x15541d31b050>'

2023/08/03 13:20:12 | INFO | Creating sequence class '<class 'utime.sequences.balanced_random_batch_sequence.BalancedRandomBatchSequence'>'

2023/08/03 13:20:12 | INFO | Setting augmenter: RegionalErase(ordereddict([('min_region_fraction', 0.001), ('max_region_fraction', 0.33), ('log_sample', True), ('apply_prob', 0.1)]))

2023/08/03 13:20:12 | INFO | Setting augmenter: ChannelDropout(ordereddict([('drop_fraction', 0.5), ('apply_prob', 0.1)]))

2023/08/03 13:20:12 | INFO | 
[*] BalancedRandomBatchSequence initialized (dcsm/TRAIN):
    Data queue type: <class 'psg_utils.dataset.queue.eager_queue.EagerQueue'>
    Batch shape:     [64, 35, 3840, 2]
    Sample prob.:    [0.2, 0.2, 0.2, 0.2, 0.2]
    N pairs:         3
    Margin:          17
    Augmenters:      [<RegionalErase>, <ChannelDropout>]
    Aug enabled:     True
    Batch scaling:   False
    All loaded:      True
    N classes:       5

2023/08/03 13:20:12 | INFO | Inferred DPE: 3840, n_channels=2 for dataset queue '<psg_utils.dataset.queue.eager_queue.EagerQueue object at 0x15541d319bd0>'

2023/08/03 13:20:12 | INFO | Creating sequence class '<class 'utime.sequences.balanced_random_batch_sequence.BalancedRandomBatchSequence'>'

2023/08/03 13:20:12 | INFO | 
[*] BalancedRandomBatchSequence initialized (sedf_sc/VAL):
    Data queue type: <class 'psg_utils.dataset.queue.eager_queue.EagerQueue'>
    Batch shape:     [64, 35, 3840, 2]
    Sample prob.:    [0.2, 0.2, 0.2, 0.2, 0.2]
    N pairs:         2
    Margin:          17
    Augmenters:      []
    Aug enabled:     False
    Batch scaling:   False
    All loaded:      True
    N classes:       5

2023/08/03 13:20:12 | INFO | Inferred DPE: 3840, n_channels=2 for dataset queue '<psg_utils.dataset.queue.eager_queue.EagerQueue object at 0x15541d31af90>'

2023/08/03 13:20:12 | INFO | Creating sequence class '<class 'utime.sequences.balanced_random_batch_sequence.BalancedRandomBatchSequence'>'

2023/08/03 13:20:12 | INFO | 
[*] BalancedRandomBatchSequence initialized (dcsm/VAL):
    Data queue type: <class 'psg_utils.dataset.queue.eager_queue.EagerQueue'>
    Batch shape:     [64, 35, 3840, 2]
    Sample prob.:    [0.2, 0.2, 0.2, 0.2, 0.2]
    N pairs:         1
    Margin:          17
    Augmenters:      []
    Aug enabled:     False
    Batch scaling:   False
    All loaded:      True
    N classes:       5

2023/08/03 13:20:12 | INFO | 
[*] MultiSequence initialized:
    --- Contains 2 sequences
    --- Sequence IDs: sedf_sc/TRAIN, dcsm/TRAIN
    --- Sequence sample probs (alpha=0.5): [0.45 0.55]
    --- Batch shape: [64, 35, 3840, 2]

2023/08/03 13:20:12 | INFO | 
[*] ValidationMultiSequence initialized:
    --- Contains 2 sequences
    --- Sequence IDs: sedf_sc, dcsm

2023/08/03 13:20:12 | INFO | Setting CUDA_VISIBLE_DEVICES = ''
2023-08-03 13:20:13.170026: E tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:268] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected

2023/08/03 13:20:13 | INFO | Using TF distribution strategy: <tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x15541d948d10> on GPUs: []. (CPU:0 if empty).

2023/08/03 13:20:13 | INFO | Creating new model of type 'USleep'

2023/08/03 13:20:13 | INFO | Found requested class 'elu' in module '<module 'keras.api._v2.keras.activations' from '/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/keras/api/_v2/keras/activations/__init__.py'>'
Traceback (most recent call last):
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/utils/conv_arithmetics.py", line 85, in compute_receptive_fields
    dilation = np.array(layer.dilation_rate).astype(np.int)
                                                    ^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/numpy/__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'inf'?

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/bin/ut", line 8, in <module>
    sys.exit(entry_func())
             ^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/bin/ut.py", line 103, in entry_func
    mod.entry_func(script_args + help_agrs)
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/bin/train.py", line 305, in entry_func
    run(args=args)
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/bin/train.py", line 266, in run
    model = init_model(hparams["build"], clear_previous=False)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/models/model_init.py", line 37, in init_model
    return models.__dict__[cls_name](**build_hparams)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/models/usleep.py", line 196, in __init__
    self.receptive_field = compute_receptive_fields(self.layers[:ind])[-1][-1]
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/utime/utils/conv_arithmetics.py", line 87, in compute_receptive_fields
    dilation = np.ones(shape=[dim], dtype=np.int)
                                          ^^^^^^
  File "/mnt/home/mkumar/miniconda3/envs/u-sleep/lib/python3.11/site-packages/numpy/__init__.py", line 305, in __getattr__
    raise AttributeError(__former_attrs__[attr])
AttributeError: module 'numpy' has no attribute 'int'.
`np.int` was a deprecated alias for the builtin `int`. To avoid this error in existing code, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations. Did you mean: 'inf'?
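Both failing call sites in `utime/utils/conv_arithmetics.py` use the `np.int` alias, which NumPy deprecated in 1.20 and removed in 1.24. A sketch of what the patched lines could look like, using the builtin `int` as the error message suggests (the `dilation_rate` value here is a made-up example, not taken from the model):

```python
import numpy as np

# Hypothetical stand-in for layer.dilation_rate on a conv layer.
dilation_rate = (2, 2)

# Was: np.array(layer.dilation_rate).astype(np.int)
dilation = np.array(dilation_rate).astype(int)

# Was: np.ones(shape=[dim], dtype=np.int)
dim = len(dilation_rate)
default_dilation = np.ones(shape=[dim], dtype=int)
```

Patching the installed package is only a stopgap; pinning `numpy<1.24` (or using a utime release that has applied this fix) avoids editing site-packages by hand.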

@saraheldah

Hi @manojneuro, did you resolve the numpy error? I'm facing it as well.

@happybegining

Hi @manojneuro,
I had the same problem as you. When I downgraded TensorFlow to version 2.9.3, it worked. I hope it helps you.
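A sketch of an environment pin consistent with this suggestion. Only `tensorflow==2.9.3` comes from the comment above; the NumPy bound is an assumption based on the `np.int` removal in NumPy 1.24, and exact compatible versions may vary by platform.

```shell
# Older TensorFlow bundles a Keras whose Adam still accepts 'decay',
# and NumPy < 1.24 still provides the deprecated np.int alias.
pip install "tensorflow==2.9.3" "numpy<1.24"
```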
