
Refactor(NeuralNetwork): remove workers, batch_queue_size, multiprocessing parameters #227

Open

wants to merge 13 commits into development
Conversation

@rizoudal commented Aug 12, 2024

This pull request removes the parameters workers, batch_queue_size, and multiprocessing from the neural network class and all related files. These parameters are no longer supported in the latest versions of TensorFlow and Keras, and their removal resolves compatibility issues.
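To illustrate the change, here is a minimal sketch (using a toy model, not the project's actual NeuralNetwork class): in Keras 3 / TensorFlow 2.16+, `Model.fit()` no longer accepts the `workers`, `use_multiprocessing`, or `max_queue_size` arguments, so a training call is reduced to the remaining supported parameters.

```python
import numpy as np
from tensorflow import keras

# Toy model standing in for the wrapped architecture.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")  # labels cast to float32

# Previously: model.fit(..., workers=8, use_multiprocessing=True, max_queue_size=10)
# Those keyword arguments now raise a TypeError in Keras 3, so they are dropped:
history = model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```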

Additionally, this pull request refactors the code to replace the usage of the .hdf5 file format with the .keras format across the project.
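A hedged sketch of the migration (file names here are illustrative, not the project's actual paths): the native `.keras` format is now the recommended way to save whole models, replacing the legacy HDF5 files.

```python
from tensorflow import keras

# Toy model to demonstrate the save/load round trip.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

# Old: model.save("model.hdf5") / keras.models.load_model("model.hdf5")
# New: save in the native Keras format and reload it.
model.save("model.keras")
restored = keras.models.load_model("model.keras")
```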

Please review the changes and let me know if there are any questions or further adjustments needed.

rizoudal added 6 commits August 9, 2024 15:47
- Added F1Score to AutoML training block
- Cast labels to float32 to prevent type errors
- Removed workers parameter in NeuralNetwork class.
- Updated related documentation and files to reflect this change.
- Removed multiprocessing parameter from NeuralNetwork class.
- Updated related documentation and files to reflect this change.
- Removed batch_queue_size parameter from NeuralNetwork class.
- Updated related documentation and files to reflect this change.
@muellerdo (Member) commented:
  1. Classifier buggy? -> `input_stack = [input_stack, self.metadata[index_array]]` must be a tuple `(input_stack, self.metadata[index_array])` at line 304 of the DataGenerator.
  2. albumentations: `ai.pad(aug_image, org_shape[0], org_shape[1])` in image_augmentation should use `value=0, pad_mode="edge"`.
  3. XAI problem: GradCAM & co. (gradient-based) need to find the last conv layer -> the class variable `output_shape` no longer exists -> how can we find the last convolutional layer in an arbitrary architecture? Is there an alternative to the `output_shape` class variable to get the shape of a tensor?
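Regarding point 3, one possible approach (a sketch, not a decision for this PR): walk `model.layers` in reverse and match on the layer type instead of relying on an `output_shape` attribute; the tensor shape can then be read from the layer's symbolic output. The helper name `find_last_conv_layer` is hypothetical.

```python
from tensorflow import keras

def find_last_conv_layer(model):
    """Return the last convolutional layer of an arbitrary architecture."""
    for layer in reversed(model.layers):
        if isinstance(layer, (keras.layers.Conv1D,
                              keras.layers.Conv2D,
                              keras.layers.Conv3D)):
            return layer
    raise ValueError("No convolutional layer found in model.")

# Toy architecture to exercise the helper.
model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(8, 3, padding="same"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(16, 3, padding="same", name="last_conv"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2),
])

last_conv = find_last_conv_layer(model)
print(last_conv.name)          # -> "last_conv"
print(last_conv.output.shape)  # shape of the layer's symbolic output tensor
```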
