feat(tf): improve the activation setting in tebd #3971
Conversation
Walkthrough: The updates introduce logging to warn when the linear activation function is combined with multiple neurons in a neural network configuration, since stacked linear layers are equivalent to a single linear layer. The changes also add explicit activation-function settings to several example JSON configuration files, improving their customization and control.
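The motivation for the new log message can be illustrated with a small NumPy sketch (not part of the PR itself): composing two layers that use the "linear" activation collapses algebraically into one linear layer, so configuring multiple neurons with linear activation adds no expressive power.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked layers with "linear" (identity) activation.
W1 = rng.standard_normal((4, 8))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal((8, 8))
b2 = rng.standard_normal(8)

x = rng.standard_normal((5, 4))

# Forward pass through the two linear layers.
h = x @ W1 + b1
y_two_layers = h @ W2 + b2

# The same map as a single linear layer: W = W1 @ W2, b = b1 @ W2 + b2.
W = W1 @ W2
b = b1 @ W2 + b2
y_one_layer = x @ W + b

# The two computations agree to floating-point precision.
assert np.allclose(y_two_layers, y_one_layer)
```

This is why a warning (rather than an error) is appropriate: the configuration is valid, just redundant.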
Codecov Report. Attention: Patch coverage is

```
@@             Coverage Diff             @@
##             devel    #3971      +/- ##
==========================================
- Coverage    82.84%   82.82%    -0.02%
==========================================
  Files          522      522
  Lines        50866    50870        +4
  Branches      3015     3015
==========================================
- Hits         42138    42135        -3
- Misses        7794     7795        +1
- Partials       934      940        +6
```

☔ View full report in Codecov by Sentry.
We also need to improve
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit

- **New Features**
  - Added logging to notify users when multiple neurons are used with a "linear" activation function, which is equivalent to a single linear layer.
- **Configuration Updates**
  - Introduced `"activation_function": "linear"` in various neural network configuration files (`input.json`).
  - Added `"activation_function": "tanh"` in the `input.json` file for the water model configuration.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
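For context, the setting mentioned above lives in the network sections of `input.json`. The fragment below is a hypothetical sketch of where such a key would appear (the exact surrounding keys and values are assumptions for illustration, not copied from the PR):

```json
{
  "model": {
    "fitting_net": {
      "neuron": [240, 240, 240],
      "activation_function": "tanh"
    }
  }
}
```

With the change in this PR, setting `"activation_function": "linear"` alongside a multi-element `"neuron"` list would trigger the new log message noting that the stack reduces to a single linear layer.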