Complexity property test #74

Closed
Ohjeah opened this issue May 1, 2020 · 3 comments
@Ohjeah
Collaborator

Ohjeah commented May 1, 2020

Sorry, I didn't see your comment before making the most recent commit. I relaxed the complexity test constraint a bit more.

I was able to reproduce the failed test locally by using the same parameters: n_samples=1124, n_features=27, n_informative=5, random_state=6. It looks like LASSO somehow selects only 4 features, while the other heavily regularized models select 6. This seems like a bit of a fluke; for different random states this doesn't occur.

Maybe we could remove LASSO from this test? Or we could stick with the quick fix I made.

In any case, the bug isn't caused by this PR, so I think it's okay to accept it.

Originally posted by @briandesilva in #71 (comment)
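For reference, a minimal, hypothetical reproduction sketch. It assumes the dataset comes from scikit-learn's `make_regression` (the parameter names match that function) and that "complexity" means the number of nonzero coefficients; it is not the actual test code, and the models shown are stand-ins for the optimizers the test compares.

```python
# Hypothetical reproduction sketch, not the pysindy test itself.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet

# Same dataset parameters as the failing test run.
x, y = make_regression(
    n_samples=1124, n_features=27, n_informative=5, random_state=6
)

for model in (Lasso(alpha=1.0), ElasticNet(alpha=1.0)):
    coef = model.fit(x, y).coef_
    # "Complexity" here: number of coefficients that survive regularization.
    print(type(model).__name__, "complexity:", np.count_nonzero(np.abs(coef) > 1e-10))
```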

@Ohjeah
Collaborator Author

Ohjeah commented May 1, 2020

Let's discuss it in a new issue.

Maybe the default values for the regularization parameters are not comparable enough, or we should average over more datasets to account for the randomness.

@briandesilva
Member

I like the idea of averaging over multiple datasets. So the idea would be to have the test loop over some nearby seed values, compute the average complexity for each optimizer across the resulting datasets, and then do the comparison?
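A minimal sketch of that averaging idea (a hypothetical helper, not the code in PR #75; it again assumes `make_regression` datasets and nonzero-coefficient counts as the complexity measure):

```python
# Sketch: average each model's complexity over datasets from nearby seeds,
# then compare the averages instead of a single-seed result.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, ElasticNet

def average_complexity(make_model, base_seed=6, n_seeds=5, **data_kwargs):
    """Average the number of nonzero coefficients over several nearby seeds."""
    complexities = []
    for seed in range(base_seed, base_seed + n_seeds):
        x, y = make_regression(random_state=seed, **data_kwargs)
        coef = make_model().fit(x, y).coef_
        complexities.append(np.count_nonzero(np.abs(coef) > 1e-10))
    return float(np.mean(complexities))

# Compare averaged complexities rather than a single dataset's result.
kwargs = dict(n_samples=1124, n_features=27, n_informative=5)
print("Lasso:", average_complexity(Lasso, **kwargs))
print("ElasticNet:", average_complexity(ElasticNet, **kwargs))
```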

@briandesilva
Member

Okay, I created a quick PR #75 implementing the averaging idea. Let me know what you think.

Ohjeah closed this as completed May 3, 2020
jpcurbelo pushed a commit to jpcurbelo/pysindy_fork that referenced this issue May 9, 2024
This commit unifies the way the different models preprocess the static and dynamic input data.

Closes dynamicslab#57 and dynamicslab#74, part of dynamicslab#64.

Similar to the model head, there is now an InputLayer that all (single-frequency) models use. Depending on the config, this layer will (see the sketch after the list of key changes below):
- pass the static variables through an embedding layer (if cfg.statics_embedding is True)
- pass the dynamic variables through an embedding layer (if cfg.dynamics_embedding is True)
- concatenate the static inputs to each step of the dynamic inputs (unless this is deactivated, e.g. for EA-LSTM this is not wanted)

Key changes:
- New config arguments statics_embedding, dynamics_embedding (both bool, default False).
- EmbCudaLSTM is no longer needed since the functionality is now part of CudaLSTM. For backwards compatibility, the class still exists but defers to CudaLSTM.
- If EA-LSTM and static embeddings are used, there will now be a linear layer between the embedding output and the LSTM input to make sure the dimensions match. Previously, this linear layer would be left out if static embeddings were used.
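For illustration, a minimal PyTorch sketch of the InputLayer behavior described above. Class name, argument names, shapes, and defaults are assumptions for this sketch, not the implementation referenced by the commit.

```python
# Illustrative sketch only, not the referenced implementation.
import torch
from torch import nn

class InputLayer(nn.Module):
    def __init__(self, n_dynamics, n_statics, statics_embedding=False,
                 dynamics_embedding=False, embedding_dim=16, concatenate_statics=True):
        super().__init__()
        self.concatenate_statics = concatenate_statics
        # Optional embedding of the static inputs (cf. cfg.statics_embedding).
        self.statics_net = (
            nn.Linear(n_statics, embedding_dim) if statics_embedding else nn.Identity()
        )
        # Optional embedding of the dynamic inputs (cf. cfg.dynamics_embedding).
        self.dynamics_net = (
            nn.Linear(n_dynamics, embedding_dim) if dynamics_embedding else nn.Identity()
        )

    def forward(self, x_d, x_s):
        # x_d: [batch, seq_len, n_dynamics]; x_s: [batch, n_statics]
        x_d = self.dynamics_net(x_d)
        x_s = self.statics_net(x_s)
        if self.concatenate_statics:
            # Repeat the statics along the time axis and append them to each step.
            x_s = x_s.unsqueeze(1).expand(-1, x_d.shape[1], -1)
            return torch.cat([x_d, x_s], dim=-1)
        # Otherwise (e.g. EA-LSTM) return statics separately for the model to consume.
        return x_d, x_s
```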