Validate network predictions #73

Open · wants to merge 8 commits into base: main
Conversation

MihailBogojeski (Collaborator)
Included a notebook and Python script for evaluating how model predictions are distributed per residue. The purpose is to see whether the model tends to predict similar affinity values for the same residue type regardless of the neighboring environment.

The residue-wise affinity predictions are collected for all samples in each dataset, and the per-residue means and standard deviations are printed for each dataset. If the means are similar across datasets and the standard deviations are very low, the model is effectively predicting the same value for a residue type regardless of its environment.
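
For reference, a minimal sketch of what the per-residue statistics could look like, assuming each dataset is available as (residue_type, predicted_affinity) pairs; the `datasets` dict, its values, and the helper name are purely illustrative, not the actual notebook code:

```python
from collections import defaultdict
import numpy as np

def residue_prediction_stats(predictions):
    """Group predicted affinities by residue type and return (mean, std) per type.

    predictions: iterable of (residue_type, predicted_affinity) pairs.
    """
    per_residue = defaultdict(list)
    for residue_type, affinity in predictions:
        per_residue[residue_type].append(affinity)
    return {res: (float(np.mean(v)), float(np.std(v))) for res, v in per_residue.items()}

# Purely illustrative data; the real script collects the model's predictions per dataset.
datasets = {
    "train": [("ALA", 0.52), ("ALA", 0.55), ("GLY", 0.31), ("GLY", 0.29)],
    "test":  [("ALA", 0.54), ("ALA", 0.51), ("GLY", 0.30), ("GLY", 0.33)],
}

for name, preds in datasets.items():
    print(f"Dataset: {name}")
    for res, (mean, std) in sorted(residue_prediction_stats(preds).items()):
        print(f"  {res}: mean={mean:.3f}, std={std:.3f}")
```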

An additional analysis is provided that correlates the model RMSE across different datasets with the deviation from the training mean. This analysis is intended to show whether the model does better than the mean predictor, and whether the error depends on how far a dataset deviates from the training mean.
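
Similarly, a hedged sketch of that second analysis, assuming true and predicted affinities are available per dataset; `results`, `train_mean`, and the example numbers are hypothetical:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between true and predicted affinities."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical per-dataset arrays of true and predicted affinities.
results = {
    "val":   (np.array([0.40, 0.60, 0.50]), np.array([0.45, 0.55, 0.52])),
    "test1": (np.array([0.20, 0.90, 0.70]), np.array([0.35, 0.70, 0.60])),
    "test2": (np.array([0.55, 0.45, 0.50]), np.array([0.50, 0.48, 0.51])),
}
train_mean = 0.50  # mean affinity of the training set (assumed known)

deviations, model_errors = [], []
for name, (y_true, y_pred) in results.items():
    model_rmse = rmse(y_true, y_pred)
    # Baseline: always predict the training-set mean.
    mean_rmse = rmse(y_true, np.full_like(y_true, train_mean))
    deviation = abs(float(np.mean(y_true)) - train_mean)
    deviations.append(deviation)
    model_errors.append(model_rmse)
    print(f"{name}: model RMSE={model_rmse:.3f}, mean-predictor RMSE={mean_rmse:.3f}, "
          f"deviation from training mean={deviation:.3f}")

# Correlation between a dataset's deviation from the training mean and the model error.
r = np.corrcoef(deviations, model_errors)[0, 1]
print(f"Pearson r (deviation vs. model RMSE): {r:.3f}")
```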

@moritzschaefer (Collaborator)
Let's leave this PR open so we can share/discuss code here on Marco's next/new model?
