Consider the following notebook, in which I set up an experiment.
https://gist.github.com/jlewi/49a6dc0598c33b29c98aabc0cdb16029
There's a sequence of cells to modify the experiment. The sequence consists of the following:

1. A markup cell describing the changes to make
2. yq commands to modify the experiment.yaml
3. A markup cell saying to verify the change
4. Commands to verify the output
The first time I ran this I got it wrong: in the markup cell I said to change "resultsDB", but when I got to the verify step I realized it should be "outputDB". So I went back and edited both the markup cell and the code cell.

Now we have two executions of the code cell that modifies the experiment file.
The first one runs the incorrect command (targeting "resultsDB") and the second runs the corrected one (targeting "outputDB").
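The notebook's actual commands use yq against experiment.yaml and aren't reproduced in this issue. As a rough sketch of why the first execution is a bad training example, here is the same situation with a plain dict standing in for the YAML file (the schema, the starting values, and every field name other than "resultsDB"/"outputDB" are assumptions for illustration):

```python
# Hypothetical sketch: a dict stands in for experiment.yaml, and set_field
# mimics `yq '.field = value'`, which assigns a key whether or not it exists.
VALID_FIELDS = {"name", "image", "outputDB"}  # assumed experiment schema

def set_field(config, field, value):
    """Apply an unconditional key assignment, as yq would."""
    updated = dict(config)
    updated[field] = value
    return updated

def verify(config):
    """The verify step: report any fields the schema doesn't define."""
    return [key for key in config if key not in VALID_FIELDS]

config = {"name": "exp1", "image": "gcr.io/foo/bar"}  # assumed starting file
first = set_field(config, "resultsDB", "runs.db")   # first execution: wrong field
second = set_field(config, "outputDB", "runs.db")   # second execution: corrected

print(verify(first))    # -> ['resultsDB']
print(verify(second))   # -> []
```

The point is that the edit itself succeeds either way; nothing stops yq from creating a "resultsDB" key that no schema defines, so only the later verify step exposes the mistake, after the first code cell has already executed and been logged.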
We don't want to learn from the first one because it's wrong; there is no field "resultsDB". Do we correctly filter that out in our learning process?
I think right now we would only filter it out if the user goes back and corrects the code cell. However, if the user corrects the markup cell and then lets Foyle generate a new code cell, we would end up treating it as a new example to learn from.
One potential solution: keep track of a parent-child relationship in cell metadata, so that a markup cell keeps track of its child code cells. Then, in sessions, we could filter out all examples that weren't generated from the final version of the markup cell.
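A minimal sketch of that idea, assuming a metadata layout in which each saved version of a markup cell records the IDs of the code cells generated from it (all class and field names here are hypothetical, not Foyle's actual schema):

```python
# Hypothetical metadata model: each markup-cell version lists its child code
# cells. Filtering keeps only children of the final version, dropping
# examples that were generated from superseded instructions.
from dataclasses import dataclass, field

@dataclass
class MarkupVersion:
    cell_id: str
    text: str
    child_code_cell_ids: list = field(default_factory=list)

@dataclass
class CodeCell:
    cell_id: str
    parent_markup_id: str
    source: str

def learnable_examples(markup_versions, code_cells):
    """Keep only code cells generated from the final markup version."""
    final = markup_versions[-1]  # versions ordered oldest -> newest
    return [c for c in code_cells if c.cell_id in final.child_code_cell_ids]

versions = [
    MarkupVersion("m1", 'Change "resultsDB" in experiment.yaml', ["c1"]),
    MarkupVersion("m1", 'Change "outputDB" in experiment.yaml', ["c2"]),
]
cells = [
    CodeCell("c1", "m1", "yq edit targeting resultsDB"),  # from v1: discard
    CodeCell("c2", "m1", "yq edit targeting outputDB"),   # from v2: keep
]
print([c.cell_id for c in learnable_examples(versions, cells)])  # -> ['c2']
```

With this shape, the session-level filter never needs to diff cell text; it only needs each generated code cell linked back to the markup version that produced it.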