Releases: gsbDBI/bemb
v0.1.7
What's Changed
- 34 bug fix reshape observable method in bembpy by @TianyuDu in #36
- Other major changes were implemented in torch-choice v1.0.5; please see the change log here: https://github.com/gsbDBI/torch-choice/releases/tag/v1.0.5
Full Changelog: v0.1.6...v0.1.7
BEMB v0.1.6
What's Changed
Users can now specify the optimizer they want using the `model_optimizer` argument when initializing the model, as follows:

```python
bemb = LitBEMBFlex(... model_optimizer="Adam", ...)
bemb = LitBEMBFlex(... model_optimizer="LBFGS", ...)
```

The specified optimizer needs to be in `torch.optim`.
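Since the optimizer is passed by name, the string must match a class in `torch.optim`. A minimal sketch of such a lookup (`resolve_optimizer` is a hypothetical helper for illustration, not part of the bemb API):

```python
import torch

def resolve_optimizer(name: str):
    # Hypothetical helper: map an optimizer name such as "Adam" or "LBFGS"
    # to the corresponding class in torch.optim, raising if no such class exists.
    if not hasattr(torch.optim, name):
        raise ValueError(f"{name!r} is not an optimizer in torch.optim")
    return getattr(torch.optim, name)

adam_cls = resolve_optimizer("Adam")  # the torch.optim.Adam class
```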
We have developed a cleaner model estimation pipeline backed by PyTorch Lightning:
```python
from bemb import run
run(bemb, dataset_train=dataset_train, dataset_val=dataset_val, dataset_test=dataset_test, batch_size=len(dataset_train) // 20, num_epochs=1000, device="cuda")
```
Import `run` directly from the `bemb` package to use it.
- fix typo. by @TianyuDu in #20
- commits for supermarket and deterministic vi by @kanodiaayush in #23
- 30 add lbfgs support and cleaner pytorch lightning training loops by @TianyuDu in #32
New Contributors
- @kanodiaayush made their first contribution in #23
Full Changelog: v0.1.5...v0.1.6
BEMB v0.1.5
We have updated several features of our package.
What's Changed
- We allow for a partial `prior_variance` dictionary. For example, you can specify `prior_variance={'alpha': 0.3, 'default': 1.5}` to set the prior variance of the alpha coefficient to 0.3, while all other coefficients will have prior variance 1.5. …to have unit variance.
- For label predictions (`pred_item=False`), we now include more performance metrics.
- We have updated the utility formula parser mentioned in this issue.
- We have added a `posterior_distribution()` method to query the posterior/variational distribution of coefficients directly.
- We have fixed known issues on Sherlock mentioned in this issue.
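The partial-dictionary behavior above can be sketched in plain Python (`resolve_prior_variance` is a hypothetical illustration, not bemb's internal code; the fallback to 1.0 when no `'default'` key is given is an assumption):

```python
def resolve_prior_variance(prior_variance: dict, coef_names: list) -> dict:
    # Each coefficient uses its own entry if present; otherwise it falls
    # back to the 'default' entry (assumed here to be 1.0 when absent).
    default = prior_variance.get('default', 1.0)
    return {name: prior_variance.get(name, default)
            for name in coef_names}

print(resolve_prior_variance({'alpha': 0.3, 'default': 1.5},
                             ['alpha', 'beta', 'gamma']))
# → {'alpha': 0.3, 'beta': 1.5, 'gamma': 1.5}
```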
Full Changelog: v0.1.4...v0.1.5
BEMB release v0.1.4
- Update: the `forward()` method of the BEMB module has been updated for easier inference. The new `forward()` method can return both log-probabilities and utility values.
- For use cases predicting binary labels (`pred_item = False`), the returned log-probabilities are now always the predicted log-probability of the actual label. To get the predicted log-probabilities of the positive class for all observations, please compute the utility first and apply a log-sigmoid transformation to the utility values.
- We have also added a helper function `predict_proba()` that returns (1) if `pred_item=True`, the predicted probability of choosing each item among the items in its category, so the output shape is `(batch_size, num_items)`; (2) if `pred_item=False`, the predicted probabilities of both label = 0 and label = 1, so the output shape is `(batch_size, 2)`.
- Please refer to the Jupyter notebook for more details.
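To make the log-sigmoid step above concrete, here is a plain-Python sketch (not the bemb API) of recovering class probabilities from a single utility value when `pred_item=False`:

```python
import math

def log_sigmoid(u: float) -> float:
    # Numerically stable log(sigmoid(u)):
    # -log(1 + exp(-u)) for u >= 0, u - log(1 + exp(u)) otherwise.
    return -math.log1p(math.exp(-u)) if u >= 0 else u - math.log1p(math.exp(u))

u = 2.0                   # a single utility value
log_p1 = log_sigmoid(u)   # log P(label = 1)
log_p0 = log_sigmoid(-u)  # log P(label = 0)
# predict_proba would stack exp of these per observation,
# giving the (batch_size, 2) output shape described above.
print(math.exp(log_p0) + math.exp(log_p1))  # ≈ 1.0
```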