Merging vision into main #4800
Commits on Jul 16, 2020
An initial VilBERT model for NLVR2 (#4423)
- Some initial work; lots left to do
- Initial test mostly passing, though things are still a bit of a mess
- tests are passing with small fixtures
- remove prints
- Test more stuff
- PathLike
- Make vilbert pass tests
- PR comments
- call float before log
- add CI
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Commit 0bbe84b

Commit f87df83
Commits on Jul 20, 2020
Commit 6cc508d
Commits on Jul 24, 2020
Commit 3137961
Commits on Jul 27, 2020
Commit 71d7cb4
Commits on Aug 3, 2020
Commit 3833f7a
Commits on Aug 4, 2020
Initializing a VilBERT model from a pre-trained transformer (#4495)
- saving state
- Code is running, though it is returning zero gradients (but not None)
- initial test passing, still working on albert
- albert works, but bert-base-uncased still gives zero gradients
- Loading of weights should now work
- black, flake, mypy
- remove drop and mask functionality from reader
- make comment better
- fix tests
- flake
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Commit a7d45de
Commits on Aug 18, 2020
Commit 6f82005
Commits on Aug 20, 2020
- first implementation
- update docstrings
- fixes
- fix sharding logic
- clean up DatasetReader
- fix samplers
- fixes
- fixes
- patch models for now
- more fixes
- fix linting error
- fix model test case
- some fixes
- fix linting err
- updates
- rename dataloader -> data_loader
- fixes
- more JoinableQueue
- set daemon=True
- fixes
- fix
- fixes
- fix
- update shuffle logic
- load instances right away when not lazy
- add tqdm when num_workers <= 0
- apply_token_indexers
- fix bug causing high mem usage
- address some of @dirkgr's comments
- fix lazy
- use sensible default for max_batches_in_mem
- ensure workers terminated on err
- fix
- start adding some tests
- more tests
- add some more tests
- address most of Matt's comments
- update PyTorchDataLoader test
- get rid of lazy option
- fix linting
- update docs, change max_batches_per_epoch to max_instances_per_epoch
- update CHANGELOG
- fix drop_last validation
- fix py2md test fixture
- handle drop_last
- update docs
- implement sharding for most readers
- fix worker init fn
- limit tqdm output
- fixes
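The multi-worker loading pattern this commit series describes (per-worker sharding, a shared queue, daemon workers, sentinel-based shutdown) can be sketched as follows. This is an illustrative stand-in, not AllenNLP's actual data loader: the real loader uses `multiprocessing.JoinableQueue` across processes, while this portable sketch uses threads, and `sample_reader` is a hypothetical reader.

```python
import queue
import threading

def _worker(reader, out_queue, worker_id: int, num_workers: int) -> None:
    # Sharding: worker k keeps instances k, k + num_workers, k + 2*num_workers, ...
    # so each instance is produced by exactly one worker.
    for i, instance in enumerate(reader()):
        if i % num_workers == worker_id:
            out_queue.put(instance)
    out_queue.put(None)  # sentinel: this worker is finished

def read_in_parallel(reader, num_workers: int = 2):
    out_queue: queue.Queue = queue.Queue()
    for worker_id in range(num_workers):
        # daemon=True so stuck workers don't keep the process alive after an error
        threading.Thread(
            target=_worker, args=(reader, out_queue, worker_id, num_workers), daemon=True
        ).start()
    finished = 0
    while finished < num_workers:
        item = out_queue.get()
        if item is None:
            finished += 1
        else:
            yield item

def sample_reader():
    # Hypothetical reader: yields ten toy "instances".
    return range(10)
```

Because the shards partition the instance stream, the merged output contains every instance exactly once, though interleaving order depends on worker timing.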
Commit e74a736

Commit 95e8253
Commits on Aug 24, 2020
ensure vision CI runs on each commit (#4582)
- ensure vision CI runs on each commit
- fix
- try fix CHANGELOG check
Commit 44c8791

Commit 1b08fd6
Commits on Aug 26, 2020
Commit cde06e6
Formatting updates for new version of black (#4607)
- reformat for new version of black (#4605)
- reformat for new version of black
- pin black
- reformat for black
- fix
Commit 3d11419
Commits on Aug 28, 2020
rename 'node_rank' to 'global_rank' in dataset reader 'DistributedInfo' (#4608)
- rename 'node_rank' to 'global_rank'
- Clarify doc comments
- fix line length
Commit de9165e
Commits on Sep 1, 2020
Commit 319794a
Commits on Sep 2, 2020
Merge branch 'master' into vision

# Conflicts:
#	allennlp/commands/train.py
#	tests/data/dataset_readers/dataset_reader_test.py
#	tests/data/samplers/bucket_batch_sampler_test.py
Commit 8746361
Commits on Sep 3, 2020
fix len calculation for new data loader (#4618)
- fix len calculation for new data loader
- add test
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
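The length a data loader reports is the number of batches it will yield, and the easy bug is mixing up floor and ceiling division around `drop_last`. A minimal sketch of the arithmetic, using a hypothetical helper rather than the actual AllenNLP code:

```python
import math

def num_batches(num_instances: int, batch_size: int, drop_last: bool = False) -> int:
    # With drop_last, a trailing partial batch is discarded (floor division);
    # otherwise the partial batch still counts as one batch (ceiling division).
    if drop_last:
        return num_instances // batch_size
    return math.ceil(num_instances / batch_size)
```

The two cases only differ when `num_instances` is not a multiple of `batch_size`.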
Commit d7124d4
Commits on Sep 11, 2020
make existing readers work with multi-process loading (#4597)
- make existing readers work with multi-process loading
- add 'overrides' decorator
- call apply_token_indexers in predictor
- clean up
- fix tests
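The `apply_token_indexers` pattern referenced here splits instance construction in two: worker processes build instances without token indexers (keeping them cheap to send across the process boundary), and the consuming process attaches the indexers just before indexing. A minimal sketch of the idea, using hypothetical stand-in classes rather than AllenNLP's real `TextField`/`Instance`:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TextField:
    tokens: List[str]
    token_indexers: Dict[str, object] = field(default_factory=dict)

@dataclass
class Instance:
    fields: Dict[str, TextField]

class Reader:
    def __init__(self, token_indexers: Dict[str, object]):
        self._token_indexers = token_indexers

    def read(self, sentences: List[str]):
        # Runs in worker processes: build instances WITHOUT indexers.
        for sentence in sentences:
            yield Instance({"text": TextField(sentence.split())})

    def apply_token_indexers(self, instance: Instance) -> None:
        # Runs in the main process, just before indexing/batching.
        instance.fields["text"].token_indexers = self._token_indexers

reader = Reader({"tokens": "single-id-indexer"})
instances = list(reader.read(["a b c"]))
for inst in instances:
    reader.apply_token_indexers(inst)
```

The same hook is what a predictor has to call, which is why "call apply_token_indexers in predictor" appears in the list above.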
Commit 191b641

Commit f886fd0
Commits on Sep 12, 2020
Commit 41872ae
Commits on Sep 14, 2020
Commit fa22f73
Commits on Sep 29, 2020
- Initial design of the multi-task model
- PR comments, more implementation
- changelog and docs fix
- More tests, and fixes for those tests
- mypy and make test less flaky
- Update allennlp/models/multitask.py
- Update allennlp/models/multitask.py (Co-authored-by: Dirk Groeneveld <groeneveld@gmail.com>)
- Update allennlp/models/multitask.py (Co-authored-by: James Barry <james.barry26@mail.dcu.ie>)
- respect active heads in get_metrics
- Clean up changelog
- black (apparently github UI doesn't add newlines?)
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Co-authored-by: Dirk Groeneveld <groeneveld@gmail.com>
Co-authored-by: James Barry <james.barry26@mail.dcu.ie>
Commit f1e46fd
Commits on Oct 6, 2020
Commit e39a5f6
Commits on Oct 7, 2020
- Passes a batch of detectron images to the model in the correct format
- Loads a model and runs inference on it
- Some initial work; lots left to do
- Initial test mostly passing, though things are still a bit of a mess
- tests are passing with small fixtures
- remove prints
- More configurable reader
- add image_root and feature extraction to detectron model
- Use general detectron cfg functions
- Adds TensorField
- Fix detectron dependency
- Adds a detectron processor that we can use in dataset readers
- Test more stuff
- PathLike
- Make vilbert pass tests
- PR comments
- call float before log
- add CI
- PathLike
- Adds another NLVR2 reader
- add region feature and grid feature configuration json and attribute to cfg file
- change detectron_utils based on https://github.com/vedanuj/grid-feats-vqa/blob/master/extract_feature.py
- add bottom up and top down roi head into detectron2 based on allennlp/models/detectron.py
- Fix padding in TensorField
- Fix field construction
- Adds ability to read an arbitrary file
- More type annotations
- Remove old reader, add test for new one
- Use the right kind of field
- Run Jiasen's configs as tests
- We don't need this field
- Removes detectron reader
- Remove detectron reader and field
- Unify ArrayField and TensorField
- Making sure that no merge will go cleanly from now on
- Clean up the new output from the detectron processor a bit
- Fix Detectron2 version as v0.2
- saving state
- Code is running, though it is returning zero gradients (but not None)
- initial test passing, still working on albert
- albert works, but bert-base-uncased still gives zero gradients
- Note
- Formatting
- Adds Registrable base classes for image operations
- Adds a real example of an image2image module
- Run the new code (without implementation) in the nlvr2 reader
- Solve some issue involving circular imports
- add new modules for vilbert
- add parameters for detectron image loader.
- push current code on implementing proposal generator.
- push current progress on proposal generator
- Update FasterRCNNProposalGenerator & Merge Detectron2 config
- Loading of weights should now work
- black, flake, mypy
- Run detectron pipeline pieces one at a time (this is unfinished and will not run this way)
- Fix the data format for the backbone
- Handle image sizes separately
- remove drop and mask functionality from reader
- make comment better
- remove proposal_embedder, and finish proposal generator
- working on grid embedder
- added simple test for resnet backbone, which passes
- Got proposal generator test passing
- Change default number of detections per image: 100 => 36
- Fix detectron config hierarchy: test_detectron_per_image
- Make number of detections configurable & Add test
- rename ProposalGenerator to RegionDetector
- try to fix makefile
- another attempt at makefile
- quotes in the pip command...
- added a simple test for the dataset reader, made it pass
- add feature caching to the dataset reader
- another try with the makefile
- a better temporary fix for installing detectron
- writing files before committing is good...
- fix tests
- fix (at least part of) the vilbert tests
- ok, this makefile change should actually work
- add torchvision, try to remove eager import of detectron code
- flake
- cleanup
- more cleanup
- mypy, flake
- add back code I shouldn't have removed
- black
- test and flake fixes
- fix region_detector for multiple images and add feature and coords padding
- fix imports
- restore null grid embedder
- add back (todo) null region detector
- Bring back import changes, to fix circular imports caused by NLVR2 reader
- region detector test passing
- model test finally passing
- update torchvision version
- add vqav2 dataset
- add gpu support for detectron feature extraction
- add lmdbCache to cache feature into lmdb database
- fix typo
- update vqa jsonnet
- fix url adding by cat
- Fixes type annotation
- Fixes borked error message
- New feature cache
- Formatting
- Fix the tensor cache
- Be explicit about our dependencies
- Use the new tensor cache
- Adds a test using the tensor cache
- Run NLVR dataprep on GPU
- Tqdm when finding images
- Fixes padding in array field
- Adjust max_length when truncating in PretrainedTransformerTokenizer
- Fewer print statements
- remove VQA from this branch and copy default vilbert parameters.
- Sanjay's vision features cache script (#4633)
  - Use LMDB cache in NLVR2 dataset reader; fix a few typos
  - Standalone script for caching image features
  - Removing reference to LMDB cache in NLVR2 dataset reader
  - Adding back asterisk in nlvr2 dataset reader
  - Fixing one variable name mistake
  - Decreasing batch size and making a few cuda-related changes
  - Loading images in batches to avoid GPU OOM error
  - Pedantic changes for consistency
  - Run the pre-processing with the models and not the data loading
  - Filter out paths of images already cached
  - Add image extensions other than png
  - Fixes import error
  - Makes the vision features script work alongside other scripts or training runs
- Adds missing imports
- Makes TensorCache into a real MutableMapping
- Formatting
- Changelog
- Fix typecheck
- Makes the NLVR2 reader work with Pete's new code
- Fix type annotation
- Formatting
- Backwards compatibility
- Fix tests
- Fix broken config
- Update grid embedder test
- Fix vilbert_from_huggingface configuration
- Don't run the vilbert_from_huggingface test anymore
- Remove unused test fixtures
- Fix the region detector test
- Fix vilbert-from-huggingface and bring it back
- Fuck the linter
- Run the region detector test on GPU
- Run more stuff on GPU (the CPU test runner doesn't have enough memory)
- Depend on newer version of Detectron
- Reinstall Detectron before running tests
- Just force CUDA to be on, instead of reinstalling Detecton2
- Detectron needs CUDA_HOME to be set during install (at least this thing fails quickly)
- Try a different way of wrangling the detectron installer
- Bring back amp
- Trying to make tests faster, and passing
- use two regions, to make tests pass
- black
- Documentation for TensorCache
- Documentation for the NLVR2 dataset reader
- Rename ArrayField to TensorField
Co-authored-by: Matt Gardner <mattg@allenai.org>
Co-authored-by: jiasenlu <jiasenlu@gatech.edu>
Co-authored-by: Jaemin Cho <heythisischo@gmail.com>
Co-authored-by: jiasenlu <echosenm@gmail.com>
Co-authored-by: sanjays <sanjays@ip-10-0-0-157.us-west-2.compute.internal>
Co-authored-by: sanjays <sanjays@ip-10-1-10-157.us-west-2.compute.internal>
Co-authored-by: Sanjay Subramanian <sanjays@allennlp-server1.corp.ai2>
Co-authored-by: Sanjay Subramanian <sanjays_ssubramanian@hotmail.com>
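"Makes TensorCache into a real MutableMapping" refers to giving the on-disk feature cache a full dict-like interface. A minimal sketch of that idea, assuming a shelve-backed stand-in: AllenNLP's actual TensorCache is backed by LMDB and stores torch tensors, while this illustration stores plain Python values.

```python
import os
import shelve
import tempfile
from collections.abc import MutableMapping

class TensorCache(MutableMapping):
    """Disk-backed cache exposing the full MutableMapping protocol, so callers
    can treat it exactly like a dict (membership tests, len, iteration, ...)."""

    def __init__(self, path: str):
        self._db = shelve.open(path)

    def __getitem__(self, key):
        return self._db[key]

    def __setitem__(self, key, value):
        self._db[key] = value

    def __delitem__(self, key):
        del self._db[key]

    def __iter__(self):
        return iter(self._db)

    def __len__(self):
        return len(self._db)

# Usage: cache image features keyed by image path.
cache = TensorCache(os.path.join(tempfile.mkdtemp(), "features"))
cache["image1.png"] = [0.1, 0.2, 0.3]
```

Inheriting from `collections.abc.MutableMapping` means `in`, `.get()`, `.update()`, and friends all come for free once the five abstract methods are defined.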
Commit c5d264a

Commit 2985236
Commits on Oct 8, 2020
Commit 677a9ce
Commits on Oct 10, 2020
- transformer toolkit: BertEmbeddings
- transformer toolkit: BertSelfAttention
- transformer toolkit: BertSelfOutput
- transformer toolkit: BertAttention
- transformer toolkit: BertIntermediate
- transformer toolkit: BertOutput
- transformer toolkit: BertLayer
- transformer toolkit: BertBiAttention
- Attention scoring functions
- merging output and self output
- utility to replicate layers, further cleanup
- adding sinusoidal positional encoding
- adding activation layer
- adding base class for generic loading of pretrained weights
- further generalizing, adding tests
- updates
- adding bimodal encoder, kwargs in from_pretrained_module
- vilbert using transformer toolkit
- fixing test function
- changing to torch.allclose
- fixing attention score api
- bug fix in bimodal output
- changing to older attention modules
- _construct_default_mapping returns mapping
- adding kwargs to _get_input_arguments, adding examples
- using cached_transformers
- making transformer_encoder more general
- added get_relevant_module, loading by name
- fixing constructor name
- undoing failure after merge
- misc minor changes
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
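The sinusoidal positional encoding added to the toolkit follows the standard Transformer formulation: even dimensions get a sine, odd dimensions a cosine, with wavelengths forming a geometric progression. A minimal pure-Python sketch (the toolkit's version would operate on tensors):

```python
import math

def sinusoidal_position_encoding(max_len: int, dim: int):
    # pe[pos][2i]   = sin(pos / 10000^(2i/dim))
    # pe[pos][2i+1] = cos(pos / 10000^(2i/dim))
    pe = [[0.0] * dim for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, dim, 2):
            angle = pos / (10000 ** (i / dim))
            pe[pos][i] = math.sin(angle)
            if i + 1 < dim:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Each position gets a unique, deterministic vector, so the encoding adds no learned parameters and extrapolates to any position.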
Commit 91631ef
Commits on Oct 20, 2020
Transformer toolkit: BiModalEncoder now has separate `num_attention_heads` for both modalities (#4728)
- separate num_attention_heads for both modalities, default arguments
- adding tests for toolkit examples
- debug statements for failing test
- removing debug statements, reordering
- Let's be more tolerant
- removing commented code
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Commit 4ccfa88
separating TransformerPooler as a new module (#4730)
- separating TransformerPooler as a new module
- adding size check
Commit cc53afe
Commits on Oct 27, 2020
Commit 8da3508

Commit 98edd25

Commit 81892db
Commits on Nov 3, 2020
Commit b48347b
Commits on Nov 4, 2020
Commit 63f61f0
Commits on Nov 5, 2020
Generalizing self attention (#4756)
- generalizing SelfAttention
- typecheck changes
- adding shape information to docstring
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
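The core computation a SelfAttention module generalizes is scaled dot-product attention: scores between all query/key pairs, softmax-normalized per query, used to weight the values. A deliberately tiny pure-Python sketch, not the toolkit's tensor implementation; here the input rows stand in directly for the learned query/key/value projections:

```python
import math

def attention_scores(query, key):
    # scores[i][j] = (q_i . k_j) / sqrt(d) -- scaled dot-product attention
    d = len(query[0])
    return [
        [sum(qc * kc for qc, kc in zip(q, k)) / math.sqrt(d) for k in key]
        for q in query
    ]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x):
    # In a full implementation q, k, v are learned linear projections of x;
    # using x directly keeps the sketch minimal.
    weights = [softmax(row) for row in attention_scores(x, x)]
    return [
        [sum(w * v[c] for w, v in zip(row, x)) for c in range(len(x[0]))]
        for row in weights
    ]
```

Because the softmax weights in each row sum to one, every output row is a convex combination of the input rows.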
Commit 12c8d1b
Commits on Nov 9, 2020
Commit 7c47c3a
Commits on Nov 11, 2020
Multitask data loading and scheduling (#4625)
- Some initial work, still a bunch left to do
- Adds a utility function that can shuffle iterables
- remove shuffle
- Getting close; saving state before fixing lint and adding tests
- mypy and flake
- put in some initial schedulers and samplers; just need to write tests
- added some tests
- changelog
- add more-itertools to setup.py
- finish docstring
- some PR comments addressed
- mypy
- use homogeneous scheduler by default, not the non-homogeneous one
- add option to not shuffle
- normalize dataset proportions
- Update allennlp/data/data_loaders/multitask_data_loader.py
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
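A simple multitask schedule interleaves batches from each task's loader in turn (the "homogeneous" case: each batch holds instances from a single task). The classic itertools `roundrobin` recipe sketches the idea; this is an illustration, not AllenNLP's actual scheduler, and the strings below stand in for per-task batches:

```python
from itertools import cycle, islice

def roundrobin(*iterables):
    # Interleave items from each iterable: A, B, C, A, B, C, ...
    # dropping an iterable from the rotation once it is exhausted.
    pending = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while pending:
        try:
            for next_fn in nexts:
                yield next_fn()
        except StopIteration:
            pending -= 1
            nexts = cycle(islice(nexts, pending))

# Three "tasks" with different numbers of batches:
schedule = list(roundrobin("ABC", "D", "EF"))
```

Normalizing dataset proportions, as the commit list mentions, would correspond to drawing from the task iterators with weights instead of strict rotation.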
Commit ffafaf6

Commit 602399c
Commits on Nov 13, 2020
Commit 5d22ce6
Commits on Nov 17, 2020
Commit c780315

Commit 98018cc
improve independence of vision components (#4793)
- improve independence of vision components
- fix install
- fix failing test
- haha, actually fix
- include torchvision exception too
- fix torchvision install
Commit 22d4633
Merge remote-tracking branch 'origin/master' into vision

# Conflicts:
#	.github/workflows/ci.yml
Commit 7591465

Commit 167bcaa
Commits on Nov 18, 2020
-
Configuration menu - View commit details
-
Copy full SHA for 6bf1924 - Browse repository at this point
Copy the full SHA 6bf1924View commit details -
Configuration menu - View commit details
-
Copy full SHA for db2d1d3 - Browse repository at this point
Copy the full SHA db2d1d3View commit details
Commits on Nov 20, 2020
Merge remote-tracking branch 'origin/master' into vision

# Conflicts:
#	.github/workflows/ci.yml
Commit c787230
Commits on Nov 23, 2020
- albert works, but bert-base-uncased still gives zero gradients
- Note
- Formatting
- Adds Registrable base classes for image operations
- Adds a real example of an image2image module
- Run the new code (without implementation) in the nlvr2 reader
- Solve some issue involving circular imports
- add new modules for vilbert
- add parameters for detectron image loader.
- push current code on implementing proposal generator.
- push current progress on proposal generator
- Update FasterRCNNProposalGenerator & Merge Detectron2 config
- Loading of weights should now work
- black, flake, mypy
- Run detectron pipeline pieces one at a time (this is unfinished and will not run this way)
- Fix the data format for the backbone
- Handle image sizes separately
- remove drop and mask functionality from reader
- make comment better
- remove proposal_embedder, and finish proposal generator
- working on grid embedder
- added simple test for resnet backbone, which passes
- Got proposal generator test passing
- Change default number of detections per image: 100 => 36
- Fix detectron config hierarchy: test_detectron_per_image
- Make number of detections configurable & Add test
- rename ProposalGenerator to RegionDetector
- try to fix makefile
- another attempt at makefile
- quotes in the pip command...
- added a simple test for the dataset reader, made it pass
- add feature caching to the dataset reader
- another try with the makefile
- a better temporary fix for installing detectron
- writing files before committing is good...
- fix tests
- fix (at least part of) the vilbert tests
- ok, this makefile change should actually work
- add torchvision, try to remove eager import of detectron code
- flake
- cleanup
- more cleanup
- mypy, flake
- add back code I shouldn't have removed
- black
- test and flake fixes
- fix region_detector for multiple images and add feature and coords padding
- fix imports
- restore null grid embedder
- add back (todo) null region detector
- Bring back import changes, to fix circular imports caused by NLVR2 reader
- region detector test passing
- model test finally passing
- update torchvision version
- add vqav2 dataset
- add gpu support for detectron feature extraction
- add lmdbCache to cache feature into lmdb database
- fix typo
- update vqa jsonnet
- fix url adding by cat
- Fixes type annotation
- Fixes borked error message
- New feature cache
- Formatting
- Fix the tensor cache
- Be explicit about our dependencies
- Use the new tensor cache
- Adds a test using the tensor cache
- Run NLVR dataprep on GPU
- Tqdm when finding images
- Fixes padding in array field
- Adjust max_length when truncating in PretrainedTransformerTokenizer
- Fewer print statements
- remove VQA from this branch and copy default vilbert parameters.
- add VQAv2 dataset
- Added dataset reader and model tests, which are now passing
- Sanjay's vision features cache script (#4633)
  - Use LMDB cache in NLVR2 dataset reader; fix a few typos
  - Standalone script for caching image features
  - Removing reference to LMDB cache in NLVR2 dataset reader
  - Adding back asterisk in nlvr2 dataset reader
  - Fixing one variable name mistake
  - Decreasing batch size and making a few cuda-related changes
  - Loading images in batches to avoid GPU OOM error
  - Pedantic changes for consistency
  - Run the pre-processing with the models and not the data loading
  - Filter out paths of images already cached
  - Add image extensions other than png
  - Fixes import error
  - Makes the vision features script work alongside other scripts or training runs
- Adds missing imports
- Makes TensorCache into a real MutableMapping
- Formatting
- Changelog
- Fix typecheck
- Makes the NLVR2 reader work with Pete's new code
- Fix type annotation
- Formatting
- Backwards compatibility
- Restore NLVR to former glory
- Types and multi-process reading for VQAv2
- Formatting
- Fix tests
- Fix broken config
- Update grid embedder test
- Fix vilbert_from_huggingface configuration
- Don't run the vilbert_from_huggingface test anymore
- Remove unused test fixtures
- Fix the region detector test
- Fix vilbert-from-huggingface and bring it back
- Fuck the linter
- Fix for VQA test
- Why was this metric disabled?
- Black and flake
- Re-add VQA reader
- Image featurizers now need to be called with sizes
- Run the region detector test on GPU
- Run more stuff on GPU (the CPU test runner doesn't have enough memory)
- Depend on newer version of Detectron
- Reinstall Detectron before running tests
- Just force CUDA to be on, instead of reinstalling Detecton2
- Fixes VQA2 DatasetReader
- Fix documentation
- Detectron needs CUDA_HOME to be set during install (at least this thing fails quickly)
- Try a different way of wrangling the detectron installer
- Try a different way of wrangling the detectron installer
- Bring back amp
- Refactored VQA reader
- More training paths
- Remove debug code
- Don't check in debug code
- Auto-detect GPU to use
- Apply indexers later
- Fix typo
- Register the model
- Fields live on CPU. Only batches get GPUs.
- black
- black, flake
- mypy
- more flake
- More realistic training config
- Adds a basic Predictor for VQAv2
- Make vilbert output human-readable
- Forgot to enumerate
- Use the right namespace
- Trying to make tests faster, and passing
- add image prefix when loading coco image
- fix vqav2 dataset reader and config file
- use two regions, to make tests pass
- black
- Output probabilities in addition to logits
- Make it possible to turn off the cache
- Turn off the cache in the predictor
- Fix the VQA predictor
- change the experiment to the default vilbert hyperparams.
- add default experiment_from_huggingface.json
- fix typos in vqa reader
- Proper probabilities
- Formatting
- Remove unused variable
- Make mypy happy
- Fixed loss function, metric, and got tests to pass
- Updates the big training config
- Put real settings into the vilbert_vqa config
- Strings are lists in Python
- Make mypy happy
- Formatting
- Unsatisfying mypy
- Config changes to make this run
- Fix dimensionality of embeddings
- clean the code and add the image_num_heads and combine_num_heads
- fix answer vocab and add save and load from pre-extracted vocab
- fix loss and update save_answer_vocab script
- Typo
- Fixed fusion method
- Tweaking the VQA config some more
- Moved the from_huggingface config
- 20 epochs
- Set up the learning rate properly
- Simplify
- Hardcoded answer vocab
- Don't be lazy
- Steps per epoch cannot be None
- Let's chase the right score
- Fixing some parameter names
- Fields are stored on CPUs
- Bigger batch size, easier distributed training
- Don't run the debug code by default
- VQA with the Transformer Toolkit (#4729), including Transformer toolkit (#4577), Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
  - transformer toolkit: BertEmbeddings
  - transformer toolkit: BertSelfAttention
  - transformer toolkit: BertSelfOutput
  - transformer toolkit: BertAttention
  - transformer toolkit: BertIntermediate
  - transformer toolkit: BertOutput
  - transformer toolkit: BertLayer
  - transformer toolkit: BertBiAttention
  - Attention scoring functions
  - merging output and self output
  - utility to replicate layers, further cleanup
  - adding sinusoidal positional encoding
  - adding activation layer
  - adding base class for generic loading of pretrained weights
  - further generalizing, adding tests
  - updates
  - adding bimodal encoder, kwargs in from_pretrained_module
  - vilbert using transformer toolkit
  - fixing test function
  - changing to torch.allclose
  - fixing attention score api
  - bug fix in bimodal output
  - changing to older attention modules
  - _construct_default_mapping returns mapping
  - adding kwargs to _get_input_arguments, adding examples
  - using cached_transformers
  - making transformer_encoder more general
  - added get_relevant_module, loading by name
  - fixing constructor name
  - undoing failure after merge
  - misc minor changes
  - separate num_attention_heads for both modalities, default arguments
  - adding tests for toolkit examples
  - debug statements for failing test
  - removing debug statements, reordering
  - Typo
  - Some compatibility with the transformer toolkit
  - Reorganize the image inputs
  - More transformer toolkit compatibility
  - Debug settings
  - Let's be more tolerant
  - Fix how VilBERT runs (Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>)
- Make the region detector and region embedder lazy
- Fix references to the model
- Make various automated tests pass
- Formatting
- More logging
- One more logging statement
- Read answer vocab from vocab file instead of determining it automatically
- Don't keep the files open so long
- Use most of the validation set for training as well
- Get ready to be lazy
- Upgrade paths
- Be lazy
- Keep unanswerable questions only during test time
- Fix the from_huggingface config
- Fixes the VQA score
- VQA specific metric
- Fixes some tests
- Tests pass!
- Formatting
- Use the correct directory
- Use the region detector that's meant for testing
- Read the test split properly
- Be a little more verbose while discovering images
- Modernize Vilbert VQA
- Update NLVR, but it still doesn't run
- Formatting
- Remove NLVR
- Fix the last test
- Formatting
- Conditionally export the VilbertVqaPredictor
- ModuleNotFoundError is a type of ImportError
- Fix test-install
- Try the broken test with a fixed seed
- Try a bunch of seeds
- Smaller model to get bigger magnitudes
- Now that the test works, we don't need to specify the seeds anymore
Co-authored-by: Matt Gardner <mattg@allenai.org>
Co-authored-by: jiasenlu <jiasenlu@gatech.edu>
Co-authored-by: Jaemin Cho <heythisischo@gmail.com>
Co-authored-by: jiasenlu <echosenm@gmail.com>
Co-authored-by: sanjays <sanjays@ip-10-0-0-157.us-west-2.compute.internal>
Co-authored-by: sanjays <sanjays@ip-10-1-10-157.us-west-2.compute.internal>
Co-authored-by: Sanjay Subramanian <sanjays@allennlp-server1.corp.ai2>
Co-authored-by: Sanjay Subramanian <sanjays_ssubramanian@hotmail.com>
Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>
Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>
Copy full SHA for b659e66 - Browse repository at this point
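The "utility to replicate layers" noted in the commit message above is, in most transformer codebases, a deep-copy loop that produces independent (non-weight-tied) copies of a layer. The sketch below is a hypothetical stand-alone version; the toolkit's actual helper may differ in name and signature:

```python
import copy

def replicate_layers(layer, num_copies):
    """Return independent deep copies of a layer, so each copy trains its
    own parameters (sharing the same object would tie the weights)."""
    return [copy.deepcopy(layer) for _ in range(num_copies)]

# A plain object stands in for an nn.Module here:
class DummyLayer:
    def __init__(self):
        self.weight = [1.0, 2.0]

stack = replicate_layers(DummyLayer(), 3)
stack[0].weight[0] = 99.0  # mutating one copy leaves the others untouched
```

In a real transformer stack the copies would then be wrapped in an `nn.ModuleList` so their parameters are registered.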
Commits on Nov 26, 2020
-
SNLI_VE dataset reader (#4799)
* adding VE reader * removing jsonlines * blackify * initial VE model * adding VisionReader for common vision components * fix test file * fix doc * temporarily removing VE model * bug fix * cleanup * removing unnecessary check * simplify
Copy full SHA for 3be6c97 - Browse repository at this point
Commits on Dec 1, 2020
-
Copy full SHA for 01f3a2d - Browse repository at this point
-
Visual entailment model code (#4822)
* VE model code * adding VE model * misc minor updates * update changelog
Copy full SHA for 52e9dd9 - Browse repository at this point
Commits on Dec 2, 2020
-
* Adds reader for GQA dataset. Will download questions from https://cs.stanford.edu/people/dorarad/gqa/download.html. * Cleaned up GQA reader tests
Copy full SHA for e729e9a - Browse repository at this point
Commits on Dec 3, 2020
-
* Make the VQA reader work for the other datasets * Also find pngs * Really support pngs * Remove debug code * More logging * Unexpected formatting * Respect the device * This is how you replace things in named tuples. * Remove unused import * This is how you override a method properly. * This is how you set parameters in detectron. * Also set the device for the region detector * Training configs for all three datasets contained in VQA * Bigger batches * Bigger batches for image processing * Fix vilbert-from-huggingface config * Make the config switch modes for constructing vocab * More vocab, more docs, better way of deriving vocab * Modernize the from_huggingface config * More updates to the from_huggingface config * Better hyperparameters stolen from another project * Fix for inverted parameter * Formatting * Throw a meaningful error message when we don't have images * Add a warning that includes instructions for how to fix things * Remove unused script * Merge issue
Copy full SHA for 7887119 - Browse repository at this point
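The note about replacing things in named tuples in the commit message above almost certainly refers to the standard-library `_replace` method: named tuples are immutable, so fields cannot be assigned in place. A minimal illustration:

```python
from collections import namedtuple

# Named tuples are immutable; _replace returns a NEW tuple with
# the chosen fields swapped out, leaving the original untouched.
Config = namedtuple("Config", ["device", "batch_size"])

cpu_config = Config(device="cpu", batch_size=32)
gpu_config = cpu_config._replace(device="cuda:0")

print(gpu_config.device)  # cuda:0
print(cpu_config.device)  # cpu (the original is unchanged)
```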
Commits on Dec 5, 2020
-
Copy full SHA for 52fdd75 - Browse repository at this point
Commits on Dec 9, 2020
-
Generalizing transformer layers (#4776)
* adding HF tests, docstrings for AttentionLayer, TransformerLayer, TransformerBlock * temp change to check if tests pass * undoing temp change * ci update * more ci updates * changing test run * update makefile * temp change * isolating failing case * further debugging * fail check * reverting to older CI * test with reduced batch size * cleanup * more cleanup * oops, fix
Copy full SHA for 50e50df - Browse repository at this point
-
gqa reader fixes during vilbert training (#4851)
* Refactored shared code * typecheck fix * rebase * Refactored shared code * typecheck fix * rebase * Cleaned up GQA reader tests * Modify instance format for vilbert-vqa model * update for vision branch bump Co-authored-by: Jackson Stokes <jacksons@Jacksons-MacBook-Pro.local> Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Copy full SHA for ddbc740 - Browse repository at this point
Commits on Dec 13, 2020
-
Toolkit: Adding documentation and small changes for BiModalAttention (#4859)
* adding documentation for bimodal attn, small fixes * changing the way mask is applied * using large value rather than inf * Update comment Co-authored-by: Dirk Groeneveld <groeneveld@gmail.com> * moving apply_mask to util Co-authored-by: Dirk Groeneveld <groeneveld@gmail.com>
Copy full SHA for c8521d8 - Browse repository at this point
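The switch from infinity to a "large value" for masking, mentioned in the commit above, follows the usual additive-mask pattern: masked positions get a score so low that softmax sends their weight to effectively zero, while a finite value avoids the NaNs that true `-inf` can produce (e.g. when every position in a row is masked). A pure-Python sketch of the idea; AllenNLP's actual `apply_mask` util may differ in details:

```python
import math

MASK_VALUE = -1e4  # large but finite; -inf risks inf - inf = nan edge cases

def masked_softmax(scores, mask):
    """scores: list of floats; mask: list of bools (True = keep position)."""
    masked = [s if keep else s + MASK_VALUE for s, keep in zip(scores, mask)]
    m = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in masked]
    total = sum(exps)
    return [e / total for e in exps]

weights = masked_softmax([2.0, 1.0, 0.5], [True, True, False])
# the masked position receives (almost exactly) zero attention weight
```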
Commits on Dec 15, 2020
-
Merge branch 'master' into vision
# Conflicts: # allennlp/data/dataset_readers/sharded_dataset_reader.py
Copy full SHA for 457e56e - Browse repository at this point
-
Copy full SHA for d16a5c7 - Browse repository at this point
-
* New import paths * Duplicate entries * Dataset readers can't be lazy anymore
Copy full SHA for 87e3536 - Browse repository at this point
Commits on Dec 16, 2020
-
Copy full SHA for 147fefe - Browse repository at this point
-
Copy full SHA for 3da8e62 - Browse repository at this point
Commits on Dec 17, 2020
-
Switch to torchvision for vision components 👀, simplify and improve MultiProcessDataLoader (#4821)
* implement TorchImageLoader * implement ResnetBackbone * add resize + normalize to image loader * finalize FasterRcnnRegionDetector * pin torchvision * fix VQAv2Reader * add box mask field * dataset reader fixes * fix model tests * doc fixes * add threshold parameters to FasterRcnnRegionDetector * address @dirkgr comments * mask fixes * shape comments * add some more comments * cache answers_by_question_id * implement LocalCacheResource * fix * add read-only option to cache * fix * simplify data loader * make featurizer and detector optional in readers * Cache in memory * back pressure is important I guess * merge * Updated configs * Fixes the way we apply masks * Use more of Jiasen's real settings * Upgrade the from_huggingface config * Switch back to the images on corpnet * Fix random seeds * Bigger model needs smaller batch size * Adds ability to selectively ignore one input * address some comments * format + lint * fixes * Bring back bert-base configs * fix error handling * fix test * fix typo * use lock when possible Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Copy full SHA for c4e3f77 - Browse repository at this point
-
Copy full SHA for 85d38ff - Browse repository at this point
Commits on Dec 19, 2020
-
Copy full SHA for 1c72a30 - Browse repository at this point
Commits on Dec 21, 2020
-
Only cache, no featurizing (#4870)
* implement TorchImageLoader * implement ResnetBackbone * add resize + normalize to image loader * finalize FasterRcnnRegionDetector * pin torchvision * fix VQAv2Reader * add box mask field * dataset reader fixes * fix model tests * doc fixes * add threshold parameters to FasterRcnnRegionDetector * address @dirkgr comments * mask fixes * shape comments * add some more comments * cache answers_by_question_id * implement LocalCacheResource * fix * add read-only option to cache * fix * simplify data loader * make featurizer and detector optional in readers * Cache in memory * back pressure is important I guess * merge * Updated configs * Fixes the way we apply masks * Use more of Jiasen's real settings * Upgrade the from_huggingface config * Switch back to the images on corpnet * Fix random seeds * Bigger model needs smaller batch size * Adds ability to selectively ignore one input * address some comments * format + lint * fixes * Bring back bert-base configs * fix error handling * fix test * Adds the ability to read from a feature cache, but not run any featurization * Update tests * Let's stick with "feature_cache" As long as we're consistent ... * More epochs, more random * Use the new parameters * Fix initialization * Make tests work, add some documentation * Remove the read_from_cache parameter * Cleanup of training configs * Typecheck * Building docs right * Better settings for VQA * Leave the image_feature_dim at 1024 Co-authored-by: epwalsh <epwalsh10@gmail.com>
Copy full SHA for 7a7c7ea - Browse repository at this point
-
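The "back pressure is important I guess" bullet above hints at the core design of the simplified data loader: workers push batches into a bounded queue, so a fast producer blocks instead of filling memory with featurized batches the trainer has not consumed yet. A generic sketch of the pattern (not the actual MultiProcessDataLoader code, which uses multiprocessing rather than threads):

```python
import queue
import threading

# A small maxsize is the back-pressure mechanism: put() blocks once
# the queue is full, throttling the producer to the consumer's pace.
batches = queue.Queue(maxsize=2)
SENTINEL = None  # signals "no more batches"

def producer():
    for i in range(10):
        batches.put(i)      # blocks whenever 2 batches are already waiting
    batches.put(SENTINEL)

consumed = []
threading.Thread(target=producer).start()
while (item := batches.get()) is not SENTINEL:
    consumed.append(item)
```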
Make images easier to find for Visual Entailment (#4878)
* implement TorchImageLoader * implement ResnetBackbone * add resize + normalize to image loader * finalize FasterRcnnRegionDetector * pin torchvision * fix VQAv2Reader * add box mask field * dataset reader fixes * fix model tests * doc fixes * add threshold parameters to FasterRcnnRegionDetector * address @dirkgr comments * mask fixes * shape comments * add some more comments * cache answers_by_question_id * implement LocalCacheResource * fix * add read-only option to cache * fix * simplify data loader * make featurizer and detector optional in readers * Cache in memory * back pressure is important I guess * merge * Updated configs * Fixes the way we apply masks * Use more of Jiasen's real settings * Upgrade the from_huggingface config * Switch back to the images on corpnet * Fix random seeds * Bigger model needs smaller batch size * Adds ability to selectively ignore one input * address some comments * format + lint * fixes * Bring back bert-base configs * fix error handling * fix test * Adds the ability to read from a feature cache, but not run any featurization * Update tests * Let's stick with "feature_cache" As long as we're consistent ... * More epochs, more random * Use the new parameters * Fix initialization * Make tests work, add some documentation * Remove the read_from_cache parameter * Cleanup of training configs * Typecheck * Building docs right * Better settings for VQA * Open cached paths when reading json lines * By default, autodetect GPUs when training * Switch to torchvision * Download training data from the web * This needs to stay at 1024 until we get the new featurization model * Have a more descriptive error message when images are missing * Update vilbert_ve_from_huggingface.jsonnet Co-authored-by: epwalsh <epwalsh10@gmail.com> Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>
Copy full SHA for f62b819 - Browse repository at this point
Commits on Dec 23, 2020
-
Copy full SHA for abacc01 - Browse repository at this point
-
Copy full SHA for d1cc146 - Browse repository at this point
-
Copy full SHA for fbab0bd - Browse repository at this point
Commits on Jan 4, 2021
-
* Refactored shared code * typecheck fix * rebase * Refactored shared code * typecheck fix * rebase * Cleaned up GQA reader tests * Modify instance format for vilbert-vqa model * update for vision branch bump * Adding training config for GQA * Unnamed variable * Various GQA fixes * Temporary extra configs needed to make vocab * Remove unused file * Optimize VQA score instead of F-Score * Use our newly created vocab * Remove temporary configs * Don't fail when we don't need to create a directory * Make a config that works on the servers as well * Update comment * A new command to count instances * Temporary config to count instances * Undo temporary changes * Put in the correct number of steps per epoch * Remove this number from the config because it's almost certainly wrong * Don't put Fields in Tuples * Formatting * More informative error message when batches are heterogeneous * Formatting * Not my type * Generate the fields properly when answers are missing * Properly discard instances with missing answers * Changelog * Update number of steps per epoch * Adds a config for balanced GQA * fix file_utils extract with directory * fix Batch._check_types * Fill in URL Co-authored-by: Jackson Stokes <jacksons@Jacksons-MacBook-Pro.local> Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com> Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>
Copy full SHA for 15d32da - Browse repository at this point
Commits on Jan 9, 2021
-
Toolkit: Cleaning up TransformerEmbeddings (#4900)
* fixing issue of non-deterministic dropout * updating TransformerEmbeddings * ImageFeatureEmbeddings is now a subclass of Embeddings * allowing for no token type embeddings * fixing kwargs for loading pretrained module
Copy full SHA for aedd3be - Browse repository at this point
Commits on Jan 11, 2021
-
Data loading cuda device (#4879)
* add test with tensor fields * improve nn.util.move_to_device * ensure start_method is 'spawn' when using lazy and mem pin * add 'non_blocking' arg to 'move_to_device' * fix fake test tensor * fix sampler test * lint * fix 'move_to_device' * fix condition check * add device to data loader * clean up doc string * rename 'device' arg to 'cuda_device' * pinning is very slow, revert * DataLoaders load to CUDA device * fix evaluate test
Copy full SHA for df36636 - Browse repository at this point
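The `move_to_device` improvements in the commit above concern recursively walking a batch's nested containers and sending each tensor to the target device (optionally with `non_blocking=True` when using pinned memory). A framework-free sketch of the traversal, with a stand-in for the per-tensor move so it runs without PyTorch:

```python
def move_to_device(obj, move_leaf):
    """Recursively apply `move_leaf` (in the real code, something like
    lambda t: t.to(device, non_blocking=True)) to every leaf of nested
    dicts, lists, and tuples. Named tuples would need special handling
    via _make, which is omitted in this sketch."""
    if isinstance(obj, dict):
        return {k: move_to_device(v, move_leaf) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to_device(v, move_leaf) for v in obj)
    return move_leaf(obj)

# Stand-in leaf op: tag each value instead of calling tensor.to(...)
batch = {"tokens": [1, 2], "meta": {"ids": (3, 4)}}
moved = move_to_device(batch, lambda x: ("cuda:0", x))
```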
Commits on Jan 12, 2021
-
Copy full SHA for 5e3757b - Browse repository at this point
-
Copy full SHA for 31ec6a5 - Browse repository at this point
-
remove PyTorchDataLoader, add SimpleDataLoader for testing (#4907)
* remove PyTorchDataLoader, add SimpleDataLoader for testing * fix test * comments
Copy full SHA for 2f54570 - Browse repository at this point
improve data loading docs (#4909)
* improve data loading docs * document best practices, add 'get_batch_size' method to samplers * try fix annoying unrelated test * revert that * clarify handling of 'max_instances_in_memory'
Copy full SHA for effcc4e - Browse repository at this point
-
Copy full SHA for 03c7ffb - Browse repository at this point
-
Copy full SHA for c9585af - Browse repository at this point
Commits on Jan 13, 2021
-
Copy full SHA for 94dd9cc - Browse repository at this point
Commits on Jan 14, 2021
-
improve worker error handling in MultiProcessDataLoader (#4912)
* improve worker error handling * rename test file
Copy full SHA for d7c9eab - Browse repository at this point
Commits on Jan 15, 2021
-
* adding cross_attention, renaming block -> stack * stack can be initialized with layer too Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>
Copy full SHA for 5229da8 - Browse repository at this point
Commits on Jan 19, 2021
-
* resolve _read type * fix sharded reader * fix data loader arg
Copy full SHA for 0f00d4d - Browse repository at this point
-
* Make the VQA reader work for the other datasets * Also find pngs * Really support pngs * Remove debug code * More logging * Unexpected formatting * Respect the device * This is how you replace things in named tuples. * Remove unused import * This is how you override a method properly. * This is how you set parameters in detectron. * Also set the device for the region detector * Training configs for all three datasets contained in VQA * Bigger batches * Bigger batches for image processing * Fix vilbert-from-huggingface config * Make the config switch modes for constructing vocab * More vocab, more docs, better way of deriving vocab * Modernize the from_huggingface config * More updates to the from_huggingface config * Better hyperparameters stolen from another project * Fix for inverted parameter * Formatting * Throw a meaningful error message when we don't have images * Add a warning that includes instructions for how to fix things * Remove unused script * Merge issue * Adds named splits to the SNLI-VE reader * Make the multitask data loader discoverable * Formatting * More flexible inputs to the dataset readers * Prototype config for the multitask training job * json_lines_from_file() already calls cached_path() * Visual entailment should track accuracy * Switching to torch * Fixing VE image paths * Formatting * Experimentally use threaded_generator to read instances from readers simultaneously * Vilbert backbone * Fixed paths * Formatting * Adds heads * Revert "Experimentally use threaded_generator to read instances from readers simultaneously" This reverts commit a633e67. * Multitask trains now!
* Remove useless parameter from GQA reader * Updated multitask config * Schedulers produce batches, not instances * Track multiple metrics * Make mypy happy * Formatting * Keep better track of which heads have been called * Fix the merge * We have more than strings for input * Remove unused imports * -1 is CPU * Go back to tracking instances per epoch so that the samplers can work * Better error message * A useful sampler to have * We haven't indexed until we've indexed * Makes tests pass * Formatting * Fine-tuning the metric tracker * Update model configs for my changes * Fixing model configs for Akshita's changes * Implement VisionTextModel in terms of VilbertBackbone * Formatting * Fix stale comment * Use the server paths by default, not Dirk's desktop * Fix tests * Formatting again * Removed data loader parameters that don't exist anymore * Clarified comment Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>
Copy full SHA for 5497394 - Browse repository at this point
Commits on Jan 20, 2021
-
Copy full SHA for 412896b - Browse repository at this point
Commits on Jan 21, 2021
-
Moves vision models to allennlp-models (#4918)
* Moves vision models to allennlp-models * Also move test fixtures * Don't return so many instances if we're cutting them out later anyways * We actually need this image * Formatting * Fixing more paths
Copy full SHA for 9a4a424 - Browse repository at this point
-
Copy full SHA for 0a12299 - Browse repository at this point
Commits on Jan 22, 2021
-
2
Copy full SHA for 4e43b94 - Browse repository at this point
-
Copy full SHA for 1f79840 - Browse repository at this point
-
Copy full SHA for e413c78 - Browse repository at this point
-
* Better Callbacks * Reformatting * Fixes * Tests for updated TrainerCallback * Formatting and Type-Checking fixes
Copy full SHA for 93054bc - Browse repository at this point
Commits on Jan 25, 2021
-
Consistent metric tracker (#4928)
* Makes the metric tracker more consistent * Turns out we need best_epoch_metrics after all. * Backwards compatibility * Formatting
Copy full SHA for f91502c - Browse repository at this point
Commits on Jan 26, 2021
-
Merge remote-tracking branch 'origin/main' into vision
# Conflicts: # mkdocs-skeleton.yml
Copy full SHA for aa5dae1 - Browse repository at this point
-
Copy full SHA for 409b5b9 - Browse repository at this point
-
Copy full SHA for f0a9e38 - Browse repository at this point
-
Copy full SHA for b019e77 - Browse repository at this point