#6524 add doc for dataset storage size
sekmiller authored Feb 4, 2020
1 parent fef91a8 commit 2d18b81
Showing 1 changed file with 22 additions and 0 deletions.
22 changes: 22 additions & 0 deletions doc/sphinx-guides/source/api/native-api.rst
@@ -898,6 +898,28 @@ In practice, you only need one the ``dataset_id`` or the ``persistentId``. The e
print '-' * 40
print r.json()
print r.status_code

Report the data (file) size of a Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Shows the combined size in bytes of all the files uploaded into the dataset ``id``.

.. code-block:: bash

  export API_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  export SERVER_URL=https://demo.dataverse.org
  export ID=xxxxxx

  curl -H X-Dataverse-key:$API_TOKEN $SERVER_URL/api/datasets/$ID/storagesize

The fully expanded example above (without environment variables) looks like this:

.. code-block:: bash
  curl -H X-Dataverse-key:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx https://demo.dataverse.org/api/datasets/xxxxxx/storagesize

The size of published and unpublished files will be summed for the specified dataset.

By default, only the archival files are counted - i.e., the files uploaded by users (plus the tab-delimited versions generated for tabular data files on ingest). If the optional argument ``includeCached=true`` is specified, the API will also add the sizes of all the extra files generated and cached by Dataverse - the resized thumbnail versions for image files, the metadata exports for published datasets, etc. Because this deals with unpublished files, the token supplied must have permission to view unpublished drafts.
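
The optional ``includeCached`` argument described above can be supplied with the request. A minimal sketch, reusing the environment variables from the first example (passing the argument as a query parameter is an assumption, not confirmed by this commit):

.. code-block:: bash

  # Also count cached auxiliary files (thumbnails, metadata exports, etc.) in the reported total
  curl -H X-Dataverse-key:$API_TOKEN "$SERVER_URL/api/datasets/$ID/storagesize?includeCached=true"
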
Submit a Dataset for Review
~~~~~~~~~~~~~~~~~~~~~~~~~~~