File objects represent individual files in Box. They can be used to download a file's contents, upload new versions, and perform other common file operations (move, copy, delete, etc.).
- Get a File's Information
- Update a File's Information
- Download a File
- Get Download URL
- Upload a File
- Chunked Upload
- Move a File
- Copy a File
- Rename a File
- Delete a File
- Get Previous Versions of a File
- Upload a New Version of a File
- Promote a Previous Version of a File
- Delete a Previous Version of a File
- Lock a File
- Unlock a File
- Create a Shared Link Download URL
- Find a File for a Shared Link
- Create or update a Shared Link
- Get a Shared Link
- Remove a Shared Link
- Get an Embed Link
- Get File Representations
- Get Thumbnail (Deprecated)
- Get Thumbnail
- Set Metadata
- Get Metadata
- Remove Metadata
- Get All Metadata
- Set a Classification
- Retrieve a Classification
- Remove a Classification
- Set retention policy expiration date
Calling file.get(*, fields=None, etag=None, **kwargs)
on a File
retrieves information
about the file from the API. This method returns a new File
object populated with the information retrieved.
You can specify an Iterable
of fields to retrieve from the API in the fields
parameter.
file_id = '11111'
file_info = client.file(file_id).get()
print(f'File "{file_info.name}" has a size of {file_info.size} bytes')
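For example, to request only specific fields, pass an Iterable to the fields parameter. This is a minimal sketch assuming an authenticated client as in the other examples; the field names are illustrative:

```python
file_id = '11111'
# Request only the name and size fields to reduce response size
file_info = client.file(file_id).get(fields=['name', 'size'])
print(f'File "{file_info.name}" has a size of {file_info.size} bytes')
```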
To update fields on the File
object, call file.update_info(data=data_to_update) with
a dict
of fields to update. This method returns the updated File
object, leaving the original it
was called on unmodified.
file_id = '11111'
updated_file = client.file(file_id).update_info(data={'description': 'My file'})
A file can be downloaded in two ways: by returning the entire contents of the file as bytes
or by providing an output
stream to which the contents of the file will be written. For both methods, you can optionally download a specific
version of the file by passing the desired FileVersion
in the file_version
parameter. You may
also wish to download only a certain chunk of the file by passing a tuple of byte offsets via the byte_range
parameter — the lower and upper bounds you wish to download.
To get the entire contents of the file as bytes
, call file.content(file_version=None, byte_range=None)
.
file_id = '11111'
file_content = client.file(file_id).content()
For users with premium accounts, previous versions of a file can be downloaded.
file_id = '11111'
file_version = client.file_version('12345')
version_content = client.file(file_id).content(file_version=file_version)
Additionally, only part of the file can be downloaded by specifying a byte range.
file_id = '11111'
beginning_of_file_content = client.file(file_id).content(byte_range=(0,99))
To download the file contents to an output stream, call
file.download_to(writeable_stream, file_version=None, byte_range=None)
with the stream.
file_id = '11111'
# Write the Box file contents to disk
with open('file.pdf', 'wb') as output_file:
    client.file(file_id).download_to(output_file)
To get a download URL suitable for passing to a web browser or other application, which will allow someone to download
the file, call file.get_download_url(file_version=None)
. This method returns a unicode
string
containing the file's download URL. You can optionally pass a FileVersion
via the
file_version
parameter to get a download URL for a specific version of the file.
file_id = '11111'
download_url = client.file(file_id).get_download_url()
print(f'The file\'s download URL is: {download_url}')
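A sketch of fetching a download URL for a specific version, assuming an authenticated client and an example version ID:

```python
file_id = '11111'
# Get a download URL for a specific previous version of the file
file_version = client.file_version('12345')
version_download_url = client.file(file_id).get_download_url(file_version=file_version)
print(f'Download URL for version 12345 is: {version_download_url}')
```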
Files are uploaded to a folder in one of two ways: by providing a path to a file on disk, or via a readable stream containing the file contents.
To upload a file from a path on disk, call the
folder.upload(file_path, file_name=None, file_description=None, preflight_check=False, preflight_expected_size=0)
method
on the Folder
you want to upload the file into. By default, the file uploaded to Box will have the
same file name as the one on disk; you can override this by passing a different name in the file_name
parameter. You can, optionally, also choose to set a file description upon upload by using the file_description
parameter.
This method returns a File
object representing the newly-uploaded file.
folder_id = '22222'
new_file = client.folder(folder_id).upload('/home/me/document.pdf')
print(f'File "{new_file.name}" uploaded to Box with file ID {new_file.id}')
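A minimal sketch of overriding the file name and setting a description on upload; the name and description values are illustrative, and an authenticated client is assumed:

```python
folder_id = '22222'
new_file = client.folder(folder_id).upload(
    '/home/me/document.pdf',
    file_name='renamed-document.pdf',
    file_description='An example description',
)
print(f'File "{new_file.name}" uploaded to Box with file ID {new_file.id}')
```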
To upload a file from a readable stream, call
folder.upload_stream(file_stream, file_name, file_description=None, preflight_check=False, preflight_expected_size=0)
with the stream and a name for the file. This method returns a File
object representing the
newly-uploaded file.
file_name = 'file.pdf'
stream = open('/path/to/file.pdf', 'rb')
folder_id = '22222'
new_file = client.folder(folder_id).upload_stream(stream, file_name)
print(f'File "{new_file.name}" uploaded to Box with file ID {new_file.id}')
For large files or in cases where the network connection is less reliable, you may want to upload the file in parts. This allows a single part to fail without aborting the entire upload, and failed parts can then be retried.
Since the box-python-sdk 3.11.0 release, the SDK by default uses the upload URLs returned in the response
when creating a new upload session. This ensures your content is always uploaded to the closest Box data center and
can significantly improve upload speed. You can disable this feature and always use the base upload URL by
setting the use_upload_session_urls
flag to False
when creating the upload session.
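A sketch of disabling upload session URLs, assuming an authenticated client and an example file path:

```python
# Force the uploader to always use the base upload URL
chunked_uploader = client.folder('0').get_chunked_uploader(
    file_path='/path/to/file.txt',
    use_upload_session_urls=False,
)
uploaded_file = chunked_uploader.start()
```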
Since the box-python-sdk 3.7.0 release, the automatic uploader uses multiple threads, which significantly speeds up the upload process.
By default, the automatic chunked uploader will use 5 threads.
You can change this number by setting API.CHUNK_UPLOAD_THREADS
to a new number.
from boxsdk.config import API
API.CHUNK_UPLOAD_THREADS = 6
The SDK provides a method of automatically handling a chunked upload. First get a folder you want to upload the file to.
Then call folder.get_chunked_uploader(file_path, rename_file=False, use_upload_session_urls=True)
to retrieve a ChunkedUploader
object. Setting use_upload_session_urls
to True
initializes an uploader that uses the URLs returned by the Create Upload Session
endpoint response, unless a custom
API.UPLOAD_URL was set in the config. Setting use_upload_session_urls
to False
initializes an uploader that always uses the base upload URL.
Calling the method chunked_upload.start()
will kick off the chunked upload process and return the File
object that was uploaded.
# uploads large file to a root folder
chunked_uploader = client.folder('0').get_chunked_uploader(file_path='/path/to/file.txt', file_name='new_name.txt')
uploaded_file = chunked_uploader.start()
print(f'File "{uploaded_file.name}" uploaded to Box with file ID {uploaded_file.id}')
You can also upload a file stream by first creating an UploadSession
. This can be done by calling the
folder.create_upload_session(file_size, file_name=None, use_upload_session_urls=True)
method. The use_upload_session_urls
flag determines whether the upload session should use the URLs returned by
the Create Upload Session
endpoint or always use the base upload URL. Then you can call the
upload_session.get_chunked_uploader_for_stream(content_stream, file_size)
method.
test_file_path = '/path/to/large_file.mp4'
with open(test_file_path, 'rb') as content_stream:
    total_size = os.stat(test_file_path).st_size
    upload_session = client.folder('0').create_upload_session(file_size=total_size, file_name='large_file.mp4')
    chunked_uploader = upload_session.get_chunked_uploader_for_stream(content_stream=content_stream, file_size=total_size)
    uploaded_file = chunked_uploader.start()
print(f'File "{uploaded_file.name}" uploaded to Box with file ID {uploaded_file.id}')
To upload a new file version for a large file, first get a file you want to replace.
Then call file.get_chunked_uploader(file_path, rename_file=False, use_upload_session_urls=True)
to retrieve a ChunkedUploader
object. Calling the method chunked_upload.start()
will kick off the chunked upload process and return the updated File.
# uploads new large file version
chunked_uploader = client.file('existing_big_file_id').get_chunked_uploader(file_path='/path/to/file')
uploaded_file = chunked_uploader.start()
print(f'File "{uploaded_file.name}" uploaded to Box with file ID {uploaded_file.id}')
# the uploaded_file.id will be the same as 'existing_big_file_id'
To check if a file can be uploaded with a given name to a specific folder, call
folder.preflight_check(size, name)
. If the check does not pass, this method will raise an exception
with details on why it did not pass.
from boxsdk.exception import BoxAPIException

file_name = 'large_file.mp4'
test_file_path = '/path/to/large_file.mp4'
total_size = os.stat(test_file_path).st_size
destination_folder_id = '0'
try:
    client.folder(destination_folder_id).preflight_check(size=total_size, name=file_name)
except BoxAPIException as e:
    print(f'File {file_name} cannot be uploaded to folder with id: {destination_folder_id}. Reason: {e.message}')
Sometimes an upload can be interrupted. To resume uploading where you last left off, simply call the
chunked_uploader.resume()
method. This will return the File object that was uploaded.
chunked_uploader = client.file('12345').get_chunked_uploader('/path/to/file')
try:
    uploaded_file = chunked_uploader.start()
except Exception:
    uploaded_file = chunked_uploader.resume()
print(f'File "{uploaded_file.name}" uploaded to Box with file ID {uploaded_file.id}')
To abort a running upload, which cancels all currently uploading chunks and aborts the upload session, call the method
chunked_uploader.abort()
.
from boxsdk.exception import BoxNetworkException

test_file_path = '/path/to/large_file.mp4'
chunked_uploader = client.file('existing_big_file_id').get_chunked_uploader(file_path=test_file_path)
try:
    uploaded_file = chunked_uploader.start()
except BoxNetworkException:
    chunked_uploader.abort()
For more complicated upload scenarios, such as those being coordinated across multiple processes or when an unrecoverable error occurs with the automatic uploader, the endpoints for chunked upload operations are also exposed directly.
For example, this is roughly how a chunked upload is done manually:
import hashlib
import os
test_file_path = '/path/to/large_file.mp4'
total_size = os.stat(test_file_path).st_size
sha1 = hashlib.sha1()
content_stream = open(test_file_path, 'rb')
upload_session = client.folder(folder_id='11111').create_upload_session(file_size=total_size, file_name='test_file_name.mp4')
part_array = []
for part_num in range(upload_session.total_parts):
    copied_length = 0
    chunk = b''
    while copied_length < upload_session.part_size:
        bytes_read = content_stream.read(upload_session.part_size - copied_length)
        if bytes_read is None:
            # stream returns None when no bytes are ready currently, but there are
            # potentially more bytes in the stream to be read
            continue
        if len(bytes_read) == 0:
            # stream is exhausted
            break
        chunk += bytes_read
        copied_length += len(bytes_read)
    uploaded_part = upload_session.upload_part_bytes(chunk, part_num * upload_session.part_size, total_size)
    part_array.append(uploaded_part)
    sha1.update(chunk)
content_sha1 = sha1.digest()
uploaded_file = upload_session.commit(content_sha1=content_sha1, parts=part_array)
print(f'File ID: {uploaded_file.id} and File Name: {uploaded_file.name}')
The individual endpoint methods are detailed below:
To create an upload session for uploading a new version of a large file, call
file.create_upload_session(file_size, file_name=None, use_upload_session_urls=True)
with the size of the file to be uploaded. You can optionally specify a new file_name
to rename the file on upload.
The use_upload_session_urls
flag determines whether the upload session should use the URLs returned by
the Create Upload Session
endpoint or always use the base upload URL. This method returns an
UploadSession
object representing the created upload session.
file_size = 26000000
upload_session = client.file('11111').create_upload_session(file_size)
print(f'Created upload session {upload_session.id} with chunk size of {upload_session.part_size} bytes')
To create an upload session for uploading a new large file, call
folder.create_upload_session(file_size, file_name, use_upload_session_urls=True)
with
the size and filename of the file to be uploaded. The use_upload_session_urls
flag determines whether the upload
session should use the URLs returned by the Create Upload Session
endpoint or always use the base upload URL.
This method returns an UploadSession
object representing the created upload session.
file_size = 26000000
file_name = 'test_file.pdf'
upload_session = client.folder('22222').create_upload_session(file_size, file_name)
print(f'Created upload session {upload_session.id} with chunk size of {upload_session.part_size} bytes')
To upload a part of the file to this session, call
upload_session.upload_part_bytes(part_bytes, offset, total_size, part_content_sha1=None)
with
the bytes
to be uploaded, the byte offset within the file (which should be a multiple of the upload session
part_size
), and the total size of the file being uploaded. This method returns a dict
for the part record; these
records should be kept for the commit operation.
Note: The number of bytes uploaded for each part must be exactly
upload_session.part_size
, except for the last part (which just includes however many bytes are left in the file).
upload_session = client.upload_session('11493C07ED3EABB6E59874D3A1EF3581')
offset = upload_session.part_size * 3
total_size = 26000000
part_bytes = b'abcdefgh'
part = upload_session.upload_part_bytes(part_bytes, offset, total_size)
print(f'Successfully uploaded part ID {part["part_id"]}')
After uploading all parts of the file, commit the upload session to Box by calling
upload_session.commit(content_sha1, parts=None, file_attributes=None, etag=None)
with the SHA1 hash of the
entire file. For best consistency guarantees, you should also pass an Iterable
of the parts dict
s via the parts
parameter; otherwise, the list of parts will be retrieved from the API. You may also pass a dict
of file_attributes
to set on the new file.
import hashlib
sha1 = hashlib.sha1()
# sha1 should have been updated with all the bytes of the file
file_attributes = {
    'description': 'A file uploaded via Chunked Upload',
}
upload_session = client.upload_session('11493C07ED3EABB6E59874D3A1EF3581')
uploaded_file = upload_session.commit(sha1.digest(), file_attributes=file_attributes)
print(f'Successfully uploaded file {uploaded_file.id} with description {uploaded_file.description}')
To abort a chunked upload and lose all uploaded file parts, call upload_session.abort()
. This method returns
True
to indicate that the deletion succeeded.
client.upload_session('11493C07ED3EABB6E59874D3A1EF3581').abort()
print('Upload was successfully canceled')
To return the list of parts uploaded so far, call upload_session.get_parts(limit=None, offset=None)
.
This method returns a BoxObjectCollection
that allows you to iterate over the part dict
s in the collection.
parts = client.upload_session('11493C07ED3EABB6E59874D3A1EF3581').get_parts()
for part in parts:
    print(f'Part {part["part_id"]} at offset {part["offset"]} has already been uploaded')
To move a file from one folder into another, call file.move(parent_folder, name=None)
with the destination
folder to move the file into. You can optionally provide a name
parameter to automatically rename the file in case
of a name conflict in the destination folder. This method returns the updated File
object in the new
folder.
file_id = '11111'
destination_folder_id = '44444'
file_to_move = client.file(file_id)
destination_folder = client.folder(destination_folder_id)
moved_file = file_to_move.move(parent_folder=destination_folder)
print(f'File "{moved_file.name}" has been moved into folder "{moved_file.parent.name}"')
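If a file with the same name may already exist in the destination, a sketch of passing the name parameter to rename on conflict (the new name is illustrative; an authenticated client is assumed):

```python
moved_file = client.file('11111').move(
    parent_folder=client.folder('44444'),
    name='document (moved).pdf',  # used to resolve a name conflict
)
```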
A file can be copied to a new folder by calling file.copy(*, parent_folder, name=None, file_version=None, **_kwargs)
with the destination folder and an optional new name for the file in case there is a name conflict in the destination
folder. This method returns a File
object representing the copy of the file in the destination folder.
file_id = '11111'
destination_folder_id = '44444'
file_to_copy = client.file(file_id)
destination_folder = client.folder(destination_folder_id)
file_copy = file_to_copy.copy(parent_folder=destination_folder)
print(f'File "{file_copy.name}" has been copied into folder "{file_copy.parent.name}"')
A file can be renamed by calling file.rename(name)
. This method returns the updated
File
object with a new name. Remember to also include the file extension along with the new name.
file = client.file(file_id='11111')
renamed_file = file.rename("new-name.pdf")
print(f'File was renamed to "{renamed_file.name}"')
Calling the file.delete()
method will delete the file. Depending on enterprise settings, this will either move
the file to the user's trash or permanently delete the file. This method returns True
to signify that the deletion
was successful.
client.file(file_id='11111').delete()
Previous versions of a file can be retrieved with the
file.get_previous_versions(limit=None, offset=None, fields=None)
method. This method returns
a BoxObjectCollection
that can iterate over the FileVersion
objects
in the collection.
file_id = '11111'
file_versions = client.file(file_id).get_previous_versions()
for version in file_versions:
    print(f'File version {version.id} was created at {version.created_at}')
New versions of a file can be uploaded in one of two ways: by providing a path to a file on disk, or via a readable stream containing the file contents.
To upload a new file version from a path on disk, call the
file.update_contents(file_path, etag=None, preflight_check=False, preflight_expected_size=0)
method. This method returns a File
object representing the updated file.
file_id = '11111'
file_path = '/path/to/file.pdf'
updated_file = client.file(file_id).update_contents(file_path)
print(f'File "{updated_file.name}" has been updated')
To upload a file version from a readable stream, call
file.update_contents_with_stream(file_stream, etag=None, preflight_check=False, preflight_expected_size=0)
with the stream. This method returns a File
object representing the
newly-uploaded file.
file_id = '11111'
stream = open('/path/to/file.pdf', 'rb')
updated_file = client.file(file_id).update_contents_with_stream(stream)
print(f'File "{updated_file.name}" has been updated')
A previous version of a file can be promoted to become the current version by calling the
file.promote_version(file_version)
method with the FileVersion
to promote. This creates a
copy of the old file version and puts it on the top of the versions stack. This method returns the new
FileVersion
object.
file_id = '11111'
file_version_id = '12345'
version_to_promote = client.file_version(file_version_id)
new_version = client.file(file_id).promote_version(version_to_promote)
print(f'Version {file_version_id} promoted; new version {new_version.id} created')
A version of a file can be deleted and moved to the trash by calling
file.delete_version(file_version, etag=None)
with the FileVersion
to delete.
file_id = '11111'
version_id = '12345'
version_to_delete = client.file_version(version_id)
client.file(file_id).delete_version(version_to_delete)
A locked file cannot be modified by any other user until it is unlocked. This is useful if you want to "check out" a file while you're working on it, to ensure that other collaborators do not make changes while your changes are in progress.
To lock a file, call file.lock(prevent_download=False, expire_time=None)
. You can optionally prevent other
users from downloading the file while it is locked by passing True
for the prevent_download
parameter. You can also
set an expiration time for the lock, which will automatically release the lock at the specified time. The expiration
time is formatted as an RFC3339 datetime.
This method returns the updated File
object.
file_id = '11111'
updated_file = client.file(file_id).lock(expire_time='2020-01-01T00:00:00-08:00')
print(f'File "{updated_file.name}" has been locked!')
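A sketch of locking while also preventing downloads, under the same assumptions as the example above:

```python
file_id = '11111'
# Prevent other users from downloading the file while it is locked
updated_file = client.file(file_id).lock(prevent_download=True)
print(f'File "{updated_file.name}" is locked and cannot be downloaded')
```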
A locked file can be unlocked by calling file.unlock()
. This method returns the updated
File
object.
file_id = '11111'
updated_file = client.file(file_id).unlock()
print(f'File "{updated_file.name}" has been unlocked!')
A shared link for a file can be generated by calling
file.get_shared_link_download_url(access=None, etag=None, unshared_at=None, allow_preview=None, password=None, vanity_name=None)
.
This method returns a unicode
string containing the shared link URL.
file_id = '11111'
url = client.file(file_id).get_shared_link_download_url(access='collaborators', vanity_name="my-unique-vanity-name")
print(f'The file shared link download URL is: {url}')
To find a file given a shared link, use the
client.get_shared_item
method.
file = client.get_shared_item('https://app.box.com/s/gjasdasjhasd', password='letmein')
A shared link for a file can be generated or updated by calling
file.get_shared_link(*, access=None, etag=None, unshared_at=None, allow_download=None, allow_preview=None, allow_edit=None, password=None, vanity_name=None, **kwargs)
.
This method returns a unicode
string containing the shared link URL.
file_id = '11111'
url = client.file(file_id).get_shared_link(access='open', allow_download=True, allow_edit=True)
print(f'The file shared link URL is: {url}')
To check for an existing shared link on a file, simply access the
file.shared_link
property. This returns a unicode
string containing the shared link URL.
file_id = '11111'
shared_link = client.file(file_id).get().shared_link
url = shared_link['url']
A shared link for a file can be removed by calling file.remove_shared_link(*, etag=None, **kwargs)
.
file_id = '11111'
client.file(file_id).remove_shared_link()
A file embed URL can be generated by calling file.get_embed_url()
. This method returns a unicode
string containing a URL suitable for embedding in an <iframe>
to embed a file viewer in a web page.
file_id = '11111'
embed_url = client.file(file_id).get_embed_url()
print(f'<iframe src="{embed_url}"></iframe>')
To get the preview representations of a file, call the
file.get_representation_info(rep_hints=None)
method with the
representation hints to fetch — if no hints are provided, limited information about all available
representations will be returned. This method returns a list
of dict
s containing the information about the
requested representations.
Note that this method only provides information about a set of available representations; your application will need to handle checking the status of the representations and downloading them via the provided content URL template.
file_id = '11111'
rep_hints = '[pdf][extracted_text]'
representations = client.file(file_id).get_representation_info(rep_hints)
for rep in representations:
    print(f'{rep["representation"]} representation has status {rep["status"]["state"]}')
    print(f'Info URL for this representation is: {rep["info"]["url"]}')
    print(f'Content URL template is: {rep["content"]["url_template"]}')
A thumbnail for a file can be retrieved by calling
file.get_thumbnail(extension='png', min_width=None, min_height=None, max_width=None, max_height=None)
.
This method returns the bytes
of the thumbnail image.
file_id = '11111'
thumbnail = client.file(file_id).get_thumbnail(extension='jpg')
A thumbnail for a file can now be retrieved by calling file.get_thumbnail_representation(dimensions, extension='png')
. This method returns the bytes
of the thumbnail image. You must pass in a dimension that is valid for the extension you pass in for this file. To find valid dimensions, first make a call to file.get_representation_info(rep_hints=None)
. This will return a dict
of all available representations with their extensions and dimensions. More details can be found in the Box developer documentation.
file_id = '11111'
thumbnail = client.file(file_id).get_thumbnail_representation('92x92', extension='jpg')
To set metadata on a file, call file.metadata(scope='global', template='properties')
to specify the scope and template key of the metadata template to attach. Then, call the metadata.set(data)
method with the key/value pairs to attach. This method returns a dict
containing the applied metadata instance.
Note: This method will unconditionally apply the provided metadata, overwriting the existing metadata for the keys provided.
To specifically create or update metadata, see the create()
or update()
methods.
metadata = {
    'foo': 'bar',
}
applied_metadata = client.file(file_id='11111').metadata(scope='enterprise', template='testtemplate').set(metadata)
print(f'Set metadata in instance ID {applied_metadata["$id"]}')
Metadata can be added to a file either as free-form key/value pairs or from an existing template. To add metadata to
a file, first call file.metadata(scope='global', template='properties')
to specify the scope and
template key of the metadata template to attach (or use the default values to attach free-form keys and values). Then,
call metadata.create(data)
with the key/value pairs to attach. This method can only be used to
attach a given metadata template to the file for the first time, and returns a dict
containing the applied metadata
instance.
Note: This method will only succeed if the provided metadata template is not currently applied to the file, otherwise it will fail with a Conflict error.
metadata = {
    'foo': 'bar',
    'baz': 'quux',
}
applied_metadata = client.file(file_id='11111').metadata().create(metadata)
print(f'Applied metadata in instance ID {applied_metadata["$id"]}')
Updating metadata values is performed via a series of discrete operations, which are applied atomically against the
existing file metadata. First, specify which metadata will be updated by calling
file.metadata(scope='global', template='properties')
. Then, start an update sequence by calling
metadata.start_update()
and add update operations to the returned
MetadataUpdate
. Finally, perform the update by calling
metadata.update(metadata_update)
. This final method returns a dict
of the updated metadata
instance.
Note: This method will only succeed if the provided metadata template has already been applied to the file; if the file does not
have existing metadata, this method will fail with a Not Found error. This is useful if you know the file will already have metadata applied,
since it will save an API call compared to set()
.
file_obj = client.file(file_id='11111')
file_metadata = file_obj.metadata(scope='enterprise', template='myMetadata')
updates = file_metadata.start_update()
updates.add('/foo', 'bar')
updates.update('/baz', 'murp', old_value='quux') # Ensure the old value was "quux" before updating to "murp"
updated_metadata = file_metadata.update(updates)
print('Updated metadata on file!')
print(f'foo is now {updated_metadata["foo"]} and baz is now {updated_metadata["baz"]}')
To retrieve the metadata instance on a file for a specific metadata template, first call
file.metadata(scope='global', template='properties')
to specify the scope and template key of the
metadata template to retrieve, then call metadata.get()
to retrieve the metadata values attached to
the file. This method returns a dict
containing the applied metadata instance.
metadata = client.file(file_id='11111').metadata(scope='enterprise', template='myMetadata').get()
print(f'Got metadata instance {metadata["$id"]}')
To remove a metadata instance from a file, call
file.metadata(scope='global', template='properties')
to specify the scope and template key of the
metadata template to remove, then call metadata.delete()
to remove the metadata from the file.
This method returns True
to indicate that the removal succeeded.
client.file(file_id='11111').metadata(scope='enterprise', template='myMetadata').delete()
To retrieve all metadata attached to a file, call file.get_all_metadata()
. This method returns a
BoxObjectCollection
that can be used to iterate over the dict
s representing each metadata
instance attached to the
file.
file_metadata = client.file(file_id='11111').get_all_metadata()
for instance in file_metadata:
    if 'foo' in instance:
        print(f'Metadata instance {instance["id"]} has value "{instance["foo"]}" for foo')
It is important to note that this feature is only available if you have Governance.
To add classification to a File
, call file.set_classification(classification)
.
This method returns the classification type on the File
object. If a classification already exists then
this call will update the existing classification with the new ClassificationType
.
from boxsdk.object.item import ClassificationType
classification = client.file(file_id='11111').set_classification(ClassificationType.PUBLIC)
print(f'Classification Type is: {classification}')
The set method will always work no matter what state your File
is in. In cases where a
classification value already exists, set_classification(classification)
may make multiple
API calls.
Alternatively, if you already know a classification exists and you are simply updating it, you can use
update_classification(classification)
. This will ultimately save you one extra API call.
classification = client.file(file_id='11111').update_classification(ClassificationType.NONE)
print(f'Classification Type is: {classification}')
To retrieve a classification from a File
, call file.get_classification()
.
This method returns the classification type on the File
object.
classification = client.file(file_id='11111').get_classification()
print(f'Classification Type is: {classification}')
To remove a classification from a File
, call file.remove_classification()
.
client.file(file_id='11111').remove_classification()
To set a new retention policy expiration date for the file, call set_disposition_at(date_time)
.
This method will only work for files under retention with the permanently_delete
disposition action set. Remember that the
disposition date can't be shortened once set on a file.
For the date_time
parameter you can pass either a datetime string, e.g. '2035-03-04T10:14:24+14:00', or a
datetime.datetime
object.
import datetime, pytz
new_disposition_date = pytz.timezone('US/Alaska').localize(datetime.datetime(year=2029, month=3, day=4, hour=10, minute=14, second=24))
client.file(file_id='11111').set_disposition_at(date_time=new_disposition_date)
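Alternatively, the disposition date can be passed as a datetime string. A sketch under the same assumptions, i.e. an authenticated client and a file under retention:

```python
# Pass an RFC 3339 datetime string instead of a datetime.datetime object
client.file(file_id='11111').set_disposition_at(date_time='2035-03-04T10:14:24+14:00')
```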
If the datetime.datetime
object doesn't have a timezone specified, the local timezone will be used.
To get the current disposition date you can use the snippet below.
disposition_date = client.file(file_id='11111').get(fields=('disposition_at',)).disposition_at