The high-level client for the Camunda SaaS Admin Console API.

constructor(options?: { config?: DeepPartial<...> })

Create a new cluster. See the API Documentation for more details.
Add a member. See the API Documentation for more details.
Create a new connector secret. See the API Documentation for more details.
Delete a cluster. See the API Documentation for more details.
Delete a member from your organization. See the API Documentation for more details.
Delete a connector secret. See the API Documentation for more details.
Get the details of an API client. Takes the cluster UUID as a parameter. See the API Documentation for more details.
Get an array of the current API clients for this cluster. See the API Documentation for more details.
Retrieve the metadata for a cluster. See the API Documentation for more details.
Return an array of clusters. See the API Documentation for more details.
Retrieve the available parameters for cluster creation. See the API Documentation for more details.
Retrieve the connector secrets. See the API Documentation for more details.
Retrieve a list of members and pending invites for your organisation. See the API Documentation for more details.
Add one or more IPs to the whitelist for the cluster. See the API Documentation for more details.
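Taken together, these methods support scripted cluster administration. Below is a minimal sketch; the method names getClusters and getParameters are inferred from the descriptions above, so check the generated typings for the exact signatures.

import { Camunda8 } from '@camunda8/sdk'

// Requires CAMUNDA_CONSOLE_CLIENT_ID / CAMUNDA_CONSOLE_CLIENT_SECRET in the environment
const admin = new Camunda8().getAdminApiClient()

async function listClusters() {
  const clusters = await admin.getClusters() // assumed name: "Return an array of clusters"
  console.log(`Found ${clusters.length} cluster(s)`)
  const params = await admin.getParameters() // assumed name: "Retrieve the available parameters for cluster creation"
  console.log('Cluster creation parameters:', JSON.stringify(params, null, 2))
}

listClusters()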
Camunda SaaS needs an audience for a Modeler token request, and Self-Managed does not.
A single point of configuration for all Camunda Platform 8 clients.
This class is a facade for all the clients in the Camunda Platform 8 SDK.
import { Camunda8 } from '@camunda8/sdk'
const c8 = new Camunda8()
const zeebe = c8.getZeebeGrpcClient()
const operate = c8.getOperateApiClient()
const optimize = c8.getOptimizeApiClient()
const tasklist = c8.getTasklistApiClient()
const modeler = c8.getModelerApiClient()
const admin = c8.getAdminApiClient()
Configuration options:

The base url for the Admin Console API.
Credentials for Admin Console and Modeler API
Credentials for Admin Console and Modeler API
The audience parameter for an Admin Console OAuth token request. Defaults to api.cloud.camunda.io when connecting to Camunda SaaS, and '' otherwise
When using custom or self-signed certificates, provide the path to the certificate chain
When using custom or self-signed certificates, provide the path to the private key
In an environment using self-signed certificates, provide the path to the root certificate
Custom user agent
The base url for the Modeler API. Defaults to Camunda SaaS - https://modeler.cloud.camunda.io/api
The audience parameter for a Modeler OAuth token request. Defaults to api.cloud.camunda.io when connecting to Camunda SaaS, and '' otherwise
Set to true to disable OAuth completely
How soon in milliseconds before its expiration time a cached OAuth token should be considered expired. Defaults to 1000
The OAuth token exchange endpoint url
The base url for the Operate API
The audience parameter for an Operate OAuth token request. Defaults to operate.camunda.io
The base url for the Optimize API
The audience parameter for an Optimize OAuth token request. Defaults to optimize.camunda.io
Control TLS for Zeebe Grpc. Defaults to true. Set to false when using an unsecured gateway
The base url for the Tasklist API
The audience parameter for a Tasklist OAuth token request. Defaults to tasklist.camunda.io
The tenant id when multi-tenancy is enabled
The directory to cache OAuth tokens on-disk. Defaults to $HOME/.camunda
Set to true to disable disk caching of OAuth tokens and use memory caching only
Optional scope parameter for OAuth (needed by some OIDC)
The audience parameter for a Zeebe OAuth token request. Defaults to zeebe.camunda.io
The address for the Zeebe Gateway. Defaults to localhost:26500
This is the client ID for the client credentials
This is the client secret for the client credentials
This channel argument controls the maximum number of pings that can be sent when there is no other data (data frame or header frame) to be sent. gRPC Core will not continue sending pings if we run over the limit. Setting it to 0 allows sending pings without sending data.
Minimum allowed time between a server receiving successive ping frames without sending any data frame. Int valued, milliseconds. Default: 90000.
Defaults to 90000.
The time between the first and second connection attempts, in ms. Defaults to 1000.
This channel argument, if set to 1 (0: false; 1: true), allows keepalive pings to be sent even if there are no calls in flight. Defaults to 1.
After waiting for a duration of this time, if the keepalive ping sender does not receive the ping ack, it will close the transport. Int valued, milliseconds. Defaults to 120000.
After a duration of this time the client/server pings its peer to see if the transport is still alive. Int valued, milliseconds. Defaults to 360000.
The maximum time between subsequent connection attempts, in ms. Defaults to 10000.
The minimum time between subsequent connection attempts, in ms. Default is 1000ms, but this can cause an SSL handshake failure. This causes an intermittent failure in the Worker-LongPoll test when run against Camunda Cloud. Raised to 5000ms. See: https://github.com/grpc/grpc/issues/8382#issuecomment-259482949
Log level of Zeebe Client and Workers - 'DEBUG' | 'INFO' | 'NONE'. Defaults to 'INFO'
Zeebe client log output can be human-readable 'SIMPLE' or structured 'JSON'. Defaults to 'SIMPLE'
The gRPC channel can "jitter". This suppresses a connection error message if the channel comes back within this window in milliseconds. Defaults to 3000
Immediately connect to the Zeebe Gateway (issues a silent topology request). Defaults to false
This suppresses intermediate errors during initial connection negotiation. On Camunda SaaS this defaults to 6000, on Self-Managed to 0
Maximum number of retries of network operations before failing. Defaults to -1 (infinite retries)
When retrying failed network operations, retries back off to this maximum period. Defaults to 10s
Automate retrying operations that fail due to network conditions or broker backpressure. Defaults to true
How long in seconds the long poll Job Activation request is held open by a worker. Defaults to 60
After a long poll Job Activation request, this is the cool-off period in milliseconds before the worker requests more work. Defaults to 300
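As an illustration, a few of these options passed explicitly to the constructor (a sketch: the field names are the environment-variable names documented in the README section below, and the values are examples only):

import { Camunda8 } from '@camunda8/sdk'

// Explicit configuration is merged over the environment (see the README section below)
const c8 = new Camunda8({
  config: {
    ZEEBE_ADDRESS: 'localhost:26500',
    CAMUNDA_SECURE_CONNECTION: false, // unsecured local gateway
    CAMUNDA_OAUTH_DISABLED: true, // e.g. a minimal Zeebe broker with no identity layer
  },
})
const zeebe = c8.getZeebeGrpcClient()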
The high-level client for the Web Modeler API.

constructor(options?: { config?: DeepPartial<...> })

Adds a new collaborator to a project or modifies the permission level of an existing collaborator.
Note: Only users that are part of the authorized organization (see GET /api/v1/info) and logged in to Web Modeler at least once can be added to a project.
This endpoint creates a file.
To create a file, specify projectId and/or folderId:
When only folderId is given, the file will be created in that folder. The folder can be in any project of the same organization.
When projectId is given and folderId is either null or omitted altogether, the file will be created in the root of the project.
When projectId and folderId are both given, they must be consistent - i.e. the folder is in the project.
For connector templates, the following constraints apply:
The value of content.$schema will be replaced with https://unpkg.com/@camunda/zeebe-element-templates-json-schema/resources/schema.json and validated against it.
The value of name takes precedence over content.name. In case of mismatch, the latter will be adjusted to match the former automatically.
The value of content.id will be replaced with the file id generated by Web Modeler.
The value of content.version is managed by Web Modeler and will be updated automatically.
Note: The simplePath transforms any occurrences of slashes ("/") in file and folder names into an escape sequence consisting of a backslash followed by a slash ("\/"). This form of escaping facilitates the processing of path-like structures within file and folder names.
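For example, creating a BPMN file in the root of a project might look like this (a sketch: the createFile method name and payload fields are assumptions based on the description above; check the generated typings for the real signature):

const modeler = new Camunda8().getModelerApiClient()

async function createBpmnFile(projectId: string) {
  return modeler.createFile({
    projectId, // no folderId, so the file is created in the project root
    name: 'my-process.bpmn',
    fileType: 'bpmn', // assumed field name and value
    content: '<?xml version="1.0" encoding="UTF-8"?>...', // BPMN XML elided
  })
}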
Creates a new folder.
When only parentId is given, the folder will be created in that folder. The folder can be in any project of the same organization.
When projectId is given and parentId is either null or omitted altogether, the folder will be created in the root of the project.
When projectId and parentId are both given, they must be consistent - i.e. the parent folder is in the project.
Creates a new project. This project will be created without any collaborators, so it will not be visible in the UI by default. To assign collaborators, use addCollaborator().
Deletes a file.
Note: Deleting a file will also delete other resources attached to the file (comments, call activity/business rule task links, milestones and shares) which might have side-effects. Deletion of resources is recursive and cannot be undone.
Retrieves a file.
Note: The simplePath transforms any occurrences of slashes ("/") in file and folder names into an escape sequence consisting of a backslash followed by a slash ("\/"). This form of escaping facilitates the processing of path-like structures within file and folder names.
Does this throw if it is not found?
Returns a link to a visual comparison between two milestones where the milestone referenced by milestone1Id acts as a baseline to compare the milestone referenced by milestone2Id against.
Searches for collaborators.
filter specifies which fields should match. Only items that match the given fields will be returned.
sort specifies by which fields and direction (ASC/DESC) the result should be sorted.
page specifies the page number to return.
size specifies the number of items per page. The default value is 10.
Searches for files.
filter specifies which fields should match. Only items that match the given fields will be returned.
Note: Date fields need to be specified in a format compatible with java.time.ZonedDateTime; for example 2023-09-20T11:31:20.206801604Z.
You can use suffixes to match date ranges:

Modifier  Description
||/y      Within a year
||/M      Within a month
||/w      Within a week
||/d      Within a day
||/h      Within an hour
||/m      Within a minute
||/s      Within a second

sort specifies by which fields and direction (ASC/DESC) the result should be sorted.
page specifies the page number to return.
size specifies the number of items per page. The default value is 10.
Note: The simplePath transforms any occurrences of slashes ("/") in file and folder names into an escape sequence consisting of a backslash followed by a slash ("\/"). This form of escaping facilitates the processing of path-like structures within file and folder names.
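As an illustration, a paged file search using a date-range modifier might look like this (a sketch: the searchFiles method name and query shape are assumptions based on the description above; check the generated typings):

const modeler = new Camunda8().getModelerApiClient()

async function findRecentFiles() {
  // '||/d' matches within a day of the given instant (see the modifier table above)
  return modeler.searchFiles({
    filter: {
      updated: '2023-09-20T11:31:20.206801604Z||/d',
    },
    sort: [{ field: 'name', direction: 'ASC' }], // 'direction' is an assumed field name
    page: 0,
    size: 10,
  })
}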
Searches for milestones.
filter specifies which fields should match. Only items that match the given fields will be returned.
Note: Date fields need to be specified in a format compatible with java.time.ZonedDateTime; for example 2023-09-20T11:31:20.206801604Z.
You can use suffixes to match date ranges:

Modifier  Description
||/y      Within a year
||/M      Within a month
||/w      Within a week
||/d      Within a day
||/h      Within an hour
||/m      Within a minute
||/s      Within a second

sort specifies by which fields and direction (ASC/DESC) the result should be sorted.
page specifies the page number to return.
size specifies the number of items per page. The default value is 10.
Searches for projects.
filter specifies which fields should match. Only items that match the given fields will be returned.
Note: Date fields need to be specified in a format compatible with java.time.ZonedDateTime; for example 2023-09-20T11:31:20.206801604Z.
You can use suffixes to match date ranges:

Modifier  Description
||/y      Within a year
||/M      Within a month
||/w      Within a week
||/d      Within a day
||/h      Within an hour
||/m      Within a minute
||/s      Within a second

sort specifies by which fields and direction (ASC/DESC) the result should be sorted.
page specifies the page number to return.
size specifies the number of items per page. The default value is 10.
Updates the content, name, or location of a file, or all at the same time.
To move a file, specify projectId and/or folderId:
When only folderId is given, the file will be moved to that folder. The folder can be in another project of the same organization.
When projectId is given and folderId is either null or omitted altogether, the file will be moved to the root of the project.
When projectId and folderId are both given, they must be consistent - i.e. the new parent folder is in the new project.
The field revision holds the current revision of the file. This is used for detecting and preventing concurrent modifications.
For connector templates, the following constraints apply:
The value of content.$schema is not updatable.
The value of content.name can only be changed via name.
The value of content.id is not updatable.
The value of content.version is managed by Web Modeler and will be updated automatically.
Note: The simplePath transforms any occurrences of slashes ("/") in file and folder names into an escape sequence consisting of a backslash followed by a slash ("\/"). This form of escaping facilitates the processing of path-like structures within file and folder names.
Updates the name or location of a folder, or both at the same time.
To move a folder, specify projectId and/or parentId:
When only parentId is given, the folder will be moved into that parent folder. The folder must remain in the same organization.
When projectId is given and parentId is either null or omitted altogether, the folder will be moved to the root of the project.
When projectId and parentId are both given, they must be consistent - i.e. the new parent folder is in the new project.
The high-level client for Operate.
const operate = new OperateApiClient()
operate.searchProcessInstances({
  filter: {
    state: "ACTIVE"
  },
  size: 50
}).then(instances => {
  console.log(instances)
})
constructor(options?: { config?: DeepPartial<...> })

const operate = new OperateApiClient()

Delete a specific process instance by key.
const operate = new OperateApiClient()
await operate.deleteProcessInstance(2251799819847322)

Retrieve the metadata for a specific process definition, by key.
const operate = new OperateApiClient()
const definition = await operate.getProcessDefinition(2251799817140074);

Retrieve a specific process instance by id.

const operate = new OperateApiClient()
const instance = await operate.getProcessInstance(2251799819847322)

Get the statistics for a process instance, grouped by flow nodes.
Return a variable identified by its variable key.
Retrieve the variables for a Process Instance, given its key.
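A sketch of reading the variables of a process instance (the method name getVariablesforProcess is an assumption based on the description above; check the generated typings for the exact name and signature):

const operate = new OperateApiClient()

async function logInstanceVariables(processInstanceKey: number) {
  const variables = await operate.getVariablesforProcess(processInstanceKey) // assumed name
  console.log(JSON.stringify(variables, null, 2))
}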
Search and retrieve incidents.

const operate = new OperateApiClient()
const query: Query<Incident> = {
  filter: {
    state: "ACTIVE"
  },
  size: 50,
  sort: [
    {
      field: "creationTime",
      order: "ASC"
    }
  ]
}
const incidents = await operate.searchIncidents(query)
Search and retrieve process definitions.

const query: Query<ProcessDefinition> = {
  filter: {},
  size: 50,
  sort: [
    {
      field: "bpmnProcessId",
      order: "ASC",
    },
  ],
};
const operate = new OperateApiClient()
const defs = await operate.searchProcessDefinitions(query);
Search and retrieve process instances.

const operate = new OperateApiClient()
const query: Query<ProcessInstance> = {
  filter: {
    processVersion: 1
  },
  size: 50,
  sort: [
    {
      field: "bpmnProcessId",
      order: "ASC"
    }
  ]
}
const instances = await operate.searchProcessInstances(query)
console.log(`Found ${instances.total} instances`)
Extend the LosslessDto class with your own Dto classes to enable lossless parsing of int64 values.
Decorate fields with @Int64String or @BigIntValue to specify how int64 JSON numbers should be parsed.

class MyDto extends LosslessDto {
  @Int64String
  int64NumberField: string
  @BigIntValue
  bigintField: bigint
  @ChildDto(MyChildDto)
  childDtoField: MyChildDto
  normalField: string
  normalNumberField: number
}
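To see why this matters, compare plain JSON.parse with the lossless approach on a key that exceeds Number.MAX_SAFE_INTEGER (a self-contained sketch):

// int64 keys can exceed Number.MAX_SAFE_INTEGER (2^53 - 1), at which point
// plain JSON.parse silently rounds them
const raw = '{"processInstanceKey": 9007199254740993}'
console.log(JSON.parse(raw).processInstanceKey) // 9007199254740992 - precision lost
// Annotating the field with @Int64String in a LosslessDto surfaces the key
// as the exact string "9007199254740993" instead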
ProcessDefinition key is a string in the SDK, but it's an int64 number in the database.
The high-level API client for Optimize.
import * as fs from 'fs'

const optimize = new OptimizeApiClient()
async function main() {
  await optimize.enableSharing()
  const id = "8a7103a7-c086-48f8-b5b7-a7f83e864688"
  const res = await optimize.exportDashboardDefinitions([id])
  fs.writeFileSync('exported-dashboard.json', JSON.stringify(res, null, 2))
}
main()
constructor(options?: { config?: DeepPartial<...> })

const optimize = new OptimizeApiClient()

The ID of the report you wish to delete
The report deletion API allows you to delete reports by ID from Optimize.

const client = new OptimizeApiClient()
client.deleteReport("e6c5aaa1-6a18-44e7-8480-d562d511ba62")
This API allows users to disable the sharing functionality for all reports and dashboards in Optimize.
Note that this setting will be permanently persisted in memory and will take precedence over any other previous configurations (e.g. configuration files).
When sharing is disabled, previously shared URLs will no longer be accessible. Upon re-enabling sharing, the previously shared URLs will work once again under the same address as before. Calling this endpoint when sharing is already disabled will have no effect.

const client = new OptimizeApiClient()
client.disableSharing()

This API allows users to enable the sharing functionality for all reports and dashboards in Optimize.
Note that this setting will be permanently persisted in memory and will take precedence over any other previous configurations (e.g. configuration files).
If sharing had been previously enabled and then disabled, re-enabling sharing will allow users to access previously shared URLs under the same address as before. Calling this endpoint when sharing is already enabled will have no effect.

const client = new OptimizeApiClient()
client.enableSharing()
Array of dashboard ids
This API allows users to export dashboard definitions which can later be imported into another Optimize system.
Note that exporting a dashboard also exports all reports contained within the dashboard. The dashboards to be exported may be within a Collection or private entities; the API has access to both.
The obtained list of entity exports can be imported into other Optimize systems either using the dedicated import API or via the UI.

const client = new OptimizeApiClient()
const dashboardDefs = await client.exportDashboardDefinitions(["123", "456"])

Array of report IDs
This API allows users to export report definitions which can later be imported into another Optimize system. The reports to be exported may be within a collection or private entities; the API has access to both.
The obtained list of entity exports can be imported into other Optimize systems either using the dedicated import API or via the UI.

const client = new OptimizeApiClient()
const reportDefs = await client.exportReportDefinitions(["123", "456"])

The data export API allows users to export large amounts of data in a machine-readable format (JSON) from Optimize.

const client = new OptimizeApiClient()
const exporter = client.exportReportResultData("e6c5aaa1-6a18-44e7-8480-d562d511ba62")
const page1 = await exporter.next()
The ID of the collection for which to retrieve the dashboard IDs.
This API allows users to retrieve all dashboard IDs from a given collection.
The response contains a list of IDs of the dashboards existing in the collection with the given collection ID.

const client = new OptimizeApiClient()
const dashboardIds = await client.getDashboardIds(1234)

The purpose of the health-readiness REST API is to return information indicating whether Optimize is ready to be used.
const client = new OptimizeApiClient()
try {
  await client.getReadiness()
  console.log('Ready!')
} catch (e: any) {
  console.log('Error calling readiness point: ' + e.code)
}

The id of the collection
This API allows users to retrieve all report IDs from a given collection. The response contains a list of IDs of the reports existing in the collection with the given collection ID.

const client = new OptimizeApiClient()
const reports = await client.getReportIds(1234)
This API allows users to import entity definitions such as reports and dashboards into existing collections. These entity definitions may be obtained either using the report or dashboard export API or via the UI.

const entities = [
  {
    "id": "61ae2232-51e1-4c35-b72c-c7152ba264f9",
    "exportEntityType": "single_process_report",
    "name": "Number: Process instance duration",
    "sourceIndexVersion": 8,
    "collectionId": null,
    "data": {...}
  },
  {
    "id": "b0eb845-e8ed-4824-bd85-8cd69038f2f5",
    "exportEntityType": "dashboard",
    "name": "Dashboard 1",
    "sourceIndexVersion": 5,
    "reports": [
      {
        "id": "61ae2232-51e1-4c35-b72c-c7152ba264f9",
        ...
      }
    ],
    "availableFilters": [...],
    "collectionId": null
  }
]
const client = new OptimizeApiClient()
await client.importEntities(123, entities)
With the external variable ingestion API, variable data held in external systems can be ingested into Optimize directly, without the need for these variables to be present in your Camunda platform data. This can be useful when external business data, which is relevant for process analysis in Optimize, is to be associated with specific process instances.
Especially if this data changes over time, it is advisable to use this REST API to persist external variable updates to Optimize, as otherwise Optimize may not be aware of data changes in the external system.

const variables = [
  {
    "id": "7689fced-2639-4408-9de1-cf8f72769f43",
    "name": "address",
    "type": "string",
    "value": "Main Street 1",
    "processInstanceId": "c6393461-02bb-4f62-a4b7-f2f8d9bbbac1",
    "processDefinitionKey": "shippingProcess"
  },
  {
    "id": "993f4e73-7f6a-46a6-bd45-f4f8e3470ba1",
    "name": "amount",
    "type": "integer",
    "value": "500",
    "processInstanceId": "8282ed49-2243-44df-be5e-1bf893755d8f",
    "processDefinitionKey": "orderProcess"
  }
]
const client = new OptimizeApiClient()
client.ingestExternalVariable(variables)
With the variable labeling endpoint, variable labels can be added, updated, and deleted from Optimize.

const variableLabels = {
  "definitionKey": "bookrequest-1-tenant",
  "labels": [
    {
      "variableName": "bookAvailable",
      "variableType": "Boolean",
      "variableLabel": "book availability"
    },
    {
      "variableName": "person.name",
      "variableType": "String",
      "variableLabel": "first and last name"
    },
    {
      "variableName": "person.hobbies._listSize",
      "variableType": "Long",
      "variableLabel": "amount of hobbies"
    }
  ]
}
const client = new OptimizeApiClient()
await client.labelVariables(variableLabels)
The high-level client for the Tasklist REST API.

constructor(options?: { config?: DeepPartial<...> })

const tasklist = new TasklistApiClient()
const tasks = await tasklist.getTasks({ state: TaskState.CREATED })

Assign a task with taskId to an assignee or to the active user. Optional parameters: assignee?: string, allowOverrideAssignment?: boolean.
Status 400 - An error is returned when the task is not active (not in the CREATED state).
Status 400 - An error is returned when the task was already assigned, except when the JWT authentication token is used and allowOverrideAssignment = true.
Status 403 - An error is returned when the user doesn't have the permission to assign another user to this task.
Status 404 - An error is returned when the task with the taskId is not found.
Complete a task with taskId and an optional variables: JSONDoc argument.
Status 400 - An error is returned when the task is not active (not in the CREATED state).
Status 400 - An error is returned if the task was not claimed (assigned) before.
Status 400 - An error is returned if the task is not assigned to the current user.
Status 403 - User has no permission to access the task (Self-Managed only).
Status 404 - An error is returned when the task with the taskId is not found.
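A typical claim-then-complete flow might look like this (a sketch: the exact parameter shapes are assumptions based on the descriptions above):

const tasklist = new TasklistApiClient()

async function claimAndComplete(taskId: string) {
  await tasklist.assignTask({ taskId, allowOverrideAssignment: true })
  return tasklist.completeTask(taskId, { approved: true }) // variables are optional
}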
Return a task by id, or throw if not found. Will throw if no task of the given taskId exists.
Get a variable by id. See https://docs.camunda.io/docs/apis-clients/tasklist-api/queries/variable/. Throws 404 if no variable of the given id is found.
This method returns a list of task variables for the specified taskId and variableNames. If the variableNames parameter is empty, all variables associated with the task will be returned.
Status 404 - An error is returned when the task with the taskId is not found.
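For example (a sketch: the getVariables call shape is an assumption based on the description above):

const tasklist = new TasklistApiClient()
// An empty variableNames array returns all variables for the task
tasklist
  .getVariables({ taskId: '2251799813686510', variableNames: [] }) // hypothetical task id
  .then(variables => console.log(variables))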
Query Tasklist for a list of tasks. See the API documentation.
Status 400 - An error is returned when more than one search parameter among [searchAfter, searchAfterOrEqual, searchBefore, searchBeforeOrEqual] is present in the request.
const tasklist = new TasklistApiClient()
async function getTasks() {
  const res = await tasklist.searchTasks({
    state: TaskState.CREATED
  })
  console.log(res ? JSON.stringify(res, null, 2) : 'Nothing')
  return res
}
Unassign a task with taskId.
Status 400 - An error is returned when the task is not active (not in the CREATED state).
Status 400 - An error is returned if the task was not claimed (assigned) before.
Status 404 - An error is returned when the task with the taskId is not found.
A client for interacting with a Zeebe broker. With the connection credentials set in the environment, you can use a "zero-conf" constructor with no arguments.
const zbc = new ZeebeGrpcClient()
zbc.topology().then(info =>
  console.log(JSON.stringify(info, null, 2))
)
constructor(options?: { config?: DeepPartial<...> })
The configuration options are the same as those documented above for the Camunda8 constructor.
activateJobs allows you to manually activate jobs, effectively building a worker, rather than using the ZBWorker class.
const zbc = new ZeebeGrpcClient()
zbc.activateJobs({
  maxJobsToActivate: 5,
  requestTimeout: 6000,
  timeout: 5 * 60 * 1000,
  type: 'process-payment',
  worker: 'my-worker-uuid'
}).then(jobs =>
  jobs.forEach(job => {
    // business logic
    zbc.completeJob({
      jobKey: job.key,
      variables: {}
    })
  })
)
Broadcast a Signal.

const zbc = new ZeebeGrpcClient()
zbc.broadcastSignal({
  signalName: 'my-signal',
  variables: { reasonCode: 3 }
})

Cancel a process instance by process instance key.

const zbc = new ZeebeGrpcClient()
zbc.cancelProcessInstance(processInstanceId)
  .catch(
    (e: any) => console.log(`Error cancelling instance: ${e.message}`)
  )
Gracefully shut down all workers, draining existing tasks, and return when it is safe to exit. Takes an optional timeout: number.

const zbc = new ZeebeGrpcClient()
zbc.createWorker({
  taskType: 'some-task-type', // placeholder task type
  taskHandler: job => job.complete()
})
setTimeout(async () => {
  await zbc.close()
  console.log('All work completed.')
}, 5 * 60 * 1000) // 5 mins
Explicitly complete a job. This method is useful for manually constructing a worker.

const zbc = new ZeebeGrpcClient()
zbc.activateJobs({
  maxJobsToActivate: 5,
  requestTimeout: 6000,
  timeout: 5 * 60 * 1000,
  type: 'process-payment',
  worker: 'my-worker-uuid'
}).then(jobs =>
  jobs.forEach(job => {
    // business logic
    zbc.completeJob({
      jobKey: job.key,
      variables: {}
    })
  })
)
Create a new process instance. Asynchronously returns a process instance id.
const zbc = new ZeebeGrpcClient()
zbc.createProcessInstance({
  bpmnProcessId: 'onboarding-process',
  variables: {
    customerId: 'uuid-3455'
  },
  version: 5 // optional, will use latest by default
}).then(res => console.log(JSON.stringify(res, null, 2)))

zbc.createProcessInstance({
  bpmnProcessId: 'SkipFirstTask',
  variables: { id: random },
  startInstructions: [{ elementId: 'second_service_task' }]
}).then(res => (id = res.processInstanceKey))
Create a process instance, and return a Promise that returns the outcome of the process.

const zbc = new ZeebeGrpcClient()
zbc.createProcessInstanceWithResult({
  bpmnProcessId: 'order-process',
  variables: {
    customerId: 123,
    invoiceId: 567
  }
})
.then(console.log)
Create a worker that polls the gateway for jobs and executes a job handler when units of work are available.

const zbc = new ZB.ZeebeGrpcClient()
const zbWorker = zbc.createWorker({
  taskType: 'demo-service',
  taskHandler: myTaskHandler,
})

// A job handler must return one of job.complete, job.fail, job.error, or job.forward
// Note: unhandled exceptions in the job handler cause the library to call job.fail
async function myTaskHandler(job) {
  zbWorker.log('Task variables', job.variables)
  // Task worker business logic goes here
  const updateToBrokerVariables = {
    updatedProperty: 'newValue',
  }
  const res = await callExternalSystem(job.variables)
  if (res.code === 'SUCCESS') {
    return job.complete({
      ...updateToBrokerVariables,
      ...res.values
    })
  }
  if (res.code === 'BUSINESS_ERROR') {
    return job.error({
      code: res.errorCode,
      message: res.message
    })
  }
  if (res.code === 'ERROR') {
    return job.fail({
      errorMessage: res.message,
      retryBackOff: 2000
    })
  }
}
Delete a resource.
The key of the resource that should be deleted. This can either be the key of a process definition, the key of a decision requirements definition, or the key of a form.
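For example (a sketch: the deleteResource method name and resourceKey parameter are inferred from the description above; check the generated typings):

const zbc = new ZeebeGrpcClient()
zbc.deleteResource({ resourceKey: '2251799813686102' }) // hypothetical key
  .catch(e => console.log(`Error deleting resource: ${e.message}`))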
Deploys one or more resources (e.g. processes or decision models) to Zeebe.
Note that this is an atomic call, i.e. either all resources are deployed, or none of them are.
Errors: PERMISSION_DENIED:

import { join } from 'path'
const zbc = new ZeebeGrpcClient()
zbc.deployResource({ processFilename: join(process.cwd(), 'bpmn', 'onboarding.bpmn') })
zbc.deployResource({ decisionFilename: join(process.cwd(), 'dmn', 'approval.dmn') })
Evaluates a decision. The decision to evaluate can be specified either by using its unique key (as returned by DeployResource), or by using the decision ID. When using the decision ID, the latest deployed version of the decision is used.

const zbc = new ZeebeGrpcClient()
zbc.evaluateDecision({
  decisionId: 'my-decision',
  variables: { season: "Fall" }
}).then(res => console.log(JSON.stringify(res, null, 2)))
If this.retry is set true, the operation will be wrapped in a configurable retry on exceptions of gRPC error code 14 (Transient Network Failure). See: https://github.com/grpc/grpc/blob/master/doc/statuscodes.md
If this.retry is false, it will be executed with no retry, and the application should handle the exception.
Fail a job. This is useful if you are using the decoupled completion pattern or building your own worker.
For the retry count, the current count is available in the job metadata.

const zbc = new ZeebeGrpcClient()
zbc.failJob({
  jobKey: '345424343451',
  retries: 3,
  errorMessage: 'Could not get a response from the order invoicing API',
  retryBackOff: 30 * 1000 // optional, otherwise available for reactivation immediately
})
Return an array of task types contained in a BPMN file or array of BPMN files. This can be useful, for example, to discover the task types used in a set of process models.

const zbc = new ZeebeGrpcClient()
zbc.getServiceTypesFromBpmn(['bpmn/onboarding.bpmn', 'bpmn/process-sale.bpmn'])
  .then(tasktypes => console.log('The task types are:', tasktypes))
Modify a running process instance. This allows you to move the execution tokens and change the variables. Added in 8.1.
See the gRPC protocol documentation.

zbc.createProcessInstance({
  bpmnProcessId: 'SkipFirstTask',
  variables: {}
}).then(res =>
  zbc.modifyProcessInstance({
    processInstanceKey: res.processInstanceKey,
    activateInstructions: [{
      elementId: 'second_service_task',
      ancestorElementInstanceKey: "-1",
      variableInstructions: [{
        scopeId: '',
        variables: { second: 1 }
      }]
    }]
  })
)
Publish a message to the broker for correlation with a workflow instance. See this tutorial for a detailed description of message correlation.

const zbc = new ZeebeGrpcClient()
zbc.publishMessage({
  // Should match the "Message Name" in a BPMN Message Catch
  name: 'order_status',
  correlationKey: 'uuid-124-532-5432',
  variables: {
    event: 'PROCESSED'
  }
})
Publish a message to the broker for correlation with a workflow message start event.
For a message targeting a start event, the correlation key is not needed to target a specific running process instance. However, the hash of the correlationKey is used to determine the partition where this workflow will start. So we assign a random uuid to balance workflow instances created via start message across partitions.
We make the correlationKey optional, because the caller can specify a correlationKey + messageId to guarantee an idempotent message.
Multiple messages with the same correlationKey + messageId combination will only start a workflow once. See: https://github.com/zeebe-io/zeebe/issues/1012 and https://github.com/zeebe-io/zeebe/issues/1022

const zbc = new ZeebeGrpcClient()
zbc.publishStartMessage({
  name: 'Start_New_Onboarding_Flow',
  variables: {
    customerId: 'uuid-348-234-8908'
  }
})

// To do the same in an idempotent fashion - note: only idempotent during the lifetime of the created instance.
zbc.publishStartMessage({
  name: 'Start_New_Onboarding_Flow',
  messageId: 'uuid-348-234-8908', // use customerId to make process idempotent per customer
  variables: {
    customerId: 'uuid-348-234-8908'
  }
})
Resolve an incident by incident key.

type JSONObject = { [key: string]: string | number | boolean | JSONObject }
const zbc = new ZeebeGrpcClient()
async function updateAndResolveIncident({
  processInstanceKey,
  jobKey,
  incidentKey,
  variables
}: {
  processInstanceKey: string
  jobKey: string
  incidentKey: string
  variables: JSONObject
}) {
  await zbc.setVariables({
    elementInstanceKey: processInstanceKey,
    variables
  })
  await zbc.updateJobRetries({
    jobKey,
    retries: 1
  })
  return zbc.resolveIncident({
    incidentKey
  })
}
This function takes a gRPC operation that returns a Promise as a function, and invokes it. If the operation throws gRPC error 14, this function will continue to retry it until it succeeds or retries are exhausted.
Directly modify the variables of a process instance. This can be used with resolveIncident to update the process and resolve an incident.
type JSONObject = { [key: string]: string | number | boolean | JSONObject }
const zbc = new ZeebeGrpcClient()
async function updateAndResolveIncident({
  incidentKey,
  processInstanceKey,
  jobKey,
  variableUpdate
}: {
  incidentKey: string
  processInstanceKey: string
  jobKey: string
  variableUpdate: JSONObject
}) {
  await zbc.setVariables({
    elementInstanceKey: processInstanceKey,
    variables: variableUpdate
  })
  await zbc.updateJobRetries({
    jobKey,
    retries: 1
  })
  return zbc.resolveIncident({
    incidentKey
  })
}
Fail a job by throwing a business error (i.e. non-technical) that occurs while processing a job.
The error is handled in the workflow by an error catch event.
If there is no error catch event with the specified errorCode, then an incident will be raised instead.
This method is useful when building a worker, for example for the decoupled completion pattern.
type JSONObject = { [key: string]: string | number | boolean | JSONObject }
interface errorResult {
  resultType: 'ERROR'
  errorCode: string
  errorMessage: string
}
interface successResult {
  resultType: 'SUCCESS'
  variableUpdate: JSONObject
}
type Result = errorResult | successResult
const zbc = new ZeebeGrpcClient()

// This could be a listener on a return queue from an external system
async function handleJob(jobKey: string, result: Result) {
  if (result.resultType === 'ERROR') {
    const { errorMessage, errorCode } = result
    zbc.throwError({
      jobKey,
      errorCode,
      errorMessage
    })
  } else {
    zbc.completeJob({
      jobKey,
      variables: result.variableUpdate
    })
  }
}
Return the broker cluster topology.

const zbc = new ZeebeGrpcClient()
zbc.topology().then(res => console.log(JSON.stringify(res, null, 2)))
Update the number of retries for a Job. This is useful if a job has zero remaining retries and fails, raising an incident.

type JSONObject = { [key: string]: string | number | boolean | JSONObject }
const zbc = new ZeebeGrpcClient()
async function updateAndResolveIncident({
  incidentKey,
  processInstanceKey,
  jobKey,
  variableUpdate
}: {
  incidentKey: string
  processInstanceKey: string
  jobKey: string
  variableUpdate: JSONObject
}) {
  await zbc.setVariables({
    elementInstanceKey: processInstanceKey,
    variables: variableUpdate
  })
  await zbc.updateJobRetries({
    jobKey,
    retries: 1
  })
  return zbc.resolveIncident({
    incidentKey
  })
}
This is the official Camunda 8 JavaScript SDK. It is written in TypeScript and runs on NodeJS (why not in a web browser?).
Install the SDK as a dependency:

npm i @camunda8/sdk

In this release, the functionality of the Camunda Platform 8 is exposed via dedicated clients for the component APIs.
import { Camunda8 } from '@camunda8/sdk'
const c8 = new Camunda8()
const zeebe = c8.getZeebeGrpcClient()
const operate = c8.getOperateApiClient()
const optimize = c8.getOptimizeApiClient()
const tasklist = c8.getTasklistApiClient()
const modeler = c8.getModelerApiClient()
const admin = c8.getAdminApiClient()
The configuration for the SDK can be done by any combination of environment variables and explicit configuration passed to the Camunda8 constructor.
Any configuration passed in to the Camunda8 constructor is merged over any configuration in the environment.
The configuration object fields and the environment variables have exactly the same names. See the file src/lib/Configuration.ts for a complete list of configuration options.
Entity keys in Camunda 8 are stored and represented as int64 numbers. The range of int64 extends to numbers that cannot be represented by the JavaScript number type. To deal with this, int64 keys are serialised by the SDK to the JavaScript string type. See this issue for more details.
Some number values - for example: "total returned results" - may be specified as int64 in the API specifications. Although these numbers will usually not contain unsafe values, they are always serialised to string.
For int64 values whose type is not known ahead of time, such as job variables, you can pass an annotated Dto object to decode them reliably. If no Dto is specified, the default behaviour of the SDK is to serialise all numbers to JavaScript number, and if a number value is detected at runtime that cannot be accurately stored as number, to throw an Exception.
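For example, a worker can declare the shape of its variables with an annotated Dto. This is a sketch: the inputVariableDto option name is an assumption, and the decorators may be exported from a different path than the SDK root.

import { Camunda8, Int64String, LosslessDto } from '@camunda8/sdk'

class OrderVariables extends LosslessDto {
  @Int64String
  orderId!: string // an int64 value, surfaced losslessly as a string
}

const zeebe = new Camunda8().getZeebeGrpcClient()
zeebe.createWorker({
  taskType: 'process-order',
  inputVariableDto: OrderVariables, // assumed option name for supplying the Dto
  taskHandler: job => {
    console.log(job.variables.orderId) // string, full precision preserved
    return job.complete()
  },
})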
Calls to APIs are authorized using a token that is obtained via a client id/secret pair exchange and then passed as an authorisation header on API calls. The SDK handles this transparently for you.
If your Camunda 8 Platform is secured using token exchange, you will need to provide the client id and secret to the SDK.
To disable OAuth, set the environment variable CAMUNDA_OAUTH_DISABLED to true. You can use this when, for example, running against a minimal Zeebe broker in a development environment. With this environment variable set, the SDK will inject a NullAuthProvider that does nothing.
To get a token for use with the application APIs you need to provide the following configuration fields at a minimum, either via the Camunda8 constructor or in environment variables:

ZEEBE_ADDRESS
ZEEBE_CLIENT_ID
ZEEBE_CLIENT_SECRET
CAMUNDA_OAUTH_URL
To get a token for the Camunda SaaS Admin Console API or the Camunda SaaS Modeler API you need to set the following:

CAMUNDA_CONSOLE_CLIENT_ID
CAMUNDA_CONSOLE_CLIENT_SECRET
OAuth tokens are cached in-memory and on-disk. The disk cache is useful, for example, to prevent token endpoint saturation when restarting or rolling over workers: they can all hit the cache instead of requesting new tokens.
You can turn off disk caching by setting CAMUNDA_TOKEN_DISK_CACHE_DISABLE to true. This will cache tokens in-memory only.
By default the token cache directory is $HOME/.camunda. You can specify a different directory by providing a full file path value for CAMUNDA_TOKEN_CACHE_DIR.
Here is an example of specifying a different cache directory via the constructor:
-import { Camunda8 } from '@camunda8/sdk'
const c8 = new Camunda8({
config: {
CAMUNDA_TOKEN_CACHE_DIR: '/tmp/cache',
},
})
-
If the cache directory does not exist, the SDK will attempt to create it (recursively). If the SDK is unable to create it, or the directory exists but is not writeable by your application, then the SDK will throw an exception.
Token refresh timing relative to expiration is controlled by the CAMUNDA_OAUTH_TOKEN_REFRESH_THRESHOLD_MS value. By default this is 1000ms. Tokens are renewed this amount of time before they expire.
If you experience intermittent 401: Unauthorized errors, this may not be sufficient time to refresh the token before it expires in your infrastructure. Increase this value to force a token to be refreshed before it expires.
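Here is a sketch of raising the threshold via the constructor (the 30-second value is illustrative):
import { Camunda8 } from '@camunda8/sdk'

const c8 = new Camunda8({
  config: {
    // Renew cached tokens 30 seconds before they expire (default: 1000ms)
    CAMUNDA_OAUTH_TOKEN_REFRESH_THRESHOLD_MS: 30_000,
  },
})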
This is the complete environment configuration needed to run against the Dockerised Self-Managed Stack in the docker subdirectory:
# Self-Managed
export ZEEBE_ADDRESS='localhost:26500'
export ZEEBE_CLIENT_ID='zeebe'
export ZEEBE_CLIENT_SECRET='zecret'
export CAMUNDA_OAUTH_URL='http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token'
export CAMUNDA_TASKLIST_BASE_URL='http://localhost:8082'
export CAMUNDA_OPERATE_BASE_URL='http://localhost:8081'
export CAMUNDA_OPTIMIZE_BASE_URL='http://localhost:8083'
export CAMUNDA_MODELER_BASE_URL='http://localhost:8070/api'
# Turn off the tenant ID, which may have been set by Multi-tenant tests
# You can set this in a constructor config, or in the environment if running multi-tenant
export CAMUNDA_TENANT_ID=''
# TLS for gRPC is on by default. If the Zeebe broker is not secured by TLS, turn it off
export CAMUNDA_SECURE_CONNECTION=false
-
If you are using an OIDC provider that requires a scope parameter to be passed with the token request, set the following variable:

CAMUNDA_TOKEN_SCOPE
Here is an example of doing this via the constructor, rather than via the environment:
import { Camunda8 } from '@camunda8/sdk'
const c8 = new Camunda8({
  config: {
    ZEEBE_ADDRESS: 'localhost:26500',
    ZEEBE_CLIENT_ID: 'zeebe',
    ZEEBE_CLIENT_SECRET: 'zecret',
    CAMUNDA_OAUTH_URL: 'http://localhost:18080/auth/realms/camunda-platform/protocol/openid-connect/token',
    CAMUNDA_TASKLIST_BASE_URL: 'http://localhost:8082',
    CAMUNDA_OPERATE_BASE_URL: 'http://localhost:8081',
    CAMUNDA_OPTIMIZE_BASE_URL: 'http://localhost:8083',
    CAMUNDA_MODELER_BASE_URL: 'http://localhost:8070/api',
    CAMUNDA_TENANT_ID: '', // We can override values in the env by passing an empty string value
    CAMUNDA_SECURE_CONNECTION: false,
  },
})
-
Here is a complete configuration example for connecting to Camunda SaaS:
-export ZEEBE_ADDRESS='5c34c0a7-7f29-4424-8414-125615f7a9b9.syd-1.zeebe.camunda.io:443'
export ZEEBE_CLIENT_ID='yvvURO9TmBnP3zx4Xd8Ho6apgeiZTjn6'
export ZEEBE_CLIENT_SECRET='iJJu-SHgUtuJTTAMnMLdcb8WGF8s2mHfXhXutEwe8eSbLXn98vUpoxtuLk5uG0en'
# export CAMUNDA_CREDENTIALS_SCOPES='Zeebe,Tasklist,Operate,Optimize' # What APIs these client creds are authorised for
export CAMUNDA_TASKLIST_BASE_URL='https://syd-1.tasklist.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OPTIMIZE_BASE_URL='https://syd-1.optimize.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OPERATE_BASE_URL='https://syd-1.operate.camunda.io/5c34c0a7-7f29-4424-8414-125615f7a9b9'
export CAMUNDA_OAUTH_URL='https://login.cloud.camunda.io/oauth/token'
# This is on by default, but we include it in case it got turned off for local tests
export CAMUNDA_SECURE_CONNECTION=true
# Admin Console and Modeler API Client
export CAMUNDA_CONSOLE_CLIENT_ID='e-JdgKfJy9hHSXzi'
export CAMUNDA_CONSOLE_CLIENT_SECRET='DT8Pe-ANC6e3Je_ptLyzZvBNS0aFwaIV'
export CAMUNDA_CONSOLE_BASE_URL='https://api.cloud.camunda.io'
export CAMUNDA_CONSOLE_OAUTH_AUDIENCE='api.cloud.camunda.io'
-
The SDK uses the debug library. To enable debugging output, set a value for the DEBUG environment variable. The value is a comma-separated list of debugging namespaces. The SDK has the following namespaces:
Value | Component
---|---
camunda:adminconsole | Admin Console API
camunda:modeler | Modeler API
camunda:operate | Operate API
camunda:optimize | Optimize API
camunda:tasklist | Tasklist API
camunda:oauth | OAuth Token Exchange
camunda:grpc | Zeebe gRPC channel
camunda:worker | Zeebe Worker
camunda:zeebeclient | Zeebe Client
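For example, to trace the OAuth token exchange and worker activity (any combination of the namespaces above works):
export DEBUG=camunda:oauth,camunda:worker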
The variable payload in a Zeebe Worker task handler is available as an object job.variables. By default, this is of type any.
The ZBClient.createWorker() method accepts an inputVariableDto to control the parsing of number values and provide design-time type information. Passing an inputVariableDto class to a Zeebe worker is optional. If a Dto class is passed to the Zeebe Worker, it is used for two purposes:
- Providing design-time type information on the job.variables object.
- Handling int64 values that cannot be represented accurately by the JavaScript number type. With a Dto, you can specify that specific JSON number fields be parsed losslessly to a string or BigInt.
With no Dto specified, there is no design-time type safety. At run-time, all JSON numbers are converted to the JavaScript number type. If a variable field has a number value that cannot be safely represented using the JavaScript number type (a value greater than 2^53 - 1), then an exception is thrown.
To provide a Dto, extend the LosslessDto class, like so:
class MyVariableDto extends LosslessDto {
name!: string
maybeAge?: number
@Int64String
veryBigNumber!: string
@BigIntValue
veryBigInteger!: bigint
}
-
In this case, veryBigNumber is an int64 value. It is transferred as a JSON number on the wire, but the parser will parse it into a string so that no loss of precision occurs. Similarly, veryBigInteger is a very large integer value. In this case, we direct the parser to parse this variable field as a bigint.
You can nest Dtos like this:
-class MyLargerDto extends LosslessDto {
id!: string
@ChildDto(MyVariableDto)
entry!: MyVariableDto
}
-
The Zeebe worker receives custom headers as job.customHeaders. The ZBClient.createWorker() method accepts a customHeadersDto to control the parsing of number values in custom headers and provide design-time type information. This follows the same strategy as for job variables, described above.
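Here is a sketch of wiring the Dtos into a worker (assuming the MyVariableDto class above; the task type is a placeholder):
import { Camunda8 } from '@camunda8/sdk'

const zeebe = new Camunda8().getZeebeGrpcClient()

zeebe.createWorker({
  taskType: 'my-task', // placeholder task type
  inputVariableDto: MyVariableDto, // job.variables is now typed and parsed losslessly
  taskHandler: async (job) => {
    console.log(job.variables.veryBigNumber) // an int64 value, parsed to string
    return job.complete()
  },
})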
The object interface for CloudEvents 1.0.

data (optional): The event payload. This specification does not place any restriction on the type of this information. It is encoded into a media format which is specified by the datacontenttype attribute (e.g. application/json), and adheres to the dataschema format when those respective attributes are present.

data_base64 (optional): The event payload encoded as base64 data. This is used when the data is in binary form.

datacontenttype (optional): Content type of the data value. This attribute enables data to carry any type of content, whereby format and encoding might differ from that of the chosen event format. For example, an event rendered using the JSON envelope format might carry an XML payload in data, and the consumer is informed by this attribute being set to "application/xml". The rules for how data content is rendered for different datacontenttype values are defined in the event format specifications; for example, the JSON event format defines the relationship in section 3.1.

dataschema (optional): Identifies the schema that data adheres to. Incompatible changes to the schema SHOULD be reflected by a different URI. See Versioning of Attributes in the Primer for more information. If present, MUST be a non-empty URI.

id (required): Identifies the event. Producers MUST ensure that source + id is unique for each distinct event. If a duplicate event is re-sent (e.g. due to a network error) it MAY have the same id. Consumers MAY assume that Events with identical source and id are duplicates. Non-empty string, unique within the producer. Examples: an event counter maintained by the producer; a UUID.

source (required): Identifies the context in which an event happened. Often this will include information such as the type of the event source, the organization publishing the event or the process that produced the event. The exact syntax and semantics behind the data encoded in the URI is defined by the event producer. Producers MUST ensure that source + id is unique for each distinct event. An application MAY assign a unique source to each distinct producer, which makes it easy to produce unique IDs since no other producer will have the same source. The application MAY use UUIDs, URNs, DNS authorities or an application-specific scheme to create unique source identifiers. A source MAY include more than one producer. In that case the producers MUST collaborate to ensure that source + id is unique for each distinct event. Non-empty URI-reference.

specversion (required): The version of the CloudEvents specification which the event uses. This enables the interpretation of the context. Compliant event producers MUST use a value of 1.0 when referring to this version of the specification. MUST be a non-empty string.

subject (optional): This describes the subject of the event in the context of the event producer (identified by source). In publish-subscribe scenarios, a subscriber will typically subscribe to events emitted by a source, but the source identifier alone might not be sufficient as a qualifier for any specific event if the source context has internal sub-structure. Identifying the subject of the event in context metadata (opposed to only in the data payload) is particularly helpful in generic subscription filtering scenarios where middleware is unable to interpret the data content. In the above example, the subscriber might only be interested in blobs with names ending with '.jpg' or '.jpeg' and the subject attribute allows for constructing a simple and efficient string-suffix filter for that subset of events. If present, MUST be a non-empty string. Examples: "https://example.com/storage/tenant/container", "mynewfile.jpg".

time (optional): Timestamp of when the occurrence happened. If the time of the occurrence cannot be determined then this attribute MAY be set to some other time (such as the current time) by the CloudEvents producer, however all producers for the same source MUST be consistent in this respect. In other words, either they all use the actual time of the occurrence or they all use the same algorithm to determine the value used. Example: "2020-08-08T14:48:09.769Z".

type (required): This attribute contains a value describing the type of event related to the originating occurrence. Often this attribute is used for routing, observability, policy enforcement, etc. The format of this is producer defined and might include information such as the version of the type - see Versioning of Attributes in the Primer for more information. MUST be a non-empty string. SHOULD be prefixed with a reverse-DNS name. The prefixed domain dictates the organization which defines the semantics of this event type. Examples: com.github.pull.create, com.example.object.delete.v2.
Task search request fields of note:

includeVariables: An array used to specify a list of variable names that should be included in the response when querying tasks. This field allows users to selectively retrieve specific variables associated with the tasks returned in the search results.

searchAfter / searchAfterOrEqual / searchBefore / searchBeforeOrEqual (optional): Used to return a paginated result. Array of values that should be copied from sortValues of one of the tasks from the current search results page. It enables the API to return a page of tasks that directly follow, directly follow or are equal to, directly precede, or directly precede or are equal to the task identified by the provided values, with respect to the sorting order.

taskVariables: An array of filter clauses specifying the variables to filter for. If defined, the query returns only tasks to which all clauses apply. However, it's important to note that this filtering mechanism is designed to work exclusively with truncated variables. This means variables of a larger size are not compatible with this filter, and attempts to use them may result in inaccurate or incomplete query results.
the key of the ancestor scope the element instance should be created in; -set to -1 to create the new element instance within an existing element -instance of the flow scope
-the id of the element that should be activated
-instructions describing which variables should be created
-Generated using TypeDoc
Request object sent to the broker to request jobs for the worker.

fetchVariable (optional): A list of variables to fetch as the job variables; if empty, all visible variables at the time of activation for the scope of the job will be returned
maxJobsToActivate: The maximum number of jobs to activate by this request
requestTimeout: The request will be completed when at least one job is activated or after the requestTimeout. If the requestTimeout = 0, the request will be completed after a default configured timeout in the broker. To immediately complete the request when no job is activated, set the requestTimeout to a negative value
tenantIds (optional): a list of IDs of tenants for which to activate jobs
timeout: The duration the broker allows for jobs activated by this call to complete before timing them out and releasing them for retry on the broker. The broker checks timeouts every 30 seconds, so the effective timeout is at most timeout + 29s
type: The job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="payment-service" />)
worker: The name of the worker activating the jobs, mostly used for logging purposes
Fields of an activated job:

bpmnProcessId (readonly): The bpmn process ID of the job process definition
customHeaders (readonly): A set of custom headers defined during modelling
deadline (readonly): When the job will time out on the broker if it is not completed by this worker, in epoch milliseconds
elementId (readonly): The associated task element ID
elementInstanceKey (readonly): The unique key identifying the associated task, unique within the scope of the process instance
key (readonly): The key, a unique identifier for the job
processDefinitionKey (readonly): The key of the job process definition
processDefinitionVersion (readonly): The version of the job process definition
processInstanceKey (readonly): The job's process instance key
retries (readonly)
tenantId (readonly): the id of the tenant that owns the job
type (readonly): The job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="payment-service" />)
variables (readonly): All visible variables in the task scope, computed at activation time, constrained by any fetchVariables value in the ActivateJobsRequest
worker (readonly): The name of the worker that activated this job
Generated using TypeDoc
The name of the signal
-Optional
tenantthe id of the tenant that owns the signal.
-the signal variables as a JSON document; to be valid, the root of the document must be an -object, e.g. { "a": "foo" }. [ "foo" ] would not be valid.
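A sketch of sending this request, assuming the gRPC client exposes it as broadcastSignal and is available as zeebe (see the facade example above); the signal name is a placeholder:
await zeebe.broadcastSignal({
  signalName: 'my-signal', // placeholder signal name
  variables: { a: 'foo' }, // the root of the document must be an object
})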
cacheDir (optional)
cacheOnDisk (optional)
clusterId: Just the UUID of the cluster
clusterRegion (optional): Defaults to bru-2
bpmnProcessId: the BPMN process ID of the process definition
tenantId (optional): The tenantId for a multi-tenant enabled cluster
variables: JSON document that will instantiate the variables for the root variable scope of the process instance
version (optional): the version of the process; if not specified it will use the latest version
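A sketch of creating an instance with this request (assuming the gRPC client as zeebe; the process id and variables are placeholders):
const res = await zeebe.createProcessInstance({
  bpmnProcessId: 'order-process', // placeholder BPMN process id
  variables: { orderId: '1234' }, // instantiates the root variable scope
})
console.log(res.processInstanceKey) // entity keys arrive serialised as strings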
-Generated using TypeDoc
bpmnProcessId: the BPMN process ID of the process definition
startInstructions (optional): List of start instructions. If empty (default) the process instance will start at the start event. If non-empty the process instance will apply start instructions after it has been created
tenantId (optional): The tenantId for a multi-tenant enabled cluster
variables: JSON document that will instantiate the variables for the root variable scope of the process instance
version (optional): the version of the process; if not specified it will use the latest version
-Generated using TypeDoc
bpmnProcessId (readonly): The BPMN process ID of the process definition
processDefinitionKey (readonly): The unique key identifying the process definition (e.g. returned from a process in the DeployResourceResponse message)
process… (readonly): Stringified JSON document that will instantiate the variables for the root variable scope of the process instance; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
tenantId (readonly): the tenant identifier of the created process instance
version (readonly): The version of the process; set to -1 to use the latest version
-Generated using TypeDoc
bpmnProcessId: the BPMN process ID of the process definition
fetchVariables (optional): list of names of variables to be included in CreateProcessInstanceWithResultResponse.variables. If empty, all visible variables in the root scope will be returned.
requestTimeout (optional): timeout in milliseconds. The request will be closed if the process is not completed before the requestTimeout. If requestTimeout = 0, uses the generic requestTimeout configured in the gateway.
tenantId (optional): The tenantId for a multi-tenant enabled cluster
variables: JSON document that will instantiate the variables for the root variable scope of the process instance
version (optional): the version of the process; if not specified it will use the latest version
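A sketch of the awaiting variant (assuming the gRPC client as zeebe; fetchVariables and requestTimeout follow the fields just described, and 'invoiceId' is a hypothetical variable):
const result = await zeebe.createProcessInstanceWithResult({
  bpmnProcessId: 'order-process', // placeholder BPMN process id
  variables: { orderId: '1234' },
  requestTimeout: 60_000, // close the request if not completed within 60s
  fetchVariables: ['invoiceId'], // omit to return all visible root-scope variables
})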
-Generated using TypeDoc
bpmnProcessId (readonly): the BPMN process ID of the process definition which was used to create the process instance
processDefinitionKey (readonly): the key of the process definition which was used to create the process instance
processInstanceKey (readonly): the unique identifier of the created process instance; to be used wherever a request needs a process instance key (e.g. CancelProcessInstanceRequest)
tenantId (readonly): the tenant identifier of the process definition
variables (readonly): consisting of all visible variables to the root scope
version (readonly): the version of the process definition which was used to create the process instance
the assigned decision key, which acts as a unique identifier for this -decision
-the assigned key of the decision requirements graph that this decision is -part of
-the dmn decision ID, as parsed during deployment; together with the -versions forms a unique identifier for a specific decision
-the dmn name of the decision, as parsed during deployment
-the dmn ID of the decision requirements graph that this decision is part -of, as parsed during deployment
-the tenant id of the deployed decision
-the assigned decision version
the assigned decision requirements key, which acts as a unique identifier -for this decision requirements
-the dmn decision requirements ID, as parsed during deployment; together -with the versions forms a unique identifier for a specific decision
-the dmn name of the decision requirements, as parsed during deployment
-the resource name (see: Resource.name) from which this decision -requirements was parsed
-the tenant id of the deployed decision requirements
-the assigned decision requirements version
the ID of the decision which was evaluated
-the unique key identifying the decision which was evaluated (e.g. returned -from a decision in the DeployResourceResponse message)
-the name of the decision which was evaluated
-JSON document that will instantiate the result of the decision which was -evaluated; it will be a JSON object, as the result output will be mapped -in a key-value fashion, e.g. { "a": 1 }.
-the ID of the decision requirements graph that the decision which was -evaluated is part of.
-the unique key identifying the decision requirements graph that the -decision which was evaluated is part of.
-the version of the decision which was evaluated
-a list of decisions that were evaluated within the requested decision evaluation
-an optional string indicating the ID of the decision which -failed during evaluation
-an optional message describing why the decision which was evaluated failed
-Optional
tenantthe tenant identifier of the decision
-Generated using TypeDoc
the ID of the decision which was evaluated
-the unique key identifying the decision which was evaluated (e.g. returned -from a decision in the DeployResourceResponse message)
-the name of the decision which was evaluated
-JSON document that will instantiate the result of the decision which was -evaluated; it will be a JSON object, as the result output will be mapped -in a key-value fashion, e.g. { "a": 1 }.
-the type of the decision which was evaluated
-the version of the decision which was evaluated
-the decision inputs that were evaluated within this decision evaluation
-the decision rules that matched within this decision evaluation
-the tenant identifier of the evaluated decision
formId (readonly): the form ID, as parsed during deployment; together with the versions forms a unique identifier for a specific form
formKey (readonly): the assigned key, which acts as a unique identifier for this form
resourceName (readonly): the resource name
tenantId (readonly): the tenant id of the deployed form
version (readonly): the assigned form version
Job acknowledgement methods:

- Cancel the workflow.
- complete(updatedVariables?): Complete the job with a success, optionally passing in a state update to merge with the process variables on the broker.
- error(...): Report a business error (i.e. non-technical) that occurs while processing a job. The error is handled in the process by an error catch event. If there is no error catch event with the specified errorCode then an incident will be raised instead.
- fail(...): Fail the job with an informative message as to the cause. Optionally, pass in a value for the remaining retries. If no value is passed for retries then the current retry count is decremented. Pass in 0 for retries to raise an incident in Operate. Optionally, specify a retry backoff period in milliseconds. Default is 0ms (immediate retry) if not specified.
- forward(): Mark this job as forwarded to another system for completion. No action is taken by the broker. This method releases worker capacity to handle another job.
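A sketch of a worker handler exercising these methods (chargeCard is a hypothetical business function; the object form of fail follows the failure configuration fields described below):
zeebe.createWorker({
  taskType: 'payment-service',
  taskHandler: async (job) => {
    try {
      const ok = await chargeCard(job.variables) // hypothetical business call
      return ok
        ? job.complete({ charged: true }) // success, merging a state update
        : job.error('CARD_DECLINED', 'The card was declined') // business error
    } catch (e) {
      // Technical failure: decrement retries, back off 2000ms before the retry
      return job.fail({ errorMessage: (e as Error).message, retryBackOff: 2000 })
    }
  },
})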
-Generated using TypeDoc
retries (optional): If not specified, the library will decrement the "current remaining retries" count by one
retryBackOff (optional): Optional backoff for subsequent retries, in milliseconds. If not specified, it is zero.
the evaluated decision outputs
-the id of the matched rule
-the index of the matched rule
-Generated using TypeDoc
activateInstructions (optional): instructions describing which elements should be activated in which scopes, and which variables should be created
processInstanceKey: the key of the process instance that should be modified
terminateInstructions (optional): instructions describing which elements should be terminated
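A sketch of a modification request (assuming the gRPC client exposes this as modifyProcessInstance; keys and element ids are placeholders):
await zeebe.modifyProcessInstance({
  processInstanceKey: '2251799813685251', // placeholder instance key
  activateInstructions: [
    {
      elementId: 'TaskB', // placeholder element to activate
      ancestorElementInstanceKey: '-1', // -1: create within an existing flow-scope instance
      variableInstructions: [],
    },
  ],
})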
-Generated using TypeDoc
audience: OAuth Audience
authServerUrl: OAuth Endpoint URL
cacheDir (optional): Override default token cache directory
cacheOnDisk (optional): Cache token in memory and on filesystem?
customRootCert (optional): Custom TLS certificate for OAuth
the health of this partition
-the role of the broker for this partition
future extensions might include
-Generated using TypeDoc
bpmnProcessId (readonly): the bpmn process ID, as parsed during deployment; together with the version forms a unique identifier for a specific process definition
processDefinitionKey (readonly): the assigned key, which acts as a unique identifier for this process
resourceName (readonly): the resource name (see: ProcessRequestObject.name) from which this process was parsed
tenantId: the tenant identifier of the deployed process
version (readonly): the assigned process version
correlationKey: The value to match with the field specified as "Subscription Correlation Key" in BPMN
messageId (optional): Unique ID for this message
name: Should match the "Message Name" in a BPMN Message Catch
tenantId (optional): the tenantId of the message
timeToLive (optional): The number of seconds for the message to buffer on the broker, awaiting correlation. Omit or set to zero for no buffering.
variables (optional)
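A sketch of publishing a message for correlation (assuming the gRPC client as zeebe; the values are placeholders):
await zeebe.publishMessage({
  name: 'order-payment-received', // the "Message Name" in the BPMN Message Catch
  correlationKey: '1234', // matched against the Subscription Correlation Key value
  timeToLive: 60, // buffer on the broker, awaiting correlation
  variables: { paid: true },
})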
Generated using TypeDoc
correlationKey (optional)
messageId (optional): Unique ID for this message
name: Should match the "Message Name" in a BPMN Message Catch
tenantId (optional): the tenantId for the message
timeToLive: The number of seconds for the message to buffer on the broker, awaiting correlation. Omit or set to zero for no buffering.
elementInstanceKey (readonly)
local: if true, the variables will be merged strictly into the local scope (as indicated by elementInstanceKey); this means the variables are not propagated to upper scopes. For example, let's say we have two scopes, '1' and '2', each having effective variables as: 1 => { "foo" : 2 }, and 2 => { "bar" : 1 }. If we send an update request with elementInstanceKey = 2, variables { "foo" : 5 }, and local is true, then scope 1 will be unchanged, and scope 2 will now be { "bar" : 1, "foo" : 5 }. If local was false, however, then scope 1 would be { "foo": 5 }, and scope 2 would be { "bar" : 1 }.
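A sketch of a set-variables request using the local flag (assuming the gRPC client as zeebe; the key is a placeholder):
await zeebe.setVariables({
  elementInstanceKey: '2251799813685249', // placeholder scope key
  variables: { foo: 5 },
  local: true, // merge strictly into this scope; do not propagate upward
})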
Generated using TypeDoc
a list of variables to fetch as the job variables; if empty, all visible variables at -the time of activation for the scope of the job will be returned
-a list of identifiers of tenants for which to stream jobs
-a job returned after this call will not be activated by another call until the -timeout (in ms) has been reached
-the job type, as defined in the BPMN process (e.g. <zeebe:taskDefinition type="payment-service" />)
-the name of the worker activating the jobs, mostly used for logging purposes
errorCode: the error code that will be matched with an error catch event
errorMessage (optional): an optional error message that provides additional context
jobKey: the unique job identifier, as obtained when activating the job
variables (optional): JSON document that will instantiate the variables at the local scope of the error catch event that catches the thrown error; it must be a JSON object, as variables will be mapped in a key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
-Generated using TypeDoc
brokers (readonly)
clusterSize (readonly)
gatewayVersion (readonly)
partitionsCount (readonly)
replicationFactor (readonly)
the id of the element in which scope the variables should be created; -leave empty to create the variables in the global scope of the process instance
-JSON document that will instantiate the variables for the root variable scope of the -process instance; it must be a JSON object, as variables will be mapped in a -key-value fashion. e.g. { "a": 1, "b": 2 } will create two variables, named "a" and -"b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a -valid argument, as the root of the JSON document is an array and not an object.
-Generated using TypeDoc
Receives a JSON-stringified ZBLogMessage
Worker options:

connectionTolerance (optional): If your gRPC connection jitters, this is the window before the connectionError
customHeadersDto (optional): Provide an annotated Dto class to control the serialisation of JSON numbers. This allows you to serialise numbers as strings or BigInts to avoid precision loss. This also gives you design-time type safety.
debug (optional): Enable debug tracking
failProcessOnException (optional): If set to true and a handler throws an unhandled exception, the process will be failed. Defaults to false.
fetchVariable (optional): Constrain the payload to these keys only.
id (optional): A custom id for the worker. If none is supplied, a UUID will be generated.
inputVariableDto (optional): Provide an annotated Dto class to control the serialisation of JSON numbers. This allows you to serialise numbers as strings or BigInts to avoid precision loss. This also gives you design-time type safety.
loglevel (optional): A log level if you want it to differ from the ZBClient
longPoll (optional): A custom longpoll timeout. By default long polling is every 30 seconds.
maxJobsToActivate (optional): Max concurrent tasks for this worker. Default 32.
onConnectionError (optional): This handler is called when the worker cannot connect to the broker, or loses its connection.
onReady (optional): This handler is called when the worker (re)establishes its connection to the broker
pollInterval (optional): Poll Interval in ms. Default 100.
stdout (optional): An implementation of the ZBCustomLogger interface for logging
taskHandler: A job handler - this must return a job action - e.g. job.complete(), job.error() - in all code paths.
taskType: The task type that this worker will request jobs for.
timeout (optional): Max seconds to allow before a job given to this worker times out. Default: 30s. The broker checks deadline timeouts every 30 seconds, so the effective timeout can be up to 29 seconds longer than specified.
The base url for the Admin Console API.
-Credentials for Admin Console and Modeler API
-Credentials for Admin Console and Modeler API
-The audience parameter for an Admin Console OAuth token request. Defaults to api.cloud.camunda.io when connecting to Camunda SaaS, and '' otherwise
-When using custom or self-signed certificates, provide the path to the certificate chain
-When using custom or self-signed certificates, provide the path to the private key
-In an environment using self-signed certificates, provide the path to the root certificate
-Custom user agent
-The base url for the Modeler API. Defaults to Camunda Saas - https://modeler.cloud.camunda.io/api
-The audience parameter for a Modeler OAuth token request. Defaults to api.cloud.camunda.io when connecting to Camunda SaaS, and '' otherwise
-Set to true to disable OAuth completely
-How soon in milliseconds before its expiration time a cached OAuth token should be considered expired. Defaults to 1000
-The OAuth token exchange endpoint url
-The base url for the Operate API
-The audience parameter for an Operate OAuth token request. Defaults to operate.camunda.io
-The base url for the Optimize API
-The audience parameter for an Optimize OAuth token request. Defaults to optimize.camunda.io
-Control TLS for Zeebe Grpc. Defaults to true. Set to false when using an unsecured gateway
-The base url for the Tasklist API
-The audience parameter for a Tasklist OAuth token request. Defaults to tasklist.camunda.io
-The tenant id when multi-tenancy is enabled
-The directory to cache OAuth tokens on-disk. Defaults to $HOME/.camunda
-Set to true to disable disk caching of OAuth tokens and use memory caching only
-Optional scope parameter for OAuth (needed by some OIDC)
-The audience parameter for a Zeebe OAuth token request. Defaults to zeebe.camunda.io
-The address for the Zeebe Gateway. Defaults to localhost:26500
-This is the client ID for the client credentials
-This is the client secret for the client credentials
-This channel argument controls the maximum number -of pings that can be sent when there is no other -data (data frame or header frame) to be sent. -GRPC Core will not continue sending pings if we -run over the limit. Setting it to 0 allows sending -pings without sending data.
-Minimum allowed time between a server receiving -successive ping frames without sending any data -frame. Int valued, milliseconds. Default: 90000
-Defaults to 90000.
-The time between the first and second connection attempts, -in ms. Defaults to 1000.
-This channel argument if set to 1 -(0 : false; 1 : true), allows keepalive pings -to be sent even if there are no calls in flight. -Defaults to 1.
-After waiting for a duration of this time, if the keepalive ping sender does not receive the ping ack, it will close the -transport. Int valued, milliseconds. Defaults to 120000.
-After a duration of this time the client/server pings its peer to see if the transport is still alive. -Int valued, milliseconds. Defaults to 360000.
-The maximum time between subsequent connection attempts, -in ms. Defaults to 10000.
-The minimum time between subsequent connection attempts, -in ms. Default is 1000ms, but this can cause an SSL Handshake failure. -This causes an intermittent failure in the Worker-LongPoll test when run -against Camunda Cloud. -Raised to 5000ms. -See: https://github.com/grpc/grpc/issues/8382#issuecomment-259482949
-Log level of Zeebe Client and Workers - 'DEBUG' | 'INFO' | 'NONE'. Defaults to 'INFO'
-Zeebe client log output can be human-readable 'SIMPLE' or structured 'JSON'. Defaults to 'SIMPLE'
-The gRPC channel can "jitter". This suppresses a connection error message if the channel comes back within this window in milliseconds. Defaults to 3000
-Immediately connect to the Zeebe Gateway (issues a silent topology request). Defaults to false
-This suppresses intermediate errors during initial connection negotiation. On Camunda SaaS this defaults to 6000, on Self-Managed to 0
-Maximum number of retries of network operations before failing. Defaults to -1 (infinite retries)
-When retrying failed network operations, retries back off to this maximum period. Defaults to 10s
-Automate retrying operations that fail due to network conditions or broker backpressure. Defaults to true
-How long in seconds the long poll Job Activation request is held open by a worker. Defaults to 60
-After a long poll Job Activation request, this is the cool-off period in milliseconds before the worker requests more work. Defaults to 300
-Optional
customSSLOptional
oOptional
tasktypeGenerated using TypeDoc
Optional
longOptional
pollGenerated using TypeDoc
Readonly
elementif true, the variables will be merged strictly into the local scope (as indicated by
- elementInstanceKey); this means the variables is not propagated to upper scopes.
- for example, let's say we have two scopes, '1' and '2', with each having effective variables as:
-1 => { "foo" : 2 }
, and 2 => { "bar" : 1 }
. if we send an update request with
-elementInstanceKey = 2, variables { "foo" : 5 }
, and local is true, then scope 1 will
-be unchanged, and scope 2 will now be { "bar" : 1, "foo" 5 }
. if local was false, however,
-then scope 1 would be { "foo": 5 }
, and scope 2 would be { "bar" : 1 }
.
Optional error: unknown
Optional errorMessage: string
the unique key identifying the decision to be evaluated (e.g. returned from a decision in the DeployResourceResponse message)
Optional tenantId
the tenant identifier of the decision
JSON document that will instantiate the variables for the decision to be evaluated; it must be a JSON object, as variables will be mapped in a key-value fashion, e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
the ID of the decision to be evaluated
Optional tenantId
the tenant identifier of the decision
JSON document that will instantiate the variables for the decision to be evaluated; it must be a JSON object, as variables will be mapped in a key-value fashion, e.g. { "a": 1, "b": 2 } will create two variables, named "a" and "b" respectively, with their associated values. [{ "a": 1, "b": 2 }] would not be a valid argument, as the root of the JSON document is an array and not an object.
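A sketch of evaluating a decision by its ID with a valid variables object, assuming the gRPC client exposes an evaluateDecision method mirroring this request shape (the decision ID is a hypothetical placeholder):
import { Camunda8 } from '@camunda8/sdk'

const zeebe = new Camunda8().getZeebeGrpcClient()

// variables must be a JSON object; an array root would be rejected.
const result = await zeebe.evaluateDecision({
  decisionId: 'my-decision', // hypothetical decision ID
  variables: { a: 1, b: 2 },
})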
Const
Decorate Dto bigint fields with @BigIntValue to specify that the JSON number property should be parsed as a bigint.
class MyDto extends LosslessDto {
@Int64String
int64NumberField!: string
@BigIntValue
bigintField!: bigint
@ChildDto(MyChildDto)
childDtoField!: MyChildDto
normalField!: string
normalNumberField!: number
maybePresentField?: string
}
Decorate a Dto object field with @ChildDto to specify that the JSON object property should be parsed as a child Dto.
class MyChildDto extends LosslessDto {
someField!: string
}
class MyDto extends LosslessDto {
@Int64String
int64NumberField!: string
@BigIntValue
bigintField!: bigint
@ChildDto(MyChildDto)
childDtoField!: MyChildDto
normalField!: string
normalNumberField!: number
maybePresentField?: string
}
Decorate Dto string fields with @Int64String to specify that the JSON number property should be parsed as a string.
class MyDto extends LosslessDto {
@Int64String
int64NumberField!: string
@BigIntValue
bigintField!: bigint
@ChildDto(MyChildDto)
childDtoField!: MyChildDto
normalField!: string
normalNumberField!: number
maybePresentField?: string
}
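These decorators exist because JavaScript numbers cannot represent the full int64 range. A self-contained illustration of the underlying precision loss, using only standard JavaScript:
// 2^53 + 1 is a valid int64 value but exceeds Number.MAX_SAFE_INTEGER,
// so a naive JSON.parse silently rounds it:
const parsed = JSON.parse('{"key": 9007199254740993}')
console.log(parsed.key) // 9007199254740992 (precision lost)
// Parsing the same digits as a string or bigint preserves the exact value,
// which is what @Int64String and @BigIntValue arrange for Dto fields:
console.log(BigInt('9007199254740993')) // 9007199254740993n (exact)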
Const
Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).
const count = 5;
console.log('count: %d', count);
// Prints: count: 5, to stdout
console.log('count:', count);
// Prints: count: 5, to stdout
See util.format() for more information.
Optional message: any
Rest ...optionalParams: any[]
Since v0.1.100
Prints to stderr with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).
const code = 5;
console.error('error #%d', code);
// Prints: error #5, to stderr
console.error('error', code);
// Prints: error 5, to stderr
If formatting elements (e.g. %d) are not found in the first string, then util.inspect() is called on each argument and the resulting string values are concatenated. See util.format() for more information.
Optional message: any
Rest ...optionalParams: any[]
Since v0.1.100
Description
Create a new API client for a cluster. See the API Documentation for more details.
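A sketch of what calling this might look like through the Camunda8 facade, assuming a createClient method on the Admin API client. The parameter names and request shape here are assumptions for illustration; check the Console API documentation for the authoritative request.
import { Camunda8 } from '@camunda8/sdk'

const admin = new Camunda8().getAdminApiClient()

// Assumed request shape, for illustration only:
const apiClient = await admin.createClient({
  clusterUuid: '<your-cluster-uuid>', // placeholder
  clientName: 'my-api-client',
})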