Fix typos #775

Merged · 1 commit · Aug 10, 2024
frontend/docs/pages/blog/multi-tenant-queues.mdx (2 changes: 1 addition & 1 deletion)
@@ -356,7 +356,7 @@ Next, let's assign task IDs according to the following algorithm:

**\*Note:** we are making a critical assumption that the number of unique group keys will always be below the `blockLength`, and increasing the blockLength in the future would be a bit involved. A blockLength of ~1 million gives us ~1 billion task executions. To increase the block length, it's recommended that you add an offset equal to the maximum task id, and start assigning task ids from there. We will also (in the worst case) cap out at 1 billion executed tasks, though this can be fixed by reassigning IDs when close to this limit.\*

- ### SQL implemenation
+ ### SQL implementation

To actually implement this, let's add a new set of tables to our queue implementation. We'll add a table for `task_groups`, which maintains the pointer `p(i)` from above, along with a table called `task_addr_ptrs` which maintains `p_max_assigned` from above:

@@ -2,7 +2,7 @@ import { Tabs } from "nextra/components";

# The GROUP_ROUND_ROBIN Concurrency Limit Strategy in Hatchet

- Hatchet's `GROUP_ROUND_ROBIN` concurrency limit strategy is an advanced way to manage resource contention in your workflows while ensuring fair distribution of resources across different groups of tennants, users, or other concurrency key. This strategy allows you to process workflow instances in a round-robin fashion within each group, as defined by a key function.
+ Hatchet's `GROUP_ROUND_ROBIN` concurrency limit strategy is an advanced way to manage resource contention in your workflows while ensuring fair distribution of resources across different groups of tenants, users, or other concurrency key. This strategy allows you to process workflow instances in a round-robin fashion within each group, as defined by a key function.

## How it works

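To make the fairness property concrete, here is a toy model of the ordering this strategy produces (plain Python, not Hatchet's actual scheduler): queued runs are started one per group key per pass, so a group that enqueues a burst cannot starve the others.

```python
from collections import OrderedDict, deque

def group_round_robin(runs):
    """Toy model of GROUP_ROUND_ROBIN ordering.

    `runs` is a list of (group_key, run_id) tuples in arrival order.
    Each pass starts at most one run per group, so no single group
    can monopolize the available slots.
    """
    groups = OrderedDict()
    for key, run_id in runs:
        groups.setdefault(key, deque()).append(run_id)

    order = []
    while groups:
        for key in list(groups):
            order.append(groups[key].popleft())
            if not groups[key]:
                del groups[key]
    return order

# tenant-a enqueues a burst of three; tenants b and c still get early turns
queued = [("tenant-a", 1), ("tenant-a", 2), ("tenant-a", 3),
          ("tenant-b", 4), ("tenant-c", 5)]
print(group_round_robin(queued))  # [1, 4, 5, 2, 3]
```
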
frontend/docs/pages/home/features/streaming.mdx (2 changes: 1 addition & 1 deletion)
@@ -248,7 +248,7 @@ for await (const event of listener) {

## Streaming by Additional Metadata

- Often it is helpful to stream from multiple workflows (i.e. child workflows spawned from a parent) to achieve this, you can specify an [additional meta](/features/additional-metadata) key-value pair before runing a workflow that can then be used to subscribe to all events from workflows that have the same key-value pair.
+ Often it is helpful to stream from multiple workflows (i.e. child workflows spawned from a parent) to achieve this, you can specify an [additional meta](/features/additional-metadata) key-value pair before running a workflow that can then be used to subscribe to all events from workflows that have the same key-value pair.

Since additional metadata is propagated from parent to child workflows, this can be used to track all events from a specific workflow run.

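A sketch of the tagging half of this pattern, using the `additional_metadata` option documented for `run_workflow` later in this diff (the workflow name and key-value pair are placeholders):

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

# Tag the parent run; the same key-value pair is propagated to any
# child workflows it spawns, so one subscription can cover them all.
hatchet.client.admin.run_workflow(
    "ParentWorkflow",  # placeholder workflow name
    {"prompt": "hello"},
    options={"additional_metadata": {"session": "abc-123"}},
)
```
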
frontend/docs/pages/home/features/timeouts.mdx (2 changes: 1 addition & 1 deletion)
@@ -117,7 +117,7 @@ worker.WorkflowJob{
This would set a timeout of 30 seconds for this specific step. If the step takes longer than 30 seconds to complete, it will fail and the workflow will be cancelled.

<Callout type="warning">
- A timed out step does not gurantee that the step will be stopped immediately.
+ A timed out step does not guarantee that the step will be stopped immediately.
The step will be stopped as soon as the worker is able to stop the step. See
[cancellation](/features/cancellation) for more information.
</Callout>
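
For comparison, a rough Python SDK equivalent of the 30-second step timeout above (a sketch; it assumes the step decorator accepts a duration string, mirroring the Go options shown in this hunk):

```python
from hatchet_sdk import Hatchet, Context

hatchet = Hatchet()

@hatchet.workflow(on_events=["user:create"])
class TimeoutWorkflow:
    # Assumption: the decorator takes a duration string like the Go
    # example above. Past 30s the step is failed and a cancellation
    # signal is sent; the worker stops it when it is able to.
    @hatchet.step(timeout="30s")
    def slow_step(self, context: Context):
        ...
```
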
@@ -42,7 +42,7 @@ class PrintSchedule:
future_time = now + timedelta(seconds=15)
print(f"Scheduling for \t {future_time.strftime("%H:%M:%S")}")

- # schdule the run in the future
+ # schedule the run in the future
hatchet.client.admin.schedule_workflow(
'PrintPrinter',
[future_time],
frontend/docs/pages/sdks/python-sdk/run-workflow-api.mdx (2 changes: 1 addition & 1 deletion)
@@ -16,7 +16,7 @@ workflowRun = hatchet.client.admin.run_workflow(

This method takes the following parameters:

- - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overriden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
+ - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overridden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
- `input` (**required**): The input to the workflow. This should be a JSON-serializable dict.
- `options` (**optional**): Additional options to pass to the workflow. The current options are supported:
- `additional_metadata`: A dict of key-value strings to attach to the workflow run. This metadata will be shown in the Hatchet UI and will be available in API endpoints for listing/filtering.
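
Putting these parameters together, a full call might look like this (a sketch; the workflow name and input values are placeholders):

```python
from hatchet_sdk import Hatchet

hatchet = Hatchet()

workflowRun = hatchet.client.admin.run_workflow(
    "MyWorkflow",        # matches the workflow class name by default
    {"user_id": 1234},   # must be a JSON-serializable dict
    options={
        # surfaced in the Hatchet UI and list/filter API endpoints
        "additional_metadata": {"source": "api"},
    },
)
```
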
frontend/docs/pages/sdks/python-sdk/run-workflow-child.mdx (2 changes: 1 addition & 1 deletion)
@@ -50,7 +50,7 @@ if __name__ == "__main__":

The `spawn_workflow` method takes the following parameters:

- - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overriden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
+ - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overridden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
- `input` (**required**): The input to pass to the child workflow.
- `key` (**optional**): A caching key for the child workflow. If this is not set, the child workflow will be cached on the index that it was triggered at. The cache is used on retries of the parent workflow so that child workflows which were already triggered are skipped.
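
As a sketch (assuming `spawn_workflow` is called on the step's `context`, with placeholder workflow names):

```python
from hatchet_sdk import Hatchet, Context

hatchet = Hatchet()

@hatchet.workflow(on_events=["parent:run"])
class Parent:
    @hatchet.step()
    def spawn_children(self, context: Context):
        # The key makes each spawn idempotent: on a retry of this step,
        # children that were already triggered are skipped.
        for i in range(3):
            context.spawn_workflow(
                "Child",          # placeholder child workflow name
                {"index": i},
                key=f"child-{i}",
            )
        return {"spawned": 3}
```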

@@ -19,6 +19,6 @@ workflowRun = hatchet.client.admin.schedule_workflow(

This method takes the following parameters:

- - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overriden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
+ - `workflow_name` (**required**): The name of the workflow to trigger. If you have not overridden the workflow name in the `hatchet.workflow` decorator, this should match the name of the workflow class.
- `schedule_time` (**required**): The time at which the workflow should be scheduled to run. This should be a `datetime` object.
- `input` (**required**): The input to the workflow. This should be a JSON-serializable dict.
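
A sketch mirroring the scheduling example earlier in this diff, which passes the schedule time(s) as a list (the workflow name and input are placeholders):

```python
from datetime import datetime, timedelta, timezone

from hatchet_sdk import Hatchet

hatchet = Hatchet()

run_at = datetime.now(tz=timezone.utc) + timedelta(minutes=10)

workflowRun = hatchet.client.admin.schedule_workflow(
    "MyWorkflow",       # matches the workflow class name by default
    [run_at],           # datetime(s), passed as a list as in the earlier example
    {"user_id": 1234},  # JSON-serializable input
)
```
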
internal/msgqueue/rabbitmq/rabbitmq.go (2 changes: 1 addition & 1 deletion)
@@ -476,7 +476,7 @@ func (t *MessageQueueImpl) subscribe(
}

if err := json.Unmarshal(rabbitMsg.Body, msg); err != nil {
- t.l.Error().Msgf("error unmarshaling message: %v", err)
+ t.l.Error().Msgf("error unmarshalling message: %v", err)

// reject this message
if err := rabbitMsg.Reject(false); err != nil {
pkg/client/types/file.go (2 changes: 1 addition & 1 deletion)
@@ -131,7 +131,7 @@ func ParseYAML(ctx context.Context, yamlBytes []byte) (Workflow, error) {

err := yaml.Unmarshal(yamlBytes, &workflowFile)
if err != nil {
- return workflowFile, fmt.Errorf("error unmarshaling workflow yaml: %w", err)
+ return workflowFile, fmt.Errorf("error unmarshalling workflow yaml: %w", err)
}

return workflowFile, nil
pkg/repository/prisma/db/db_gen.go (4 changes: 2 additions & 2 deletions)
@@ -234,7 +234,7 @@ model Tenant {
name String
slug String @unique

- // wheather the user has opted out of analytics
+ // whether the user has opted out of analytics
analyticsOptOut Boolean @default(false)

// the parent controller partition, if exists
@@ -999,7 +999,7 @@ model WorkflowRun {
// a user-defined key for this workflow run
childKey String?

- // (optional) additonal metadata for the workflow run
+ // (optional) additional metadata for the workflow run
additionalMetadata Json?

@@unique([parentId, parentStepRunId, childKey])
pkg/worker/webhook_handler.go (2 changes: 1 addition & 1 deletion)
@@ -88,7 +88,7 @@ func (w *Worker) WebhookHttpHandler(opts WebhookHandlerOptions, workflows ...wor

var action ActionPayload
if err := json.Unmarshal(data, &action); err != nil {
- w.l.Error().Err(err).Msg("error unmarshaling action")
+ w.l.Error().Err(err).Msg("error unmarshalling action")
writer.WriteHeader(http.StatusInternalServerError)
_, _ = writer.Write([]byte(err.Error()))
return
prisma/schema.prisma (4 changes: 2 additions & 2 deletions)
@@ -160,7 +160,7 @@ model Tenant {
name String
slug String @unique

- // wheather the user has opted out of analytics
+ // whether the user has opted out of analytics
analyticsOptOut Boolean @default(false)

// the parent controller partition, if exists
@@ -925,7 +925,7 @@ model WorkflowRun {
// a user-defined key for this workflow run
childKey String?

- // (optional) additonal metadata for the workflow run
+ // (optional) additional metadata for the workflow run
additionalMetadata Json?

@@unique([parentId, parentStepRunId, childKey])