remove references to deprecated function meta() in docs (#97)
* remove references to deprecated function meta() in docs

Signed-off-by: Jem Davies <jemsot@gmail.com>

* add caution box to website/docs/configuration/metadata.md regarding meta() + metadata() functions

Signed-off-by: Jem Davies <jemsot@gmail.com>

* remove reference to meta() on index.tsx

Signed-off-by: Jem Davies <jemsot@gmail.com>

---------

Signed-off-by: Jem Davies <jemsot@gmail.com>
jem-davies committed Sep 8, 2024
1 parent a776641 commit ed53b15
Showing 28 changed files with 53 additions and 47 deletions.
2 changes: 1 addition & 1 deletion internal/config/test/docs.md
@@ -37,7 +37,7 @@ pipeline:
output:
aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'
```
One way to write our unit tests for this config is to accompany it with a file of the same name and extension but suffixed with `_bento_test`, which in this case would be `foo_bento_test.yaml`.
2 changes: 1 addition & 1 deletion internal/impl/amqp09/output.go
@@ -95,7 +95,7 @@ The fields 'key', 'exchange' and 'type' can be dynamically set using function in
Description("Set the priority of each message with a dynamic interpolated expression.").
Advanced().
Example("0").
-Example(`${! meta("amqp_priority") }`).
+Example(`${! metadata("amqp_priority") }`).
Example(`${! json("doc.priority") }`).
Default(""),
service.NewOutputMaxInFlightField(),
6 changes: 3 additions & 3 deletions internal/impl/azure/output_table_storage.go
@@ -110,7 +110,7 @@ properties:
Fields(
service.NewInterpolatedStringField(tsoFieldTableName).
Description("The table to store messages into.").
-Example(`${! meta("kafka_topic") }`).Example(`${! json("table") }`),
+Example(`${! metadata("kafka_topic") }`).Example(`${! json("table") }`),
service.NewInterpolatedStringField(tsoFieldPartitionKey).
Description("The partition key.").
Example(`${! json("date") }`).
@@ -124,12 +124,12 @@
Default(map[string]any{}),
service.NewInterpolatedStringEnumField(tsoFieldInsertType, `INSERT`, `INSERT_MERGE`, `INSERT_REPLACE`).
Description("Type of insert operation. Valid options are `INSERT`, `INSERT_MERGE` and `INSERT_REPLACE`").
-Example(`${! json("operation") }`).Example(`${! meta("operation") }`).Example(`INSERT`).
+Example(`${! json("operation") }`).Example(`${! metadata("operation") }`).Example(`INSERT`).
Advanced().Deprecated().
Default(""),
service.NewInterpolatedStringEnumField(tsoFieldTransactionType, `INSERT`, `INSERT_MERGE`, `INSERT_REPLACE`, `UPDATE_MERGE`, `UPDATE_REPLACE`, `DELETE`).
Description("Type of transaction operation.").
-Example(`${! json("operation") }`).Example(`${! meta("operation") }`).Example(`INSERT`).
+Example(`${! json("operation") }`).Example(`${! metadata("operation") }`).Example(`INSERT`).
Advanced().
Default("INSERT"),
service.NewOutputMaxInFlightField().
@@ -62,7 +62,7 @@ We will be considering alternative approaches in future so please [get in touch]
Field(service.NewURLField("url").Description("The base URL of the schema registry service.")).
Field(service.NewInterpolatedStringField("subject").Description("The schema subject to derive schemas from.").
Example("foo").
-Example(`${! meta("kafka_topic") }`)).
+Example(`${! metadata("kafka_topic") }`)).
Field(service.NewStringField("refresh_period").
Description("The period after which a schema is refreshed for each subject, this is done by polling the schema registry service.").
Default("10m").
2 changes: 1 addition & 1 deletion internal/impl/kafka/output_kafka_franz.go
@@ -46,7 +46,7 @@ This output often out-performs the traditional ` + "`kafka`" + ` output as well
Advanced().Optional()).
Field(service.NewInterpolatedStringField("partition").
Description("An optional explicit partition to set for each message. This field is only relevant when the `partitioner` is set to `manual`. The provided interpolation string must be a valid integer.").
-Example(`${! meta("partition") }`).
+Example(`${! metadata("partition") }`).
Optional()).
Field(service.NewStringField("client_id").
Description("An identifier for the client connection.").
2 changes: 1 addition & 1 deletion internal/impl/nats/output_jetstream.go
@@ -23,7 +23,7 @@ func natsJetStreamOutputConfig() *service.ConfigSpec {
Field(service.NewInterpolatedStringField("subject").
Description("A subject to write to.").
Example("foo.bar.baz").
-Example(`${! meta("kafka_topic") }`).
+Example(`${! metadata("kafka_topic") }`).
Example(`foo.${! json("meta.type") }`)).
Field(service.NewInterpolatedStringMapField("headers").
Description("Explicit message headers to add to messages.").
2 changes: 1 addition & 1 deletion internal/impl/nats/processor_request_reply.go
@@ -38,7 +38,7 @@ You can access these metadata fields using [function interpolation](/docs/config
Field(service.NewInterpolatedStringField("subject").
Description("A subject to write to.").
Example("foo.bar.baz").
-Example(`${! meta("kafka_topic") }`).
+Example(`${! metadata("kafka_topic") }`).
Example(`foo.${! json("meta.type") }`)).
Field(service.NewStringField("inbox_prefix").
Description("Set an explicit inbox prefix for the response subject").
4 changes: 2 additions & 2 deletions internal/impl/pure/processor_cached.go
@@ -29,8 +29,8 @@ func newCachedProcessorConfigSpec() *service.ConfigSpec {
Description("A key to be resolved for each message, if the key already exists in the cache then the cached result is used, otherwise the processors are applied and the result is cached under this key. The key could be static and therefore apply generally to all messages or it could be an interpolated expression that is potentially unique for each message.").
Example("my_foo_result").
Example(`${! this.document.id }`).
-Example(`${! meta("kafka_key") }`).
-Example(`${! meta("kafka_topic") }`)).
+Example(`${! metadata("kafka_key") }`).
+Example(`${! metadata("kafka_topic") }`)).
Field(service.NewInterpolatedStringField("ttl").
Description("An optional expiry period to set for each cache entry. Some caches only have a general TTL and will therefore ignore this setting.").
Optional()).
2 changes: 1 addition & 1 deletion internal/impl/redis/processor.go
@@ -31,7 +31,7 @@ performed for each message and the message contents are replaced with the result
Version("1.0.0").
Example("scard").
Example("incrby").
-Example(`${! meta("command") }`).
+Example(`${! metadata("command") }`).
Optional()).
Field(service.NewBloblangField("args_mapping").
Description("A [Bloblang mapping](/docs/guides/bloblang/about) which should evaluate to an array of values matching in size to the number of arguments required for the specified Redis command.").
4 changes: 2 additions & 2 deletions website/cookbooks/joining_streams.md
@@ -189,7 +189,7 @@ input:
# because both topics are consumed independently and these processors
# only apply to the 'comments_retry' input.
- sleep:
-duration: '${! 3600 - ( timestamp_unix() - meta("last_attempted").number() ) }s'
+duration: '${! 3600 - ( timestamp_unix() - metadata("last_attempted") ) }s'
pipeline:
processors:
@@ -216,7 +216,7 @@ pipeline:
output:
kafka:
addresses: [ TODO ]
-topic: '${!meta("output_topic")}'
+topic: '${!metadata("output_topic")}'
cache_resources:
- label: hydration_cache
10 changes: 5 additions & 5 deletions website/cookbooks/kafka_topic_mirroring.md
@@ -52,7 +52,7 @@ Using [string interpolation][bloblang.interpolation], we can then extract the or
output:
kafka_franz:
seed_brokers: [ TODO ]
-topic: 'output-${! meta("kafka_topic") }'
+topic: 'output-${! metadata("kafka_topic") }'
```

Recall from earlier that we also wanted to preserve our partition mapping when writing to new topics. Again, we can use metadata to retrieve the original partition of each message in the source topic. We'll use the `kafka_partition` metadata field in conjunction with setting `partitioner` to `manual` -- overriding any other fancy partitioning algorithm in favour of preserving our initial mapping. Combining again with [string interpolation][bloblang.interpolation], we get the following:
@@ -61,8 +61,8 @@ Recall from earlier that we also wanted to preserve our partition mapping when w
output:
kafka_franz:
seed_brokers: [ TODO ]
-topic: 'output-${! meta("kafka_topic") }'
-partition: ${! meta("kafka_partition") }
+topic: 'output-${! metadata("kafka_topic") }'
+partition: ${! metadata("kafka_partition") }
partitioner: manual
```

@@ -77,8 +77,8 @@ For completeness, we can also route all consumed events back to their original s
output:
kafka_franz:
seed_brokers: [ TODO ]
-topic: ${! meta("kafka_topic") }
-partition: ${! meta("kafka_partition") }
+topic: ${! metadata("kafka_topic") }
+partition: ${! metadata("kafka_partition") }
partitioner: manual
```

2 changes: 1 addition & 1 deletion website/docs/components/about.md
@@ -31,7 +31,7 @@ pipeline:
output:
aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'
```
These are the main components within Bento and they provide the majority of useful behaviour.
2 changes: 1 addition & 1 deletion website/docs/components/metrics/about.md
@@ -166,7 +166,7 @@ metrics:
meta = deleted()
# Re-add the `label` label with meows replaced with woofs
-meta label = meta("label").replace("meow", "woof")
+meta label = metadata("label").replace("meow", "woof")

# Delete all metric series that aren't in our list
root = if ![
4 changes: 2 additions & 2 deletions website/docs/components/outputs/about.md
@@ -13,7 +13,7 @@ output:

aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'

# Optional list of processing steps
processors:
@@ -60,7 +60,7 @@ For example, multiplexing against Kafka topics is a common pattern:
output:
kafka:
addresses: [ TODO:6379 ]
-topic: ${! meta("target_topic") }
+topic: ${! metadata("target_topic") }
```

Refer to the field documentation for a given output to see if it supports interpolation.
2 changes: 1 addition & 1 deletion website/docs/components/outputs/amqp_0_9.md
@@ -274,7 +274,7 @@ Default: `""`
priority: "0"
-priority: ${! meta("amqp_priority") }
+priority: ${! metadata("amqp_priority") }
priority: ${! json("doc.priority") }
```
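A minimal sketch of how such interpolated fields combine in a full output config; the broker URL, exchange name, and `routing_key` metadata key below are illustrative placeholders:

```yaml
output:
  amqp_0_9:
    urls: [ amqp://guest:guest@localhost:5672/ ] # placeholder broker URL
    exchange: bento-exchange                     # placeholder exchange name
    key: ${! metadata("routing_key") }           # hypothetical metadata key
    priority: ${! metadata("amqp_priority") }
```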
8 changes: 4 additions & 4 deletions website/docs/components/outputs/azure_table_storage.md
@@ -39,7 +39,7 @@ output:
storage_access_key: ""
storage_connection_string: ""
storage_sas_token: ""
-table_name: ${! meta("kafka_topic") } # No default (required)
+table_name: ${! metadata("kafka_topic") } # No default (required)
partition_key: ""
row_key: ""
properties: {}
@@ -63,7 +63,7 @@ output:
storage_access_key: ""
storage_connection_string: ""
storage_sas_token: ""
-table_name: ${! meta("kafka_topic") } # No default (required)
+table_name: ${! metadata("kafka_topic") } # No default (required)
partition_key: ""
row_key: ""
properties: {}
@@ -169,7 +169,7 @@ Type: `string`
```yml
# Examples
-table_name: ${! meta("kafka_topic") }
+table_name: ${! metadata("kafka_topic") }
table_name: ${! json("table") }
```
@@ -228,7 +228,7 @@ Options: `INSERT`, `INSERT_MERGE`, `INSERT_REPLACE`, `UPDATE_MERGE`, `UPDATE_REP
transaction_type: ${! json("operation") }
-transaction_type: ${! meta("operation") }
+transaction_type: ${! metadata("operation") }
transaction_type: INSERT
```
6 changes: 3 additions & 3 deletions website/docs/components/outputs/kafka_franz.md
@@ -38,7 +38,7 @@ output:
seed_brokers: [] # No default (required)
topic: "" # No default (required)
key: "" # No default (optional)
-partition: ${! meta("partition") } # No default (optional)
+partition: ${! metadata("partition") } # No default (optional)
metadata:
include_prefixes: []
include_patterns: []
@@ -62,7 +62,7 @@ output:
topic: "" # No default (required)
key: "" # No default (optional)
partitioner: "" # No default (optional)
-partition: ${! meta("partition") } # No default (optional)
+partition: ${! metadata("partition") } # No default (optional)
client_id: bento
rack_id: ""
idempotent_write: true
@@ -162,7 +162,7 @@ Type: `string`
```yml
# Examples
-partition: ${! meta("partition") }
+partition: ${! metadata("partition") }
```

### `client_id`
2 changes: 1 addition & 1 deletion website/docs/components/outputs/nats_jetstream.md
@@ -144,7 +144,7 @@ Type: `string`
subject: foo.bar.baz
-subject: ${! meta("kafka_topic") }
+subject: ${! metadata("kafka_topic") }
subject: foo.${! json("meta.type") }
```
4 changes: 2 additions & 2 deletions website/docs/components/processors/cached.md
@@ -136,9 +136,9 @@ key: my_foo_result
key: ${! this.document.id }
-key: ${! meta("kafka_key") }
+key: ${! metadata("kafka_key") }
-key: ${! meta("kafka_topic") }
+key: ${! metadata("kafka_topic") }
```
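As a sketch of how the key fits into a complete config; the cache resource, enrichment endpoint, and metadata key below are assumptions for illustration:

```yaml
pipeline:
  processors:
    - cached:
        cache: my_cache
        key: '${! metadata("kafka_key") }'
        processors:
          # Expensive work only runs on a cache miss.
          - http:
              url: http://example.com/enrich # placeholder endpoint
              verb: POST

cache_resources:
  - label: my_cache
    memory:
      default_ttl: 5m
```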

### `ttl`
2 changes: 1 addition & 1 deletion website/docs/components/processors/nats_request_reply.md
@@ -161,7 +161,7 @@ Type: `string`
subject: foo.bar.baz
-subject: ${! meta("kafka_topic") }
+subject: ${! metadata("kafka_topic") }
subject: foo.${! json("meta.type") }
```
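A minimal sketch of the processor using an interpolated subject; the server address and metadata key are placeholders:

```yaml
pipeline:
  processors:
    - nats_request_reply:
        urls: [ nats://127.0.0.1:4222 ] # placeholder server address
        subject: ${! metadata("kafka_topic") }
        timeout: 3s
```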
2 changes: 1 addition & 1 deletion website/docs/components/processors/redis.md
@@ -330,7 +330,7 @@ command: scard
command: incrby
-command: ${! meta("command") }
+command: ${! metadata("command") }
```
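For instance, a sketch pairing a command with `args_mapping`; the Redis address and key scheme are assumptions, and the message payload is replaced with the command's result:

```yaml
pipeline:
  processors:
    - redis:
        url: tcp://localhost:6379 # placeholder Redis address
        command: incrby
        # Increment a per-user counter; the new count becomes the message payload.
        args_mapping: 'root = [ "visits:" + metadata("user_id").string(), 1 ]'
```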

### `args_mapping`
@@ -136,7 +136,7 @@ Type: `string`
subject: foo
-subject: ${! meta("kafka_topic") }
+subject: ${! metadata("kafka_topic") }
```

### `refresh_period`
4 changes: 2 additions & 2 deletions website/docs/configuration/about.md
@@ -33,7 +33,7 @@ pipeline:
output:
aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'
```
</TabItem>
@@ -62,7 +62,7 @@ pipeline:
output:
aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'

input_resources: []
cache_resources: []
4 changes: 2 additions & 2 deletions website/docs/configuration/interpolation.md
@@ -49,8 +49,8 @@ A common use case for interpolated functions is dynamic routing at the output lev
output:
kafka:
addresses: [ TODO ]
-topic: ${! meta("output_topic") }
-key: ${! meta("key") }
+topic: ${! metadata("output_topic") }
+key: ${! metadata("key") }
```

### Coalesce and Mapping
12 changes: 9 additions & 3 deletions website/docs/configuration/metadata.md
@@ -36,13 +36,17 @@ meta = @.filter(kv -> !kv.key.has_prefix("kafka_"))

## Using Metadata

+:::caution
+There are two functions to reference metadata: [`meta()`][meta] and [`metadata()`][metadata]. [`meta()`][meta] has been deprecated in favor of [`metadata()`][metadata].
+:::

Metadata values can be referenced in any field that supports [interpolation functions][interpolation]. For example, you can route messages to Kafka topics using interpolation of metadata keys:

```yaml
output:
kafka:
addresses: [ TODO ]
-topic: ${! meta("target_topic") }
+topic: ${! metadata("target_topic") }
```
Bento also allows you to conditionally process messages based on their metadata with the [`switch` processor][processors.switch]:
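A sketch of such a config follows; the metadata key and mappings are illustrative only:

```yaml
pipeline:
  processors:
    - switch:
        - check: 'metadata("kafka_topic") == "orders"'
          processors:
            - mapping: 'root = this.order'
        # A case with no check acts as a fallback for all other messages.
        - processors:
            - mapping: 'root = this'
```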
@@ -76,7 +80,7 @@ For example, if we were sending messages to kafka using a metadata key `target_t
output:
kafka:
addresses: [ TODO ]
-topic: ${! meta("target_topic") }
+topic: ${! metadata("target_topic") }
metadata:
exclude_prefixes:
- target_topic
@@ -102,7 +106,7 @@ pipeline:
output:
kafka:
addresses: [ TODO ]
-topic: ${! meta("_target_topic") }
+topic: ${! metadata("_target_topic") }
metadata:
exclude_prefixes: [ "_" ]
```
@@ -111,3 +115,5 @@ output:
[processors.switch]: /docs/components/processors/switch
[processors.mapping]: /docs/components/processors/mapping
[guides.bloblang]: /docs/guides/bloblang/about
+[meta]: /docs/guides/bloblang/functions#meta
+[metadata]: /docs/guides/bloblang/functions#metadata
2 changes: 1 addition & 1 deletion website/docs/configuration/unit_testing.md
@@ -37,7 +37,7 @@ pipeline:
output:
aws_s3:
bucket: TODO
-path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
+path: '${! metadata("kafka_topic") }/${! json("message.id") }.json'
```
One way to write our unit tests for this config is to accompany it with a file of the same name and extension but suffixed with `_bento_test`, which in this case would be `foo_bento_test.yaml`.
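A sketch of such a test file, with an illustrative test name, input, and assertion; the real assertions depend on what the pipeline's processors actually do:

```yaml
tests:
  - name: example happy path
    target_processors: '/pipeline/processors'
    input_batch:
      - content: '{"message":{"id":"foo"}}'
        metadata:
          kafka_topic: bar
    output_batches:
      - - json_equals: {"message":{"id":"foo"}}
```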
2 changes: 1 addition & 1 deletion website/docs/guides/streams_mode/about.md
@@ -54,7 +54,7 @@ This can cause problems if your streams are short lived and uniquely named as th
```yaml
# Only register metrics for the stream `foo`. Others will be ignored.
metrics:
-mapping: if meta("stream") != "foo" { deleted() }
+mapping: if metadata("stream") != "foo" { deleted() }
prometheus: {}
```