This repository has been archived by the owner on Mar 10, 2024. It is now read-only.

Add task v5 using Node10 (#251)
qetza authored Feb 5, 2022
1 parent f219097 commit ee5215c
Showing 12 changed files with 2,462 additions and 2 deletions.
4 changes: 4 additions & 0 deletions README.md
@@ -91,6 +91,10 @@ If you want to use tokens in XML based configuration files to be replaced during
- replace tokens in your updated configuration file

## Release notes
**New in 4.3.0**
- Add task **5.0.0**
- **Breaking change**: Migrate task to Node10 execution handler needing agent `2.144.0` minimum ([#228](https://github.com/qetza/vsts-replacetokens-task/issues/228), [#230](https://github.com/qetza/vsts-replacetokens-task/issues/230)).

**New in 4.2.1**
- Task **4.1.1**
- Revert migrate tasks to Node10 execution handler ([#233](https://github.com/qetza/vsts-replacetokens-task/issues/233)).
93 changes: 93 additions & 0 deletions ReplaceTokens/ReplaceTokensV5/README.md
@@ -0,0 +1,93 @@
[![Donate](images/donate.png)](https://www.paypal.me/grouchon/5)

# Replace Tokens task
Azure Pipelines extension that replaces tokens in **text** files with variable values.

## Usage
If you are using the UI, add a new task, select **Replace Tokens** from the **Utility** category and configure it as needed.

If you are using a YAML file, add a task with the following syntax:
```yaml
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
displayName: 'Replace tokens'
inputs:
targetFiles: |
**/*.config
**/*.json => outputs/*.json
```
**Note:** the task only works on text files; if you need to replace tokens in an archive file, first extract the files and then archive them back.

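The extract/replace/archive workflow for archives can be sketched in YAML with the built-in extract and archive tasks; the archive name and paths are illustrative:

```yaml
# Sketch: extract the archive, replace tokens, then re-archive.
# File names and paths are illustrative.
- task: ExtractFiles@1
  inputs:
    archiveFilePatterns: '$(Build.ArtifactStagingDirectory)/app.zip'
    destinationFolder: '$(Agent.TempDirectory)/app'
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
  inputs:
    rootDirectory: '$(Agent.TempDirectory)/app'
    targetFiles: '**/*.config'
- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Agent.TempDirectory)/app'
    includeRootFolder: false
    archiveFile: '$(Build.ArtifactStagingDirectory)/app.zip'
    replaceExistingArchive: true
```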
## Parameters
The parameters of the task are described below; the YAML name is shown in parentheses:
- **Root directory** (rootDirectory): the base directory for searching files. If not specified the default working directory will be used.
- **Target files** (targetFiles): the absolute or relative newline-separated paths to the files to replace tokens. Wildcards can be used (eg: `**\*.config` for all _.config_ files in all sub folders).
> **Syntax**: {file path}[ => {output path}]
>
> - `web.config` will replace tokens in _web.config_ and update the file.
> - `web.tokenized.config => web.config` will replace tokens in _web.tokenized.config_ and save the result in _web.config_.
> - `config\web.tokenized.config => c:\config\web.config` will replace tokens in _config\web.tokenized.config_ and save the result in _c:\\config\web.config_.
>
> **Wildcard support**
> - `*.tokenized.config => *.config` will replace tokens in all matching _*.tokenized.config_ files and save the result in the corresponding _*.config_ file.
> - `**\*.tokenized.config => c:\tmp\*.config` will replace tokens in all matching _*.tokenized.config_ files and save the result in the corresponding _c:\tmp\\*.config_ file.
>
> Only the wildcard _*_ in the target file name will be used for replacement in the output.\
> Relative paths in the output pattern are relative to the target file path.\
>
> **Negative pattern**\
> If you want to use negative patterns in the target files, use a semi-colon `;` to separate the include pattern from the negative patterns. When using the output syntax, only the wildcard in the first pattern will be used for generating the output path.
> - `**\*.tokenized.config;!**\dev\*.config => c:\tmp\*.config` will replace tokens in all matching _*.tokenized.config_ files except those under a _dev_ directory and save the result in the corresponding _c:\tmp\\*.config_ file.

- **Files encoding** (encoding): the files encoding used for reading and writing. The 'auto' value will determine the encoding based on the Byte Order Mark (BOM) if present; otherwise it will use ascii. (allowed values: auto, ascii, utf-7, utf-8, utf-16le, utf-16be, win1252 and iso88591)
- **Write unicode BOM** (writeBOM): if checked, writes a Unicode Byte Order Mark (BOM).
- **Escape type** (escapeType): specify how to escape variable values. Value `auto` uses the file extension (`.json` and `.xml`) to determine the escaping and `none` as fallback. (allowed values: auto, none, json, xml and custom)
- **Escape character** (escapeChar): when using `custom` escape type, the escape character to use when escaping characters in the variable values.
- **Characters to escape** (charsToEscape): when using `custom` escape type, characters in variable values to escape before replacing tokens.
- **Verbosity** (verbosity): specify the level of log verbosity. (note: error and system debug are always on) (allowed values: normal, detailed and off)
- **Action on missing variable** (actionOnMissing): specify the action to take on a missing variable.
- _silently continue_ (continue): the task will continue without displaying any message.
- _log warning_ (warn): the task will continue but log a warning with the missing variable name.
- _fail_ (fail): the task will fail and log the missing variable name.
- **Keep token for missing variable** (keepToken): if checked, tokens with missing variables will not be replaced by an empty string.
- **Action on no file processed** (actionOnNoFiles): specify the action when no file was processed. (allowed values: continue, warn, fail)
- **Token pattern** (tokenPattern): specify the pattern of the tokens to search in the target files. (allowed values: default, rm, octopus, azpipelines, doublebraces and custom)
- **Token prefix** (tokenPrefix): when using `custom` token pattern, the prefix of the tokens to search in the target files.
- **Token suffix** (tokenSuffix): when using `custom` token pattern, the suffix of the tokens to search in the target files.
- **Use legacy pattern** (useLegacyPattern): if checked, whitespace between the token prefix/suffix and the variable name is not ignored.
- **Empty value** (emptyValue): the variable value that will be replaced with an empty string.
- **Default value** (defaultValue): the value to be used if a variable is not found. Leave unset to disable the default value feature. (to replace with an empty string, set the default value to the _Empty value_)
- **Enable transformations** (enableTransforms): if checked transformations can be applied on variable values. The following transformations are available:
- _lower_: make variable value lower case. Example: `#{lower(MyVar)}#`
- _upper_: make variable value upper case. Example: `#{upper(MyVar)}#`
- _noescape_: disable variable value escaping. (this can be used if you want to inject raw JSON or XML for example). Example: `#{noescape(MyVar)}#`
  - _base64_: encode variable value in Base64. Example: `#{base64(MyVar)}#`
- **Transform prefix** (transformPrefix): The prefix between transform name and token name. Default: `(`.
- **Transform suffix** (transformSuffix): The suffix after the token name. Default: `)`.
- **Variable files (JSON or YAML)** (variableFiles): the absolute or relative comma- or newline-separated paths to the files containing additional variables. Wildcards can be used (eg: `vars\**\*.json` for all _.json_ files in all sub folders of _vars_). YAML files **must have** the `.yml` or `.yaml` extension, otherwise the file is treated as JSON. Variables declared in files override variables defined in the pipeline.
- **Variable separator** (variableSeparator): the separator to use in variable names for nested objects and arrays in variable files. Example: `{ 'My': { 'Value': ['Hello World!'] } }` will create a variable _My.Value.0_ with the value _Hello World!_.
- **Send anonymous usage telemetry** (enableTelemetry): if checked anonymous usage data (hashed collection and pipeline id, no file parameter values, no variable values) will be sent to the task author only to analyze task usage.
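A fuller YAML step combining several of the parameters above (all values are illustrative, not recommendations):

```yaml
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
  displayName: 'Replace tokens'
  inputs:
    rootDirectory: '$(Build.SourcesDirectory)'
    targetFiles: |
      **/*.config
      **/*.tokenized.json => *.json
    encoding: 'auto'
    writeBOM: true
    actionOnMissing: 'warn'
    keepToken: false
    tokenPattern: 'default'
    enableTransforms: true
    variableFiles: 'vars/**/*.yml'
    variableSeparator: '.'
```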

### Output variables
The task creates the following as output variables:
- **tokenReplacedCount**: the total number of tokens which were replaced by a variable.
- **tokenFoundCount**: the total number of tokens which were found.
- **fileProcessedCount**: the total number of files which were processed.
- **transformExecutedCount**: the total number of transformations which were executed.
- **defaultValueCount**: the total number of default values used.
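To read these outputs in a later step of the same job, give the step a `name` and reference the variables through it; a minimal sketch (the step name `replaceTokens` is illustrative):

```yaml
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
  name: replaceTokens
  inputs:
    targetFiles: '**/*.config'
- script: echo "Replaced $(replaceTokens.tokenReplacedCount) of $(replaceTokens.tokenFoundCount) token(s) in $(replaceTokens.fileProcessedCount) file(s)"
```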

## Data/Telemetry
By default, the Replace Tokens task for Azure Pipelines collects anonymous usage data and sends it to its author to help improve the product. If you don't wish to send usage data, you can change your telemetry settings through the _Send anonymous usage telemetry_ parameter or by setting a variable or environment variable `REPLACETOKENS_DISABLE_TELEMETRY` to `true`.
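For example, telemetry can be disabled for every Replace Tokens step in a pipeline by declaring the variable at pipeline level:

```yaml
variables:
  REPLACETOKENS_DISABLE_TELEMETRY: 'true'
```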

## Tips
If you want tokens in XML based configuration files to be replaced during deployment while keeping those files usable for local development, you can combine the [Replace Tokens task](https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens) with the [XDT transform task](https://marketplace.visualstudio.com/items?itemName=qetza.xdttransform):
- create an XDT transformation file containing your tokens
- setup your configuration file with local development values
- at deployment time
- inject your tokens in the configuration file by using your transformation file
- replace tokens in your updated configuration file
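The deployment steps above might look like the following sketch; note that the XDT transform task reference, its input names (`workingFolder`, `transforms`), and the transform syntax are assumptions to be checked against that extension's own documentation:

```yaml
# Sketch only: the xdttransform task reference and inputs are assumptions.
- task: qetza.xdttransform.xdttransform-task.xdttransform@3
  displayName: 'Inject tokens via XDT transform'
  inputs:
    workingFolder: '$(Build.SourcesDirectory)'
    transforms: 'web.tokens.config => web.config'
- task: qetza.replacetokens.replacetokens-task.replacetokens@5
  displayName: 'Replace tokens'
  inputs:
    targetFiles: 'web.config'
```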

## Release notes
**New in 5.0.0**
- **Breaking change**: Migrate task to Node10 execution handler needing agent `2.144.0` minimum ([#228](https://github.com/qetza/vsts-replacetokens-task/issues/228), [#230](https://github.com/qetza/vsts-replacetokens-task/issues/230)).
Binary file added ReplaceTokens/ReplaceTokensV5/icon.png
