Merge branch 'main' into checkstyle-needs-braces

williamrandolph committed Jan 9, 2024
2 parents b76cb25 + 806642f commit 412ef6c
Showing 2,371 changed files with 53,033 additions and 21,714 deletions.
41 changes: 41 additions & 0 deletions .buildkite/README.md
@@ -0,0 +1,41 @@
# Elasticsearch CI Pipelines

This directory contains pipeline definitions and scripts for running Elasticsearch CI on Buildkite.

## Directory Structure

- [pipelines](pipelines/) - pipeline definitions/yml
- [scripts](scripts/) - scripts used by pipelines, inside steps
- [hooks](hooks/) - [Buildkite hooks](https://buildkite.com/docs/agent/v3/hooks), where global env vars and secrets are set

## Pipeline Definitions

Pipelines are defined using YAML files residing in [pipelines](pipelines/). These are mostly static definitions used as-is, but a few are generated dynamically (see below).
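
For reference, a pipeline definition is just a list of Buildkite steps. A minimal sketch (the label, command, and timeout are illustrative; the agent settings mirror ones used elsewhere in this repository):

```yaml
steps:
  # One self-contained CI step; all values here are examples only.
  - label: example-check
    command: .ci/scripts/run-gradle.sh checkPart1
    timeout_in_minutes: 300
    agents:
      image: family/elasticsearch-ubuntu-2204
      machineType: n2-standard-8
      buildDirectory: /dev/shm/bk
```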

### Dynamically Generated Pipelines

Pull request pipelines are generated dynamically based on labels, files changed, and other properties of pull requests.

Non-pull request pipelines that include BWC version matrices must also be generated whenever the [list of BWC versions](../.ci/bwcVersions) is updated.

#### Pull Request Pipelines

Pull request pipelines are generated dynamically at CI time based on numerous properties of the pull request. See [scripts/pull-request](scripts/pull-request) for details.

#### BWC Version Matrices

For pipelines that include BWC version matrices, you will see one or more template files (e.g. [periodic.template.yml](pipelines/periodic.template.yml)) and a corresponding generated file (e.g. [periodic.yml](pipelines/periodic.yml)). The generated file is the one that is actually used by Buildkite.

These files are updated by running:

```bash
./gradlew updateCIBwcVersions
```

This also runs automatically during release procedures.

Always make changes to the template files, then run the above command to update the generated files.
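
To make the split concrete, a hypothetical sketch (the step and the `$BWC_LIST` placeholder are illustrative, not copied from the real templates):

```yaml
# pipelines/periodic.template.yml — the hand-edited source of truth
  - label: example-bwc-test
    command: .ci/scripts/run-gradle.sh bwcTest
    matrix:
      setup:
        BWC_VERSION: $BWC_LIST # hypothetical placeholder filled in at generation time

# pipelines/periodic.yml — generated output; never edit directly
  - label: example-bwc-test
    command: .ci/scripts/run-gradle.sh bwcTest
    matrix:
      setup:
        BWC_VERSION: ["7.17.17", "8.12.0"] # example version list from .ci/bwcVersions
```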

## Node / TypeScript

Node (technically `bun`), TypeScript, and related files are currently used to generate pipelines for pull request CI. See [scripts/pull-request](scripts/pull-request) for details.
1 change: 0 additions & 1 deletion .buildkite/package.json
@@ -1,6 +1,5 @@
{
"name": "buildkite-pipelines",
"module": "index.ts",
"type": "module",
"devDependencies": {
"@types/node": "^20.6.0",
10 changes: 10 additions & 0 deletions .buildkite/pipelines/dra-workflow.yml
@@ -7,3 +7,13 @@ steps:
image: family/elasticsearch-ubuntu-2204
machineType: custom-32-98304
buildDirectory: /dev/shm/bk
- wait
# The hadoop build depends on the ES artifact
# So let's trigger the hadoop build any time we build a new staging artifact
- trigger: elasticsearch-hadoop-dra-workflow
async: true
build:
branch: "${BUILDKITE_BRANCH}"
env:
DRA_WORKFLOW: staging
if: build.env('DRA_WORKFLOW') == 'staging'
3 changes: 2 additions & 1 deletion .buildkite/pipelines/periodic.template.yml
@@ -73,6 +73,7 @@ steps:
- openjdk19
- openjdk20
- openjdk21
- openjdk22
GRADLE_TASK:
- checkPart1
- checkPart2
@@ -180,7 +181,7 @@ steps:
image: family/elasticsearch-ubuntu-2004
machineType: n2-standard-8
buildDirectory: /dev/shm/bk
if: build.branch == "main" || build.branch =~ /^[0-9]+\.[0-9]+\$/
if: build.branch == "main" || build.branch == "7.17"
- label: Check branch consistency
command: .ci/scripts/run-gradle.sh branchConsistency
timeout_in_minutes: 15
3 changes: 2 additions & 1 deletion .buildkite/pipelines/periodic.yml
@@ -1194,6 +1194,7 @@ steps:
- openjdk19
- openjdk20
- openjdk21
- openjdk22
GRADLE_TASK:
- checkPart1
- checkPart2
@@ -1301,7 +1302,7 @@ steps:
image: family/elasticsearch-ubuntu-2004
machineType: n2-standard-8
buildDirectory: /dev/shm/bk
if: build.branch == "main" || build.branch =~ /^[0-9]+\.[0-9]+\$/
if: build.branch == "main" || build.branch == "7.17"
- label: Check branch consistency
command: .ci/scripts/run-gradle.sh branchConsistency
timeout_in_minutes: 15
68 changes: 60 additions & 8 deletions .buildkite/scripts/pull-request/README.md
@@ -6,12 +6,7 @@ Each time a pull request build is triggered, such as via commit or comment, we u

The generator handles the following:

- `allow-labels` - only trigger a step if the PR has one of these labels
- `skip-labels` - don't trigger the step if the PR has one of these labels
- `excluded-regions` - don't trigger the step if **all** of the changes in the PR match these paths/regexes
- `included-regions` - trigger the step if **all** of the changes in the PR match these paths/regexes
- `trigger-phrase` - trigger this step, and ignore all other steps, if the build was triggered by a comment and that comment matches this regex
- Note that each step has an automatic phrase of `.*run\\W+elasticsearch-ci/<step-name>.*`
- Various configurations for filtering/activating steps based on labels, changed files, etc. See below.
- Replacing `$SNAPSHOT_BWC_VERSIONS` in pipelines with an array of versions from `.ci/snapshotBwcVersions`
- Duplicating any step with `bwc_template: true` for each BWC version in `.ci/bwcVersions` (see the sketch below)
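
As an illustration of the last two items, a hypothetical pair of steps (labels and commands are invented for the example):

```yaml
  # $SNAPSHOT_BWC_VERSIONS is replaced by the generator with the array of
  # versions from .ci/snapshotBwcVersions.
  - label: example-bwc-snapshots
    command: .ci/scripts/run-gradle.sh bwcTestSnapshots
    env:
      BWC_VERSION: $SNAPSHOT_BWC_VERSIONS

  # Duplicated by the generator once per version in .ci/bwcVersions, with the
  # version substituted into the command.
  - label: example-full-bwc
    bwc_template: true
    command: .ci/scripts/run-gradle.sh v$BWC_VERSION#bwcTest
```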

@@ -21,18 +16,75 @@ The generator handles the following:

Pipelines are in [`.buildkite/pipelines/pull-request`](../../pipelines/pull-request). They are automatically picked up and given a name based on their filename.


## Setup

- [Install bun](https://bun.sh/)
- `npm install -g bun` will work if you already have `npm`
- `cd .buildkite; bun install` to install dependencies

## Run tests
## Testing

Testing the pipeline generator is done mostly via snapshot tests, which generate pipeline objects using the pipeline configurations in `mocks/pipelines` and then compare them to previously-generated snapshots in `__snapshots__` to confirm that they are correct.

The mock pipeline configurations should, therefore, try to cover all of the various features of the generator (allow-labels, skip-labels, etc.).

Snapshots are generated and managed automatically whenever you add a test that makes a snapshot assertion. They work much like Jest snapshots.
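
A snapshot test is an ordinary `bun:test` case that asserts against a stored snapshot. A minimal sketch (the `generatePipelines` import and its signature are assumptions for illustration, not the real API):

```typescript
import { expect, test } from "bun:test";

// Hypothetical import; the actual generator module may differ.
import { generatePipelines } from "../pipelines";

test("filters steps for a docs-only change", () => {
  // Build pipeline objects from the mock configurations, then compare the
  // result to the stored snapshot in __snapshots__.
  const pipelines = generatePipelines("mocks/pipelines", {
    changedFiles: ["docs/example.md"],
  });
  expect(pipelines).toMatchSnapshot();
});
```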

### Run tests

```bash
cd .buildkite
bun test
```

If you need to regenerate the snapshots, run `bun test --update-snapshots`.

## Pipeline Configuration

The `config:` property at the top of pipelines inside `.buildkite/pipelines/pull-request` is a custom property used by our pipeline generator. It is not used by Buildkite.

All of the pipelines in this directory are evaluated whenever CI for a pull request is started, and the steps are filtered and combined into one pipeline based on the properties in `config:` and the state of the pull request.

The various configurations available mirror what we were using in our Jenkins pipelines.
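
A hypothetical example combining several of the properties documented below (the file name and step are invented):

```yaml
# .buildkite/pipelines/pull-request/example.yml
config:
  skip-labels: [">test-mute"]
  excluded-regions: ["^docs/.*", "^x-pack/docs/.*"]

steps:
  - label: example-part1
    command: .ci/scripts/run-gradle.sh checkPart1
```

The `config:` block is consumed by the generator; only the steps themselves end up in the combined Buildkite pipeline.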

### Config Properties

#### `allow-labels`

- Type: `string|string[]`
- Example: `["test-full-bwc"]`

Only trigger a step if the PR has one of these labels.

#### `skip-labels`

- Type: `string|string[]`
- Example: `>test-mute`

Don't trigger the step if the PR has one of these labels.

#### `excluded-regions`

- Type: `string|string[]` - must be JavaScript regexes
- Example: `["^docs/.*", "^x-pack/docs/.*"]`

Exclude the pipeline if all of the changed files in the PR match at least one regex. For the example above: don't run the step if all of the changed files are docs changes.

#### `included-regions`

- Type: `string|string[]` - must be JavaScript regexes
- Example: `["^docs/.*", "^x-pack/docs/.*"]`

Only include the pipeline if all of the changed files in the PR match at least one regex. For the example above: only run the step if all of the changed files are docs changes.

This is particularly useful for having a step that only runs, for example, when all of the other steps get filtered out because of the `excluded-regions` property.

#### `trigger-phrase`

- Type: `string` - must be a JavaScript regex
- Example: `"^run\\W+elasticsearch-ci/test-full-bwc.*"`
- Default: `.*run\\W+elasticsearch-ci/<step-name>.*` (`<step-name>` is generated from the filename of the yml file).

Trigger this step, and ignore all other steps, if the build was triggered by a comment and that comment matches this regex.

Note that the entire build itself is triggered via [`.buildkite/pull-requests.json`](../pull-requests.json). So, a comment has to first match the trigger configured there.
13 changes: 13 additions & 0 deletions .github/CODEOWNERS
@@ -26,3 +26,16 @@ x-pack/plugin/core/src/main/java/org/elasticsearch/xpack/core/security/authz/sto
# APM Data index templates, etc.
x-pack/plugin/apm-data/src/main/resources @elastic/apm-server
x-pack/plugin/apm-data/src/yamlRestTest/resources @elastic/apm-server

# Delivery
gradle @elastic/es-delivery
build-conventions @elastic/es-delivery
build-tools @elastic/es-delivery
build-tools-internal @elastic/es-delivery
*.gradle @elastic/es-delivery
.buildkite @elastic/es-delivery
.ci @elastic/es-delivery
.idea @elastic/es-delivery
distribution/src @elastic/es-delivery
distribution/packages/src @elastic/es-delivery
distribution/docker/src @elastic/es-delivery
AggregatorBenchmark.java
@@ -27,11 +27,9 @@
import org.elasticsearch.compute.data.BlockFactory;
import org.elasticsearch.compute.data.BooleanBlock;
import org.elasticsearch.compute.data.BytesRefBlock;
import org.elasticsearch.compute.data.DoubleArrayVector;
import org.elasticsearch.compute.data.DoubleBlock;
import org.elasticsearch.compute.data.ElementType;
import org.elasticsearch.compute.data.IntBlock;
import org.elasticsearch.compute.data.LongArrayVector;
import org.elasticsearch.compute.data.LongBlock;
import org.elasticsearch.compute.data.Page;
import org.elasticsearch.compute.operator.AggregationOperator;
@@ -66,7 +64,10 @@ public class AggregatorBenchmark {
private static final int OP_COUNT = 1024;
private static final int GROUPS = 5;

private static final BigArrays BIG_ARRAYS = BigArrays.NON_RECYCLING_INSTANCE; // TODO real big arrays?
private static final BlockFactory blockFactory = BlockFactory.getInstance(
new NoopCircuitBreaker("noop"),
BigArrays.NON_RECYCLING_INSTANCE // TODO real big arrays?
);

private static final String LONGS = "longs";
private static final String INTS = "ints";
@@ -116,8 +117,7 @@ public class AggregatorBenchmark {
@Param({ VECTOR_LONGS, HALF_NULL_LONGS, VECTOR_DOUBLES, HALF_NULL_DOUBLES })
public String blockType;

private static Operator operator(String grouping, String op, String dataType) {
DriverContext driverContext = driverContext();
private static Operator operator(DriverContext driverContext, String grouping, String op, String dataType) {
if (grouping.equals("none")) {
return new AggregationOperator(
List.of(supplier(op, dataType, 0).aggregatorFactory(AggregatorMode.SINGLE).apply(driverContext)),
@@ -154,25 +154,25 @@ private static Operator operator(String grouping, String op, String dataType) {

private static AggregatorFunctionSupplier supplier(String op, String dataType, int dataChannel) {
return switch (op) {
case COUNT -> CountAggregatorFunction.supplier(BIG_ARRAYS, List.of(dataChannel));
case COUNT -> CountAggregatorFunction.supplier(List.of(dataChannel));
case COUNT_DISTINCT -> switch (dataType) {
case LONGS -> new CountDistinctLongAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel), 3000);
case DOUBLES -> new CountDistinctDoubleAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel), 3000);
case LONGS -> new CountDistinctLongAggregatorFunctionSupplier(List.of(dataChannel), 3000);
case DOUBLES -> new CountDistinctDoubleAggregatorFunctionSupplier(List.of(dataChannel), 3000);
default -> throw new IllegalArgumentException("unsupported data type [" + dataType + "]");
};
case MAX -> switch (dataType) {
case LONGS -> new MaxLongAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case DOUBLES -> new MaxDoubleAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case LONGS -> new MaxLongAggregatorFunctionSupplier(List.of(dataChannel));
case DOUBLES -> new MaxDoubleAggregatorFunctionSupplier(List.of(dataChannel));
default -> throw new IllegalArgumentException("unsupported data type [" + dataType + "]");
};
case MIN -> switch (dataType) {
case LONGS -> new MinLongAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case DOUBLES -> new MinDoubleAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case LONGS -> new MinLongAggregatorFunctionSupplier(List.of(dataChannel));
case DOUBLES -> new MinDoubleAggregatorFunctionSupplier(List.of(dataChannel));
default -> throw new IllegalArgumentException("unsupported data type [" + dataType + "]");
};
case SUM -> switch (dataType) {
case LONGS -> new SumLongAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case DOUBLES -> new SumDoubleAggregatorFunctionSupplier(BIG_ARRAYS, List.of(dataChannel));
case LONGS -> new SumLongAggregatorFunctionSupplier(List.of(dataChannel));
case DOUBLES -> new SumDoubleAggregatorFunctionSupplier(List.of(dataChannel));
default -> throw new IllegalArgumentException("unsupported data type [" + dataType + "]");
};
default -> throw new IllegalArgumentException("unsupported op [" + op + "]");
@@ -432,24 +432,24 @@ private static void checkUngrouped(String prefix, String op, String dataType, Pa
}
}

private static Page page(String grouping, String blockType) {
Block dataBlock = dataBlock(blockType);
private static Page page(BlockFactory blockFactory, String grouping, String blockType) {
Block dataBlock = dataBlock(blockFactory, blockType);
if (grouping.equals("none")) {
return new Page(dataBlock);
}
List<Block> blocks = groupingBlocks(grouping, blockType);
return new Page(Stream.concat(blocks.stream(), Stream.of(dataBlock)).toArray(Block[]::new));
}

private static Block dataBlock(String blockType) {
private static Block dataBlock(BlockFactory blockFactory, String blockType) {
return switch (blockType) {
case VECTOR_LONGS -> new LongArrayVector(LongStream.range(0, BLOCK_LENGTH).toArray(), BLOCK_LENGTH).asBlock();
case VECTOR_DOUBLES -> new DoubleArrayVector(
case VECTOR_LONGS -> blockFactory.newLongArrayVector(LongStream.range(0, BLOCK_LENGTH).toArray(), BLOCK_LENGTH).asBlock();
case VECTOR_DOUBLES -> blockFactory.newDoubleArrayVector(
LongStream.range(0, BLOCK_LENGTH).mapToDouble(l -> Long.valueOf(l).doubleValue()).toArray(),
BLOCK_LENGTH
).asBlock();
case MULTIVALUED_LONGS -> {
var builder = LongBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newLongBlockBuilder(BLOCK_LENGTH);
builder.beginPositionEntry();
for (int i = 0; i < BLOCK_LENGTH; i++) {
builder.appendLong(i);
@@ -462,15 +462,15 @@ private static Block dataBlock(String blockType) {
yield builder.build();
}
case HALF_NULL_LONGS -> {
var builder = LongBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newLongBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
builder.appendLong(i);
builder.appendNull();
}
yield builder.build();
}
case HALF_NULL_DOUBLES -> {
var builder = DoubleBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newDoubleBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
builder.appendDouble(i);
builder.appendNull();
@@ -502,7 +502,7 @@ private static Block groupingBlock(String grouping, String blockType) {
};
return switch (grouping) {
case LONGS -> {
var builder = LongBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newLongBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
for (int v = 0; v < valuesPerGroup; v++) {
builder.appendLong(i % GROUPS);
@@ -511,7 +511,7 @@
yield builder.build();
}
case INTS -> {
var builder = IntBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newIntBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
for (int v = 0; v < valuesPerGroup; v++) {
builder.appendInt(i % GROUPS);
@@ -520,7 +520,7 @@
yield builder.build();
}
case DOUBLES -> {
var builder = DoubleBlock.newBlockBuilder(BLOCK_LENGTH);
var builder = blockFactory.newDoubleBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
for (int v = 0; v < valuesPerGroup; v++) {
builder.appendDouble(i % GROUPS);
@@ -529,7 +529,7 @@
yield builder.build();
}
case BOOLEANS -> {
BooleanBlock.Builder builder = BooleanBlock.newBlockBuilder(BLOCK_LENGTH);
BooleanBlock.Builder builder = blockFactory.newBooleanBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
for (int v = 0; v < valuesPerGroup; v++) {
builder.appendBoolean(i % 2 == 1);
@@ -538,7 +538,7 @@
yield builder.build();
}
case BYTES_REFS -> {
BytesRefBlock.Builder builder = BytesRefBlock.newBlockBuilder(BLOCK_LENGTH);
BytesRefBlock.Builder builder = blockFactory.newBytesRefBlockBuilder(BLOCK_LENGTH);
for (int i = 0; i < BLOCK_LENGTH; i++) {
for (int v = 0; v < valuesPerGroup; v++) {
builder.appendBytesRef(bytesGroup(i % GROUPS));
@@ -574,8 +574,9 @@ private static void run(String grouping, String op, String blockType, int opCoun
default -> throw new IllegalArgumentException();
};

Operator operator = operator(grouping, op, dataType);
Page page = page(grouping, blockType);
DriverContext driverContext = driverContext();
Operator operator = operator(driverContext, grouping, op, dataType);
Page page = page(driverContext.blockFactory(), grouping, blockType);
for (int i = 0; i < opCount; i++) {
operator.addInput(page);
}
@@ -584,9 +585,6 @@
}

static DriverContext driverContext() {
return new DriverContext(
BigArrays.NON_RECYCLING_INSTANCE,
BlockFactory.getInstance(new NoopCircuitBreaker("noop"), BigArrays.NON_RECYCLING_INSTANCE)
);
return new DriverContext(BigArrays.NON_RECYCLING_INSTANCE, blockFactory);
}
}