Improve registry file migration performance (elastic#20717)
## What does this PR do?

Ensure that fsync is called only once after the migration of old state entries is complete.

## Why is it important?

The registry uses a checkpoint predicate that, when it returns true, writes all
state to disk and calls fsync. The checkpoint operation is supposed to be
disabled while migrating the old registry file.
During migration, the old state entries are copied directly (after cleanup and
schema changes are applied). The old state file is only deleted after the
migration of all states is complete.

Unfortunately, the checkpoint predicate returned true instead of false, which
triggered a checkpoint operation (and fsync) for every single state being
migrated. The fix disables the per-operation checkpoint and finalizes the
migration by calling Checkpoint directly once all states have been copied.

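To make the effect of the predicate concrete, here is a toy model in plain Go (illustrative only, not the beats API; all names are made up): with the predicate always returning true, every migrated state triggers a full checkpoint and fsync, while the fixed flow disables the predicate and issues exactly one checkpoint at the end.

```go
package main

import "fmt"

// Toy model of the registry checkpoint logic: every Set appends to an
// operation log and then asks the checkpoint predicate whether to rewrite
// the full state file and fsync it.
type store struct {
	ops        uint64
	fsyncs     int
	checkpoint func(ops uint64) bool
}

func (s *store) Set() {
	s.ops++
	if s.checkpoint(s.ops) {
		s.Checkpoint()
	}
}

// Checkpoint stands in for "write all state to disk and fsync".
func (s *store) Checkpoint() { s.fsyncs++ }

func main() {
	const entries = 100000

	buggy := &store{checkpoint: func(uint64) bool { return true }}  // old predicate
	fixed := &store{checkpoint: func(uint64) bool { return false }} // fixed predicate

	for i := 0; i < entries; i++ {
		buggy.Set()
		fixed.Set()
	}
	fixed.Checkpoint() // finalize the migration with a single explicit checkpoint

	fmt.Printf("per-state predicate: %d fsyncs, disabled predicate: %d fsync(s)\n",
		buggy.fsyncs, fixed.fsyncs)
}
```
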
The PR also adds a benchmark (each "op" is one migration attempt). Before this fix, `go test` killed the run after 10 minutes once it reached 10k entries:

```
BenchmarkMigration0To1/1-32  	             286	   4203353 ns/op
BenchmarkMigration0To1/10-32 	              34	  35730680 ns/op
BenchmarkMigration0To1/100-32         	       2	 720890839 ns/op
BenchmarkMigration0To1/1000-32        	       1	31633569085 ns/op
... test timed out after 10min
```

Benchmark results with the fix (migrating 100k entries took ~7.6s):

```
BenchmarkMigration0To1/1-32  	             274	   4371400 ns/op
BenchmarkMigration0To1/10-32 	             259	   4639209 ns/op
BenchmarkMigration0To1/100-32         	     100	  13374147 ns/op
BenchmarkMigration0To1/1000-32        	      13	 104220944 ns/op
BenchmarkMigration0To1/10000-32       	       2	 916656798 ns/op
BenchmarkMigration0To1/100000-32      	       1	7616648790 ns/op
PASS
```

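The benchmark is in the new `migrate_bench_test.go` shown below; it should be reproducible with something like `go test -run NONE -bench BenchmarkMigration0To1` from within `filebeat/registrar/` (invocation assumed here, not part of the PR).
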
Closes elastic#20705

(cherry picked from commit 03748b3)
Steffen Siering authored and kvch committed Aug 25, 2020
1 parent 2e81f11 commit abdea5c
Showing 6 changed files with 202 additions and 8 deletions.
38 changes: 38 additions & 0 deletions CHANGELOG.next.asciidoc
@@ -71,6 +71,44 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d
- Fixed `cloudfoundry.access` to have the correct `cloudfoundry.app.id` contents. {pull}17847[17847]
- Fixing `ingress_controller.` fields to be of type keyword instead of text. {issue}17834[17834]
- Fixed typo in log message. {pull}17897[17897]
- Fix Cisco ASA ASA 3020** and 106023 messages {pull}17964[17964]
- Unescape file name from SQS message. {pull}18370[18370]
- Improve cisco asa and ftd pipelines' failure handler to avoid mapping temporary fields. {issue}18391[18391] {pull}18392[18392]
- Fix source.address not being set for nginx ingress_controller {pull}18511[18511]
- Fix PANW module wrong mappings for bytes and packets counters. {issue}18522[18522] {pull}18525[18525]
- Fixed ingestion of some Cisco ASA and FTD messages when a hostname was used instead of an IP for NAT fields. {issue}14034[14034] {pull}18376[18376]
- Fix a rate limit related issue in httpjson input for Okta module. {issue}18530[18530] {pull}18534[18534]
- Fix `googlecloud.audit` pipeline to only take in fields that are explicitly defined by the dataset. {issue}18465[18465] {pull}18472[18472]
- Fix `o365.audit` failing to ingest events when ip address is surrounded by square brackets. {issue}18587[18587] {pull}18591[18591]
- Fix Kubernetes Watcher goroutine leaks when input config is invalid and `input.reload` is enabled. {issue}18629[18629] {pull}18630[18630]
- Okta module now sets the Elasticsearch `_id` field to the Okta UUID value contained in each system log to minimize the possibility of duplicating events. {pull}18953[18953]
- Fix improper nesting of session_issuer object in aws cloudtrail fileset. {issue}18894[18894] {pull}18915[18915]
- Fix `o365` module ignoring `var.api` settings. {pull}18948[18948]
- Fix `netflow` module to support 7 bytepad for IPFIX template. {issue}18098[18098]
- Fix Cisco ASA dissect pattern for 313008 & 313009 messages. {pull}19149[19149]
- Fix date and timestamp formats for fortigate module {pull}19316[19316]
- Fix memory leak in tcp and unix input sources. {pull}19459[19459]
- Add missing `default_field: false` to aws filesets fields.yml. {pull}19568[19568]
- Fix tls mapping in suricata module {issue}19492[19492] {pull}19494[19494]
- Update container name for the azure filesets. {pull}19899[19899]
- Fix bug with empty filter values in system/service {pull}19812[19812]
- Fix S3 input to trim delimiter /n from each log line. {pull}19972[19972]
- Ignore missing in Zeek module when dropping unecessary fields. {pull}19984[19984]
- Fix auditd module syscall table for ppc64 and ppc64le. {pull}20052[20052]
- Fix Filebeat OOMs on very long lines {issue}19500[19500], {pull}19552[19552]
- Fix s3 input parsing json file without expand_event_list_from_field. {issue}19902[19902] {pull}19962[19962] {pull}20370[20370]
- Fix millisecond timestamp normalization issues in CrowdStrike module {issue}20035[20035], {pull}20138[20138]
- Fix support for message code 106100 in Cisco ASA and FTD. {issue}19350[19350] {pull}20245[20245]
- Fix event.outcome logic for azure/siginlogs fileset {pull}20254[20254]
- Fix `fortinet` setting `event.timezone` to the system one when no `tz` field present {pull}20273[20273]
- Fix `okta` geoip lookup in pipeline for `destination.ip` {pull}20454[20454]
- Fix mapping exception in the `googlecloud/audit` dataset pipeline. {issue}18465[18465] {pull}20465[20465]
- Fix `cisco` asa and ftd parsing of messages 106102 and 106103. {pull}20469[20469]
- Improve validation checks for Azure configuration {issue}20369[20369] {pull}20389[20389]
- Fix event.kind for system/syslog pipeline {issue}20365[20365] {pull}20390[20390]
- Clone value when copy fields in processors to avoid crash. {issue}19206[19206] {pull}20500[20500]
- Fix event.type for zeek/ssl and duplicate event.category for zeek/connection {pull}20696[20696]
- Fix long registry migration times. {pull}20717[20717] {issue}20705[20705]

*Heartbeat*

16 changes: 10 additions & 6 deletions filebeat/registrar/migrate.go
@@ -31,7 +31,6 @@ import (
helper "github.com/elastic/beats/v7/libbeat/common/file"
"github.com/elastic/beats/v7/libbeat/logp"
"github.com/elastic/beats/v7/libbeat/paths"
"github.com/elastic/beats/v7/libbeat/statestore"
"github.com/elastic/beats/v7/libbeat/statestore/backend/memlog"
)

@@ -214,17 +213,15 @@ func (m *Migrator) updateToVersion1(regHome string) error {
registryBackend, err := memlog.New(logp.NewLogger("migration"), memlog.Settings{
Root: m.dataPath,
FileMode: m.permissions,
Checkpoint: func(_ uint64) bool { return true },
Checkpoint: func(sz uint64) bool { return false },
IgnoreVersionCheck: true,
})
if err != nil {
return errors.Wrap(err, "failed to create new registry backend")
}
defer registryBackend.Close()

reg := statestore.NewRegistry(registryBackend)
defer reg.Close()

store, err := reg.Get("filebeat")
store, err := registryBackend.Access("filebeat")
if err != nil {
return errors.Wrap(err, "failed to open filebeat registry store")
}
@@ -234,6 +231,13 @@
return errors.Wrap(err, "failed to migrate registry states")
}

if checkpointer, ok := store.(interface{ Checkpoint() error }); ok {
err := checkpointer.Checkpoint()
if err != nil {
return fmt.Errorf("failed to fsync filebeat storage state: %w", err)
}
}

if err := os.Remove(origDataFile); err != nil {
return errors.Wrapf(err, "migration complete but failed to remove original data file: %v", origDataFile)
}
141 changes: 141 additions & 0 deletions filebeat/registrar/migrate_bench_test.go
@@ -0,0 +1,141 @@
// Licensed to Elasticsearch B.V. under one or more contributor
// license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright
// ownership. Elasticsearch B.V. licenses this file to you under
// the Apache License, Version 2.0 (the "License"); you may
// not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

// +build linux darwin

package registrar

import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"

"github.com/elastic/beats/v7/filebeat/input/file"
libfile "github.com/elastic/beats/v7/libbeat/common/file"
)

var keep bool

func init() {
flag.BoolVar(&keep, "keep", false, "do not delete test directories")
}

func BenchmarkMigration0To1(b *testing.B) {
for _, entries := range []int{1, 10, 100, 1000, 10000, 100000} {
b.Run(fmt.Sprintf("%v", entries), func(b *testing.B) {
b.StopTimer()

dataHome := tempDir(b)
registryHome := filepath.Join(dataHome, "filebeat")
mkDir(b, registryHome)

metaPath := filepath.Join(registryHome, "meta.json")
dataPath := filepath.Join(registryHome, "data.json")

states := make([]file.State, entries)
for i := range states {
states[i] = file.State{
Id: fmt.Sprintf("123455-%v", i),
Source: fmt.Sprintf("/path/to/test/file-%v.log", i),
FileStateOS: libfile.StateOS{
Inode: uint64(i),
Device: 123455,
},
}
}

for i := 0; i < b.N; i++ {
b.StopTimer()
clearDir(b, registryHome)
// cleanup older run

writeFile(b, metaPath, []byte(`{"version": "0"}`))
func() {
f, err := os.Create(dataPath)
if err != nil {
b.Fatal(err)
}
defer f.Close()

enc := json.NewEncoder(f)
if err := enc.Encode(states); err != nil {
b.Fatal(err)
}
}()

migrator := &Migrator{
dataPath: dataHome,
permissions: 0600,
}

b.StartTimer()
if err := migrator.updateToVersion1(registryHome); err != nil {
b.Fatal(err)
}
}
})
}
}

func tempDir(t testing.TB) string {
cwd, err := os.Getwd()
if err != nil {
t.Fatal(err)
}

path, err := ioutil.TempDir(cwd, "")
if err != nil {
t.Fatal(err)
}

if !keep {
t.Cleanup(func() {
os.RemoveAll(path)
})
}
return path
}

func mkDir(t testing.TB, path string) {
if err := os.MkdirAll(path, 0700); err != nil {
t.Fatal(err)
}
}

func clearDir(t testing.TB, path string) {
old, err := ioutil.ReadDir(path)
if err != nil {
t.Fatal(err)
}
for _, info := range old {
if err := os.RemoveAll(info.Name()); err != nil {
t.Fatal(err)
}
}
}

func writeFile(t testing.TB, path string, contents []byte) {
t.Helper()
err := ioutil.WriteFile(path, contents, 0600)
if err != nil {
t.Fatal(err)
}
}
3 changes: 2 additions & 1 deletion filebeat/registrar/registrar.go
@@ -29,6 +29,7 @@ import (
"github.com/elastic/beats/v7/libbeat/logp"
"github.com/elastic/beats/v7/libbeat/monitoring"
"github.com/elastic/beats/v7/libbeat/statestore"
"github.com/elastic/beats/v7/libbeat/statestore/backend"
)

type Registrar struct {
@@ -300,7 +301,7 @@ func readStatesFrom(store *statestore.Store) ([]file.State, error) {
return states, nil
}

func writeStates(store *statestore.Store, states []file.State) error {
func writeStates(store backend.Store, states []file.State) error {
for i := range states {
key := fileStatePrefix + states[i].Id
if err := store.Set(key, states[i]); err != nil {
2 changes: 1 addition & 1 deletion filebeat/tests/system/test_registrar_upgrade.py
@@ -95,5 +95,5 @@ def validate_if_registry_is_moved_under_folder(self):
assert os.path.isdir(migrated_registry_dir)
assert os.path.isdir(migrated_registry_dir + "/filebeat")
assert os.path.isfile(migrated_registry_dir + "/filebeat/log.json")
assert os.path.isfile(migrated_registry_dir + "/filebeat/1.json")
assert os.path.isfile(migrated_registry_dir + "/filebeat/2.json")
assert os.path.isfile(migrated_registry_dir + "/filebeat/active.dat")
10 changes: 10 additions & 0 deletions libbeat/statestore/backend/memlog/store.go
@@ -196,6 +196,16 @@ func (s *store) Remove(key string) error {
return s.logOperation(&opRemove{K: key})
}

// Checkpoint triggers a state checkpoint operation. All state will be written
// to a new transaction data file and fsync'ed. The log file will be reset after
// a successful write.
func (s *store) Checkpoint() error {
s.lock.Lock()
defer s.lock.Unlock()

return s.disk.WriteCheckpoint(s.mem.table)
}

// lopOperation ensures that the diskstore reflects the recent changes to the
// in memory store by either triggering a checkpoint operations or adding the
// operation type to the update log file.
