
Client fs streaming API loses data when a file is rotated and a log line is split between two files #11653

Open
jstachowiak opened this issue Dec 9, 2021 · 5 comments
Labels
stage/accepted Confirmed, and intend to work on. No timeline commitment though. theme/allocation API theme/logging type/bug

Comments

@jstachowiak

Nomad version

Nomad v1.2.2 (78b8c171a211f967a8b297a88a7e844b3543f2b0)

Operating system and Environment details

Ubuntu 20.04.3 LTS 4.19.128-microsoft-standard

Issue

When you stream the contents of a log file using /client/fs/stream/:alloc_id or logs with /client/fs/logs/:alloc_id, data loss occurs. This happens when a log line is split between two files and the contents are streamed from a file that is currently being rotated. Both files on disk contain the data, but the API only sends a frame with the data from the next file, so the first part of the split log line is lost. This makes it very hard to reliably ship logs to a log aggregation system.

Files in alloc/logs:

983044 httpd.stdout.0
41000 httpd.stdout.1

Captured frames:

{ File: "alloc/logs/httpd.stdout.0", Offset: 65536, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 131072, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 196608, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 262144, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 270337, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 335873, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 401409, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 466945, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 475138, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 540674, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 606210, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 671746, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 679939, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 745475, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 811011, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 876547, ... }
{ File: "alloc/logs/httpd.stdout.0", Offset: 819204, ... }
-> MISSING FRAMES
{ File: "alloc/logs/httpd.stdout.1", Offset: 40961, ... }
{ File: "alloc/logs/httpd.stdout.1", Offset: 41000, ... }

In my case, every log line is 204800 bytes (+1 for the EOL). Log lines are never batched before being flushed, even when the frame size has not been reached.

  1. How much data did we lose?
     204801 - 40961 (the second part of the split log line in "httpd.stdout.1") = 163840 bytes
  2. How do we know that no frames were sent for the rest of "httpd.stdout.0"?
     163840 + 819204 (last streamed offset) = 983044 (file size); the sketch below automates this offset check.
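The offset check can be automated with a small consumer. A minimal Go sketch (my own, not part of Nomad; it assumes the captured stream is a sequence of concatenated JSON objects with File and Offset fields, as in the frames shown above):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
)

// frame mirrors the fields visible in the captured frames above; the
// exact JSON shape of the stream is an assumption here.
type frame struct {
	File   string `json:"File"`
	Offset int64  `json:"Offset"`
}

func main() {
	dec := json.NewDecoder(os.Stdin) // e.g. ./check < nomad.stream
	last := map[string]int64{}
	for {
		var f frame
		if err := dec.Decode(&f); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if f.Offset > last[f.File] {
			last[f.File] = f.Offset
		}
	}
	// Compare each file's last streamed offset against its on-disk size:
	// here 983044 - 819204 = 163840 bytes never streamed for httpd.stdout.0.
	for file, off := range last {
		fmt.Printf("%s: last streamed offset %d\n", file, off)
	}
}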

Reproduction steps

  1. Create a task that outputs lines longer than lineScanLimit = 32 * 1024 (see the sketch after this list).
  2. Start capturing log frames, e.g., using curl:
curl -sSN "http://localhost:4646/v1/client/fs/logs/:alloc_id?task=:task&type=stdout&follow=true" > nomad.stream
  3. Create enough logs to force rotation.
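For step 1, any program that keeps emitting lines longer than 32 KiB works. A minimal Go generator (the 204800-byte line length matches this report; everything else is arbitrary):

package main

import (
	"bufio"
	"os"
	"strings"
)

func main() {
	// Each line is 204800 bytes plus a newline, well above the
	// lineScanLimit of 32 * 1024, so rotation can split a line.
	line := strings.Repeat("x", 204800)
	w := bufio.NewWriter(os.Stdout)
	defer w.Flush()
	for {
		w.WriteString(line)
		w.WriteByte('\n')
		w.Flush() // flush each line immediately; no batching
	}
}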

Expected Result

The stream should contain complete logs.

Actual Result

Missing frames.

Job file (if appropriate)

Nomad Server logs (if appropriate)

Nomad Client logs (if appropriate)

@tgross
Member

tgross commented Dec 9, 2021

Hi @jstachowiak! Thanks for the detailed report. The log streaming API largely reuses the code for streaming files (ref fs_endpoint.go#L642), so that's the first place to look for where the frame might be getting dropped.

Were you able to confirm that the logs actually made it to disk? If they don't show up in the streaming API, that's a bug for sure, but if they don't show up on disk either, that's an even bigger bug.

@tgross tgross moved this from Needs Triage to Needs Roadmapping in Nomad - Community Issues Triage Dec 9, 2021
@jstachowiak
Author

Hello @tgross, thanks for the quick reply! Yes, I can confirm the logs are saved to disk.

@arkadiuss
Contributor

Hello @tgross!
I'm also interested in resolving this issue. I looked through the code and ran a small test. Here are my findings.

From the code's perspective, the issue seems to be caused by the incorrect assumption that the creation of the next log file means the stream from the previous one is complete. In client/fs_endpoint.go the logic is as follows:

#591 eofCancelCh = blockUntilNextLog(ctx, fs, logPath, task, logType, idx+1) - creates a channel that emits an event when a file with a higher index is created
#595 f.streamFile(ctx, openOffset, p, 0, fs, framer, eofCancelCh) - the channel is passed to the streamFile function
#716
select {
case <-changes.Modified:
    // ...
case <-changes.Deleted:
    // ...
case <-changes.Truncated:
    // ...
case <-framer.ExitCh():
    // ...
case <-ctx.Done():
    // ...
case err, ok := <-eofCancelCh:
    // ...
    return err
}

The Modified and eofCancelCh events are treated equally, and there is a good chance that the second one will arrive before the final Modified event has been handled.

As far as I understand, these events come from a file watcher, so there is no way to synchronize them with the writes to the file. A standalone demo of the race is below.
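When several cases of a Go select are ready at the same time, one is chosen pseudo-randomly, so the cancel case can win even though a Modified event for the file's final bytes is also pending. A minimal demo (my own sketch, not Nomad code):

package main

import "fmt"

func main() {
	modified := make(chan struct{}, 1)
	eofCancel := make(chan struct{}, 1)

	// Both events are already pending, as happens when the last write
	// to the old file races with the creation of the next one.
	modified <- struct{}{}
	eofCancel <- struct{}{}

	// With multiple ready cases, select picks one at random.
	select {
	case <-modified:
		fmt.Println("Modified won: the final bytes would still be read")
	case <-eofCancel:
		fmt.Println("eofCancel won: streaming stops, trailing bytes are lost")
	}
}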

As a small test, I added a 5-second delay at line 812, after which the endpoint worked correctly and @jstachowiak's test passed. That's definitely not a solution, but it confirms the findings.

I will be happy to discuss and work with you on possible solutions.

@tgross
Member

tgross commented Dec 21, 2021

Thanks @arkadiuss, that's helpful!

Given that we've received the truncate event here, I'm wondering if it might make sense to add an attempt to read one more frame of data at fs_endpoint.go#L722. But I'd want to look at TestFS_streamFile_Truncate to see why we're not catching this problem there. If you were interested in looking into this in more detail (if we don't get to it sooner rather than later), that'd be a good place to start and I'd be happy to review any PR you submit.
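To make the "one more read" idea concrete, here is a rough sketch (hypothetical names, not Nomad's actual code) of draining the current file after the rotation signal fires, so a write that raced the signal is still forwarded:

package streamfix // hypothetical package for this sketch

import "io"

// drainRemaining sketches the "one more read" idea: once the next-file
// signal arrives, keep reading the current file until EOF so bytes
// written just before rotation are still sent as a final frame.
func drainRemaining(r io.Reader, send func([]byte)) error {
	buf := make([]byte, 64*1024)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			send(buf[:n]) // forward the trailing bytes
		}
		if err == io.EOF {
			return nil // current file fully drained; switch to the next
		}
		if err != nil {
			return err
		}
	}
}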

@tgross tgross added stage/accepted Confirmed, and intend to work on. No timeline commitment though. and removed stage/needs-investigation labels Dec 21, 2021
@arkadiuss
Contributor

What timing! Thank you for the response. Performing one more read is the solution proposed in the MR above. However, it's not a truncate event that is received: nothing is removed from the file; only the next one is created. In fact, if I understand the logging correctly, a truncate event should probably never happen here. We receive the Modified event and the cancel event together, and a random one is taken.

I haven't touched the tests yet, but I will.
