[receiver/filelog] Fix issue where flushed tokens could be truncated #37596
Merged · 3 commits · +282 −4
New changelog entry:

```yaml
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: bug_fix

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: filelogreceiver

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Fix issue where flushed tokens could be truncated.

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [35042]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.
# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: []
```
New package pkg/stanza/tokenlen:

```go
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0

package tokenlen // import "github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/tokenlen"

import "bufio"

// State tracks the potential length of a token before any terminator checking
type State struct {
	MinimumLength int
}

// Func wraps a bufio.SplitFunc to track potential token lengths
// Records the length of the data before delegating to the wrapped function
func (s *State) Func(splitFunc bufio.SplitFunc) bufio.SplitFunc {
	if s == nil {
		return splitFunc
	}

	return func(data []byte, atEOF bool) (advance int, token []byte, err error) {
		// Note the potential token length but don't update state until we know
		// whether or not a token is actually returned
		potentialLen := len(data)

		advance, token, err = splitFunc(data, atEOF)
		if advance == 0 && token == nil && err == nil {
			// The splitFunc is asking for more data. Remember how much
			// we saw previously so the buffer can be sized appropriately.
			s.MinimumLength = potentialLen
		} else {
			// A token was returned. This state represented that token, so clear it.
			s.MinimumLength = 0
		}
		return advance, token, err
	}
}
```
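For context, here is a minimal standalone sketch (not part of the PR) of how the wrapper behaves when the split function is driven directly, first with partial input and then with a terminated line. Only `tokenlen.State` and its import path come from the PR; the rest is illustrative.

```go
package main

import (
	"bufio"
	"fmt"

	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/tokenlen"
)

func main() {
	state := &tokenlen.State{}
	split := state.Func(bufio.ScanLines)

	// No newline yet: ScanLines asks for more data, so the wrapper records
	// how much unterminated data it has already seen.
	adv, tok, err := split([]byte("partial line"), false)
	fmt.Println(adv, tok, err, state.MinimumLength) // 0 [] <nil> 12

	// A terminated line: a token is returned, so the recorded length is cleared.
	adv, tok, err = split([]byte("partial line\n"), false)
	fmt.Println(adv, string(tok), err, state.MinimumLength) // 13 partial line <nil> 0
}
```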
Tests for the new tokenlen package:

```go
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0

package tokenlen

import (
	"bufio"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestTokenLenState_Func(t *testing.T) {
	cases := []struct {
		name          string
		input         []byte
		atEOF         bool
		expectedLen   int
		expectedToken []byte
		expectedAdv   int
		expectedErr   error
	}{
		{
			name:        "no token yet",
			input:       []byte("partial"),
			atEOF:       false,
			expectedLen: len("partial"),
		},
		{
			name:          "complete token",
			input:         []byte("complete\ntoken"),
			atEOF:         false,
			expectedLen:   0, // should clear state after finding token
			expectedToken: []byte("complete"),
			expectedAdv:   len("complete\n"),
		},
		{
			name:        "growing token",
			input:       []byte("growing"),
			atEOF:       false,
			expectedLen: len("growing"),
		},
		{
			name:          "flush at EOF",
			input:         []byte("flush"),
			atEOF:         true,
			expectedLen:   0, // should clear state after flushing
			expectedToken: []byte("flush"),
			expectedAdv:   len("flush"),
		},
	}

	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			state := &State{}
			splitFunc := state.Func(bufio.ScanLines)

			adv, token, err := splitFunc(tc.input, tc.atEOF)
			require.Equal(t, tc.expectedErr, err)
			require.Equal(t, tc.expectedToken, token)
			require.Equal(t, tc.expectedAdv, adv)
			require.Equal(t, tc.expectedLen, state.MinimumLength)
		})
	}
}

func TestTokenLenState_GrowingToken(t *testing.T) {
	state := &State{}
	splitFunc := state.Func(bufio.ScanLines)

	// First call with partial token
	adv, token, err := splitFunc([]byte("part"), false)
	require.NoError(t, err)
	require.Nil(t, token)
	require.Equal(t, 0, adv)
	require.Equal(t, len("part"), state.MinimumLength)

	// Second call with longer partial token
	adv, token, err = splitFunc([]byte("partial"), false)
	require.NoError(t, err)
	require.Nil(t, token)
	require.Equal(t, 0, adv)
	require.Equal(t, len("partial"), state.MinimumLength)

	// Final call with complete token
	adv, token, err = splitFunc([]byte("partial\ntoken"), false)
	require.NoError(t, err)
	require.Equal(t, []byte("partial"), token)
	require.Equal(t, len("partial\n"), adv)
	require.Equal(t, 0, state.MinimumLength) // State should be cleared after emitting token
}
```
Review discussion:
Please correct me if I'm wrong, but I'm confused. Wouldn't the flush function time out regardless of buffer resizing?
The timer would indeed be expired. What this change does, though, is make sure that the token we pass into the flush function is at least as long as the one we passed in previously.
Example:
Poll 1: The buffer starts at 16kB, grows to 20kB, and finds no terminator. The flush timer hasn't expired, so we return. (At this point, the tokenlen function has captured the knowledge that there is 20kB of unterminated content at EOF.)
Poll 2: We see that there was 20kB of unterminated content left after the previous poll, so we initialize the scanner with a 20kB buffer. This means it will read the entire token on the first try. The timer has expired, so we flush the token.
The fix avoids the situation where Poll 2 starts with a 16kB buffer and ends up flushing only a 16kB (truncated) token.
Okay, now I understand. So instead of resizing the buffer x times, we configure it to be the "potential length" at the beginning, and we will read at least "potential length" worth of data.
That's the idea. I think the name of the field could be clearer; `MinimumLength` is better because it conveys that we know the next token will be at least that long. I'll push the update shortly.
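To make the buffer-sizing idea discussed above concrete, here is a rough sketch. It is not the receiver's actual code: the startPoll helper, the constants, and the direct use of bufio.Scanner are illustrative assumptions; only tokenlen.State and its MinimumLength field come from this PR.

```go
package sketch

import (
	"bufio"
	"io"

	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/tokenlen"
)

const (
	defaultBufSize = 16 * 1024   // e.g. the 16kB starting buffer from the example above
	maxLogSize     = 1024 * 1024 // hypothetical upper bound on a single token
)

// startPoll is a hypothetical helper showing the intent of the fix: if the
// previous poll left unterminated data behind, the next poll starts with a
// buffer at least that large, so a forced flush emits the whole token rather
// than a truncated one.
func startPoll(r io.Reader, state *tokenlen.State) *bufio.Scanner {
	bufSize := defaultBufSize
	if state.MinimumLength > bufSize {
		// The previous poll saw this many bytes without a terminator, so the
		// next token is known to be at least this long.
		bufSize = state.MinimumLength
	}

	scanner := bufio.NewScanner(r)
	scanner.Buffer(make([]byte, 0, bufSize), maxLogSize)
	scanner.Split(state.Func(bufio.ScanLines))
	return scanner
}
```

The real reader wires in its own split function and flush logic; the point is only that MinimumLength carries the "at least this long" knowledge from one poll to the next.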