Introduce cmap for distributing mutexes per documents #1051
Conversation
Walkthrough: The pull request introduces a concurrent map implementation in Go within the `pkg/cmap` package.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (7)
server/backend/sync/pubsub.go (1)
75-78: Add godoc for exported Closed method

The implementation is correct, but as this is an exported method, it should include godoc documentation explaining its purpose and thread-safety guarantees.

Add documentation:

```diff
+// Closed returns whether this subscription is closed.
+// This method is thread-safe and can be called concurrently.
 func (s *Subscription) Closed() bool {
 	return s.closed.Load()
 }
```

server/rpc/interceptors/default.go (1)
48-48: LGTM with suggestion for detailed debug logging.

The simplified error logging is clean and includes essential information. However, consider adding trace-level logging for request details to aid debugging when needed.

```diff
 func (i *DefaultInterceptor) WrapUnary(next connect.UnaryFunc) connect.UnaryFunc {
 	return func(
 		ctx context.Context,
 		req connect.AnyRequest,
 	) (connect.AnyResponse, error) {
 		start := gotime.Now()
 		resp, err := next(ctx, req)
 		reqLogger := logging.From(ctx)
+		// Log request details at trace level for debugging
+		reqLogger.Tracef("RPC details: %q, request: %+v", req.Spec().Procedure, req)
 		if err != nil {
 			reqLogger.Warnf("RPC : %q %s => %q", req.Spec().Procedure, gotime.Since(start), err)
 			return nil, connecthelper.ToStatusError(err)
 		}
 		return resp, nil
 	}
 }
```

pkg/cmap/cmap_test.go (3)
19-27: Group local imports separately.

Consider grouping the local import `github.com/yorkie-team/yorkie/pkg/cmap` separately from third-party imports.

```diff
 import (
 	"fmt"
+	"math/rand"
+	"sync"
+	"testing"
+	"time"
+
 	"github.com/stretchr/testify/assert"
-	"github.com/yorkie-team/yorkie/pkg/cmap"
-	"math/rand"
-	"sync"
-	"testing"
-	"time"
+
+	"github.com/yorkie-team/yorkie/pkg/cmap"
 )
```

🧰 Tools
🪛 golangci-lint
21-21: File is not `goimports`-ed with -local github.com/yorkie-team/yorkie (goimports)
29-76: Consider adding more test cases for edge scenarios.

The current test coverage is good for basic operations, but consider adding:
- Edge cases:
- Testing with nil values
- Testing with zero values
- Capacity-related tests:
- Testing with empty map
- Testing map growth
- Negative test cases:
- Deleting non-existent keys
- Multiple deletions of the same key
Would you like me to provide example implementations for these additional test cases?
91-103: Improve random number generation and callback parameter usage.

- While using `math/rand` is acceptable for test data generation, consider seeding it for reproducible tests:

```go
rand.Seed(time.Now().UnixNano())
```

- The `value` parameter in the Delete callback is unused:

```diff
-	m.Delete(key, func(value int, exists bool) bool {
+	m.Delete(key, func(_ int, exists bool) bool {
```

🧰 Tools
🪛 golangci-lint
[high] 91-91: G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand)
(gosec)
[high] 94-94: G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand)
(gosec)
[warning] 100-100: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
🪛 GitHub Check: build
[failure] 91-91:
G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)
[failure] 94-94:
G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)

server/packs/packs.go (1)
170-175: LGTM! Consider adding mutex contention metrics.

The addition of elapsed time logging is valuable for monitoring synchronization performance. This aligns well with the PR's focus on improving mutex handling.

Consider also logging the mutex wait time separately to specifically track the impact of the new distributed mutex implementation:

```diff
 logging.From(ctx).Infof(
-	"SYNC: '%s' is synced by '%s', push: %d, pull: %s, elapsed: %s",
+	"SYNC: '%s' is synced by '%s', push: %d, pull: %s, elapsed: %s, mutex_wait: %s",
 	docInfo.Key,
 	clientInfo.Key,
 	len(pushedChanges),
 	pullLog,
 	gotime.Since(start),
+	mutexWaitTime,
 )
```

pkg/cmap/cmap.go (1)
24-27: Consider using `sync.Map` for improved concurrency performance

This implementation uses a single mutex to protect the entire map, which might become a bottleneck under high concurrency. Consider using Go's `sync.Map` or implementing sharded locks to enhance performance in highly concurrent scenarios.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (9)
- pkg/cmap/cmap.go (1 hunks)
- pkg/cmap/cmap_test.go (1 hunks)
- server/backend/sync/memory/pubsub.go (4 hunks)
- server/backend/sync/pubsub.go (3 hunks)
- server/config.go (1 hunks)
- server/logging/logging.go (0 hunks)
- server/packs/packs.go (1 hunks)
- server/rpc/interceptors/default.go (1 hunks)
- server/rpc/interceptors/yorkie.go (1 hunks)
💤 Files with no reviewable changes (1)
- server/logging/logging.go
🧰 Additional context used
🪛 golangci-lint
pkg/cmap/cmap.go
[warning] 17-17: package-comments: should have a package comment
(revive)
pkg/cmap/cmap_test.go
21-21: File is not `goimports`-ed with -local github.com/yorkie-team/yorkie (goimports)
[high] 91-91: G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand)
(gosec)
[high] 94-94: G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand)
(gosec)
[high] 175-175: G404: Use of weak random number generator (math/rand or math/rand/v2 instead of crypto/rand)
(gosec)
[warning] 100-100: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
server/backend/sync/memory/pubsub.go
[warning] 48-48: exported: comment on exported method Subscriptions.Values should be of the form "Values ..."
(revive)
[warning] 133-133: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
🪛 GitHub Check: build
pkg/cmap/cmap.go
[failure] 17-17:
package-comments: should have a package comment (revive)

pkg/cmap/cmap_test.go
[failure] 91-91:
G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)
[failure] 94-94:
G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)
[failure] 175-175:
G404: Use of weak random number generator (math/rand instead of crypto/rand) (gosec)

server/backend/sync/memory/pubsub.go
[failure] 48-48:
exported: comment on exported method Subscriptions.Values should be of the form "Values ..." (revive)
🔇 Additional comments (11)
server/backend/sync/pubsub.go (3)
20-21: LGTM: Import follows standard Go conventions

The addition of the sync/atomic package is appropriate for the atomic operations being introduced.
70-73: LGTM: Thread-safe closure implementation

The use of CompareAndSwap ensures that the events channel is closed exactly once, preventing a potential panic from multiple closures in concurrent scenarios.
32-32: Verify atomic.Bool usage across the codebase

Let's ensure the atomic.Bool field is consistently accessed through atomic operations throughout the codebase.
Also applies to: 70-73, 75-78
✅ Verification successful
Let me gather more specific information about the atomic.Bool usage.
Atomic operations are properly implemented and used consistently
The atomic.Bool field 'closed' is correctly accessed through atomic operations throughout the codebase:
- CompareAndSwap is used in Close() to ensure thread-safe closure
- Load is used in Closed() method for safe state checking
- No direct field access bypassing atomic operations was found
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for any direct field access that might bypass atomic operations
rg -g '*.go' "subscription.*closed\s*[^.]|subscription.*closed\s*=\s*" --type go

# Search for any error handling paths that might affect subscription closure
ast-grep --pattern 'if err != nil { $$$ subscription.$$$ $$$ }'
```

Length of output: 256
Script:
```shell
#!/bin/bash
# Search for all atomic.Bool operations on closed field
rg -g '*.go' "closed\.(Store|Load|Swap|CompareAndSwap)" -B 2 -A 2

# Search for subscription struct definition and methods
ast-grep --pattern 'type subscription struct { $$$ }'

# Look for subscription close/shutdown related methods
ast-grep --pattern 'func ($_ *subscription) Close() { $$$ }'
```

Length of output: 732
server/rpc/interceptors/default.go (1)
47-50: Verify impact on performance monitoring.

With the removal of `SlowThreshold`, we've lost automatic detection of slow RPC calls. Given that this PR introduces concurrent document handling, monitoring performance becomes more critical.

Consider implementing metrics collection for RPC durations. This could be done using Prometheus metrics or similar monitoring solutions. For example:

```go
// Example metric
var rpcDurationHistogram = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "yorkie_rpc_duration_seconds",
		Help:    "RPC latency distributions.",
		Buckets: prometheus.ExponentialBuckets(0.001, 2, 10), // Start at 1ms
	},
	[]string{"procedure", "status"},
)
```

Let me verify if metrics are already implemented:
✅ Verification successful
Based on the search results, I can see that Yorkie already has comprehensive metrics implementation using Prometheus, including RPC duration tracking. Let me verify the specific metrics for RPC calls:
Let me try a different search pattern to find the metrics implementation:
Let me check one more time for RPC metrics in the Prometheus implementation:
RPC performance monitoring is already implemented via Prometheus metrics
The codebase already has comprehensive RPC performance monitoring through Prometheus metrics:

- RPC completion tracking via `serverHandledCounter`, which counts all RPCs with labels for type, service, method, and status code
- Specific duration tracking for operations:
  - `pushPullResponseSeconds` histogram for measuring response times
  - `pushPullSnapshotDurationSeconds` histogram for snapshot creation times

The removal of `SlowThreshold` does not impact monitoring capabilities, as these metrics provide better granularity for tracking performance, including:

- Total RPC counts by method and status
- Response time distributions via histograms
- Detailed labels for project, hostname, and operation types
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for metric implementations
rg -i "prometheus|metric|histogram|counter" --type go
```

Length of output: 34696
Script:
```shell
#!/bin/bash
# Search for RPC duration metrics implementation
ast-grep --pattern 'serverHandledCounter := promauto.With(reg).NewCounterVec(prometheus.CounterOpts{ $$$ })'

# Also check for any histogram metrics that might track duration
ast-grep --pattern 'NewHistogram(prometheus.HistogramOpts{ $$$ })'
```

Length of output: 183
Script:
```shell
#!/bin/bash
# Search for RPC duration/latency related metrics
rg -A 5 "NewHistogram.*duration|NewHistogram.*latency|NewHistogram.*seconds" --type go

# Also check the interceptor implementation where the warning is logged
cat server/rpc/interceptors/default.go
```

Length of output: 2848
Script:
```shell
#!/bin/bash
# Check the Prometheus metrics implementation
cat server/profiling/prometheus/metrics.go
```

Length of output: 13321
server/rpc/interceptors/yorkie.go (1)
185-185: Consider the impact of removing project name from logs.

While simplifying the logging context is generally good, removing the project name (`prj` field) might impact the ability to debug project-specific issues, especially in a multi-project environment. The project context is still available in the code but no longer in the logs.

Let's verify the logging changes across the codebase:
Consider keeping the project name in logs for better observability:
```diff
-ctx = logging.With(ctx, logging.New(i.requestID.next()))
+ctx = logging.With(ctx, logging.New(i.requestID.next(), logging.NewField("prj", project.Name)))
```

server/config.go (2)
59-59: Verify impact of increased snapshot interval.

The snapshot interval has been tripled from 1000ms to 3000ms. This reduces I/O pressure but increases the potential data-loss window in case of failures.

```shell
#!/bin/bash
# Search for tests validating recovery scenarios with different snapshot intervals
rg -l "DefaultSnapshotInterval|SnapshotInterval.*test" --type go

# Look for recovery-related documentation
fd -e md -e txt . | xargs rg "snapshot.*interval|recovery.*time" -i
```
58-58: Verify performance impact of increased snapshot threshold.

The snapshot threshold has been doubled from 500 to 1000 operations. While this can reduce I/O overhead, it may increase memory usage and recovery time, as more operations need to be replayed.
server/backend/sync/memory/pubsub.go (4)
94-99: Subscription initialization in `Upsert` is correct

The use of `Upsert` in the `Subscribe` method properly initializes a new `Subscriptions` instance when one doesn't exist and returns the existing one when it does. This effectively handles concurrent subscriptions per document key.
129-135: Correct handling of subscription removal

In the `Unsubscribe` method, after deleting a subscription, you check whether any subscriptions remain for the document. If none do, you remove the document key from `subscriptionsMap`. This is handled efficiently using the concurrent map's `Delete` method.

🧰 Tools
🪛 golangci-lint
[warning] 133-133: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
201-209: `ClientIDs` method accurately retrieves subscriber IDs

The `ClientIDs` method correctly fetches all subscriber IDs for a given document key by iterating over the subscriptions. This ensures that the list of active clients is always up to date.
162-172: Ensure thread-safe iteration over subscriptions in `Publish`

When iterating over `subs.Values()` in the `Publish` method, confirm that `Values()` provides a snapshot safe for concurrent iteration. Since `subs.internalMap` is a concurrent map, `Values()` should handle concurrent modifications appropriately.

You can verify the thread safety by reviewing the `cmap.Map` implementation:

✅ Verification successful
Let me gather more context about the Map implementation to ensure thread safety.
Thread-safe iteration confirmed in `Publish` method

The `Values()` method in `cmap.Map` is thread-safe as it:

- Acquires a read lock using `RLock()` before accessing the map
- Creates and returns a new slice with copied values
- Properly releases the lock using `defer RUnlock()`

This ensures that concurrent modifications won't affect the iteration in the `Publish` method, since it operates on a safe snapshot of the values.
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Confirm that cmap.Map's Values() method is thread-safe.
# Expected Result: The Values() method should handle concurrent access safely.

# Locate the Values() method in the cmap package and display its documentation
# or implementation details.
rg -A 5 'func \(.+\*Map\[.+\]\) Values\(' pkg/cmap/
```

Length of output: 288
Script:
```shell
#!/bin/bash
# Check the full implementation of the Map type and its methods
rg -A 20 'type Map\[' pkg/cmap/
```

Length of output: 774
Script:
```shell
#!/bin/bash
# Let's check the full implementation of the Map type to understand its thread safety
cat pkg/cmap/cmap.go
```

Length of output: 2707
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (4)
server/backend/sync/pubsub.go (1)
Line range hint 28-34: Consider documenting thread-safety guarantees

Since this type is part of a concurrent system, consider adding documentation comments that explicitly state the thread-safety guarantees of the Subscription struct and its methods. This will help users of this package understand the safe usage patterns.
server/backend/sync/memory/pubsub.go (2)
Line range hint 162-190: Consider adding error handling for edge cases.

The Publish method could benefit from additional error handling:
- Consider tracking failed deliveries and implementing a retry mechanism
- Add metrics for timeout occurrences to monitor system health
- Consider implementing a circuit breaker pattern for subscribers that consistently timeout
Would you like me to provide an example implementation of these improvements?
Event publish timeout should be configurable

The codebase shows various configurable timeouts in `config.sample.yml` for different components (MongoDB connections, webhooks, etc.), but the event publish timeout in pubsub is hardcoded. This timeout should also be configurable through the server configuration, for consistency with other timeout settings and to allow administrators to tune it based on their deployment needs.

- `server/backend/sync/memory/pubsub.go`: Add a configuration option for the publish timeout (currently hardcoded to 100ms)
- `server/config.sample.yml`: Add a new configuration field under the Backend section for the publish timeout

🔗 Analysis chain
Line range hint 179-189: Consider increasing the event publish timeout.

The current 100ms timeout for publishing events might be too aggressive, especially under high load or network latency. Consider making this timeout configurable or increasing it to a more conservative value (e.g., 500ms or 1s).

```diff
-case <-gotime.After(100 * gotime.Millisecond):
+// TODO: Make this configurable via server config
+case <-gotime.After(500 * gotime.Millisecond):
```
Let me search for more context around timeouts and configurations in the codebase.
Let me check the sample configuration file to see if there are any timeout-related configurations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for other timeout configurations in the codebase for consistency
rg -g '!vendor/*' -g '*.go' 'gotime\.After\(' -A 1
```

Length of output: 205
Script:
```shell
#!/bin/bash
# Search for timeout-related configurations and constants
rg -g '!vendor/*' -g '*.go' '(timeout|Timeout|TIMEOUT).*=.*(\d+).*(\*\s*)?((Milli)?[Ss]econd|ms|s)'

# Search for configuration structs that might contain timeout settings
ast-grep --pattern 'type $_ struct { $$$ Timeout $$$ $$$ }'

# Look for any server configuration files
fd -e yaml -e yml -e json config
```

Length of output: 752
Script:
```shell
#!/bin/bash
# Check the sample configuration file
cat server/config.sample.yml

# Look for any other timeout-related code in the pubsub file
rg -g '!vendor/*' 'timeout' server/backend/sync/memory/pubsub.go -B 2 -A 2
```

Length of output: 4148
pkg/cmap/cmap_test.go (1)
32-79: Consider adding edge cases to strengthen the test suite.

While the basic operations are well-tested, consider adding these edge cases:
- Test with empty string/zero value keys
- Test maximum capacity scenarios
- Test deletion of non-existent keys
- Test upsert with nil function handler
Would you like me to provide implementation examples for these test cases?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (10)
- pkg/cmap/cmap.go (1 hunks)
- pkg/cmap/cmap_test.go (1 hunks)
- server/backend/sync/memory/pubsub.go (4 hunks)
- server/backend/sync/pubsub.go (3 hunks)
- server/config.go (1 hunks)
- server/config.sample.yml (1 hunks)
- server/logging/logging.go (0 hunks)
- server/packs/packs.go (1 hunks)
- server/rpc/interceptors/default.go (1 hunks)
- server/rpc/interceptors/yorkie.go (1 hunks)
💤 Files with no reviewable changes (1)
- server/logging/logging.go
🚧 Files skipped from review as they are similar to previous changes (5)
- pkg/cmap/cmap.go
- server/config.go
- server/packs/packs.go
- server/rpc/interceptors/default.go
- server/rpc/interceptors/yorkie.go
🧰 Additional context used
🪛 golangci-lint
pkg/cmap/cmap_test.go
[high] 86-86: G115: integer overflow conversion int -> uint64
(gosec)
[warning] 111-111: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
[warning] 187-187: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
server/backend/sync/memory/pubsub.go
[warning] 133-133: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
🔇 Additional comments (10)
server/backend/sync/pubsub.go (4)
20-21: LGTM: Import added correctly

The sync/atomic import is appropriately added for the atomic operations introduced in this change.
32-32: Initialize atomic.Bool in the NewSubscription constructor

The atomic.Bool field needs proper initialization in the NewSubscription constructor.
70-73: LGTM: Thread-safe closure implementation

The use of CompareAndSwap ensures that the events channel is closed exactly once in a thread-safe manner, preventing a potential panic from multiple closures.
75-77: Verify usage patterns of the Closed() method

The implementation is correct, but ensure that callers handle the race condition where the subscription might be closed between checking Closed() and using the subscription.
server/config.sample.yml (1)
54-57: Verify the impact of increased snapshot thresholds.

The changes to `SnapshotThreshold` (500→1000) and `SnapshotInterval` (1000→3000) will reduce snapshot frequency. While this may improve performance, please consider:
- Memory impact from storing more changes between snapshots
- Recovery time implications due to fewer snapshots
- Data durability trade-offs
Consider:
- Adding comments in the config file explaining the performance vs. durability trade-offs
- Documenting recommended values for different deployment scenarios (e.g., memory-constrained vs. performance-focused)
- Running performance tests to validate the benefits of these increased thresholds
✅ Verification successful
Let me analyze the test files to understand the impact better.
The increased snapshot thresholds are well-justified and safe to use
Based on the design documentation and test implementation:
- The increased values (SnapshotThreshold: 1000, SnapshotInterval: 3000) are the new defaults in the codebase, as confirmed in `server/config.go`.
- The design document explains that snapshots are used to avoid synchronizing too many changes one by one, and the system ensures no changes are lost during synchronization.
- The implementation includes comprehensive tests that verify snapshot behavior with concurrent changes and text operations.
- The system maintains proper synchronization by tracking ServerSeq and SyncedSeqs, ensuring clients can always sync correctly regardless of snapshot frequency.
The memory and recovery concerns raised in the review are mitigated by the robust synchronization mechanism that ensures data consistency and durability.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for related documentation that might need updates
rg -g '!{*.sum,*.mod,vendor}' -i 'snapshotthreshold|snapshotinterval'

# Look for any performance tests that should be updated
rg -g '!{*.sum,*.mod,vendor}' -i 'TestSnapshot|BenchmarkSnapshot'
```

Length of output: 4529
Script:
```shell
#!/bin/bash
# Check snapshot_test.go implementation
cat test/integration/snapshot_test.go

# Check the design document for snapshot rationale
cat design/retention.md
```

Length of output: 11052
server/backend/sync/memory/pubsub.go (4)
32-40: LGTM! Well-structured concurrent implementation.

The change from a regular map to a concurrent map implementation aligns well with the PR's objective of improving concurrency handling. The type definition and constructor are clean and properly encapsulated.
48-50: Update method comment to follow Go conventions.

The comment for the exported method `Values` should start with the method name.
133-135: Rename unused parameter 'subs' to '_' in anonymous function.

The parameter `subs` in the Delete callback is unused.

🧰 Tools
🪛 golangci-lint
[warning] 133-133: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
70-77: LGTM! Solid concurrent implementation.

The transition to using `cmap` for managing document-specific mutexes is well-implemented and achieves the PR's objective of improving concurrency. The implementation is thread-safe and maintains proper synchronization.

pkg/cmap/cmap_test.go (1)
150-158: Add timeout to prevent potential infinite loop.

The previous review comment about adding a timeout to prevent infinite loops is still valid. Please implement the suggested timeout mechanism.
Go Benchmark
| Benchmark suite | Current: b82acca | Previous: 0e820e7 | Ratio |
|---|---|---|---|
| BenchmarkDocument/constructor_test | 1489 ns/op, 1337 B/op, 24 allocs/op | 1495 ns/op, 1337 B/op, 24 allocs/op | 1.00 |
| BenchmarkDocument/status_test | 954.3 ns/op, 1305 B/op, 22 allocs/op | 958.9 ns/op, 1305 B/op, 22 allocs/op | 1.00 |
| BenchmarkDocument/equals_test | 7793 ns/op, 7529 B/op, 134 allocs/op | 7745 ns/op, 7529 B/op, 134 allocs/op | 1.01 |
| BenchmarkDocument/nested_update_test | 16938 ns/op, 12395 B/op, 264 allocs/op | 17147 ns/op, 12395 B/op, 264 allocs/op | 0.99 |
| BenchmarkDocument/delete_test | 25720 ns/op, 15924 B/op, 347 allocs/op | 24031 ns/op, 15923 B/op, 347 allocs/op | 1.07 |
| BenchmarkDocument/object_test | 8729 ns/op, 7073 B/op, 122 allocs/op | 9698 ns/op, 7073 B/op, 122 allocs/op | 0.90 |
| BenchmarkDocument/array_test | 30458 ns/op, 12203 B/op, 278 allocs/op | 29731 ns/op, 12203 B/op, 278 allocs/op | 1.02 |
| BenchmarkDocument/text_test | 31912 ns/op, 15324 B/op, 492 allocs/op | 31806 ns/op, 15326 B/op, 492 allocs/op | 1.00 |
| BenchmarkDocument/text_composition_test | 30581 ns/op, 18718 B/op, 504 allocs/op | 30359 ns/op, 18716 B/op, 504 allocs/op | 1.01 |
| BenchmarkDocument/rich_text_test | 84349 ns/op, 40181 B/op, 1183 allocs/op | 83790 ns/op, 40181 B/op, 1183 allocs/op | 1.01 |
| BenchmarkDocument/counter_test | 18596 ns/op, 11874 B/op, 258 allocs/op | 18495 ns/op, 11875 B/op, 258 allocs/op | 1.01 |
| BenchmarkDocument/text_edit_gc_100 | 1325162 ns/op, 872555 B/op, 17282 allocs/op | 1327603 ns/op, 872564 B/op, 17282 allocs/op | 1.00 |
| BenchmarkDocument/text_edit_gc_1000 | 50642929 ns/op, 50547546 B/op, 186740 allocs/op | 50540616 ns/op, 50547506 B/op, 186742 allocs/op | 1.00 |
| BenchmarkDocument/text_split_gc_100 | 1936612 ns/op, 1589114 B/op, 15950 allocs/op | 1928853 ns/op, 1589046 B/op, 15951 allocs/op | 1.00 |
| BenchmarkDocument/text_split_gc_1000 | 115631552 ns/op, 141483028 B/op, 186148 allocs/op | 116015213 ns/op, 141482178 B/op, 186148 allocs/op | 1.00 |
| BenchmarkDocument/text_delete_all_10000 | 16692909 ns/op, 10212315 B/op, 55681 allocs/op | 15507955 ns/op, 10216022 B/op, 55685 allocs/op | 1.08 |
| BenchmarkDocument/text_delete_all_100000 | 289940612 ns/op, 142980312 B/op, 561734 allocs/op | 297897892 ns/op, 142990452 B/op, 561728 allocs/op | 0.97 |
| BenchmarkDocument/text_100 | 218176 ns/op, 120491 B/op, 5182 allocs/op | 228747 ns/op, 120491 B/op, 5182 allocs/op | 0.95 |
| BenchmarkDocument/text_1000 | 2368809 ns/op, 1171277 B/op, 51086 allocs/op | 2473427 ns/op, 1171280 B/op, 51086 allocs/op | 0.96 |
| BenchmarkDocument/array_1000 | 1218438 ns/op, 1091607 B/op, 11833 allocs/op | 1282049 ns/op, 1091671 B/op, 11833 allocs/op | 0.95 |
| BenchmarkDocument/array_10000 | 13228530 ns/op, 9800641 B/op, 120300 allocs/op | 13215334 ns/op, 9799207 B/op, 120294 allocs/op | 1.00 |
| BenchmarkDocument/array_gc_100 | 147306 ns/op, 133282 B/op, 1266 allocs/op | 156392 ns/op, 133293 B/op, 1267 allocs/op | 0.94 |
BenchmarkDocument/array_gc_1000 |
1398114 ns/op 1159815 B/op 12883 allocs/op |
1463856 ns/op 1159746 B/op 12883 allocs/op |
0.96 |
BenchmarkDocument/array_gc_1000 - ns/op |
1398114 ns/op |
1463856 ns/op |
0.96 |
BenchmarkDocument/array_gc_1000 - B/op |
1159815 B/op |
1159746 B/op |
1.00 |
BenchmarkDocument/array_gc_1000 - allocs/op |
12883 allocs/op |
12883 allocs/op |
1 |
BenchmarkDocument/counter_1000 |
200645 ns/op 193337 B/op 5773 allocs/op |
215980 ns/op 193336 B/op 5773 allocs/op |
0.93 |
BenchmarkDocument/counter_1000 - ns/op |
200645 ns/op |
215980 ns/op |
0.93 |
BenchmarkDocument/counter_1000 - B/op |
193337 B/op |
193336 B/op |
1.00 |
BenchmarkDocument/counter_1000 - allocs/op |
5773 allocs/op |
5773 allocs/op |
1 |
BenchmarkDocument/counter_10000 |
2154791 ns/op 2088251 B/op 59780 allocs/op |
2246746 ns/op 2088270 B/op 59780 allocs/op |
0.96 |
BenchmarkDocument/counter_10000 - ns/op |
2154791 ns/op |
2246746 ns/op |
0.96 |
BenchmarkDocument/counter_10000 - B/op |
2088251 B/op |
2088270 B/op |
1.00 |
BenchmarkDocument/counter_10000 - allocs/op |
59780 allocs/op |
59780 allocs/op |
1 |
BenchmarkDocument/object_1000 |
1375224 ns/op 1428324 B/op 9851 allocs/op |
1461406 ns/op 1428325 B/op 9851 allocs/op |
0.94 |
BenchmarkDocument/object_1000 - ns/op |
1375224 ns/op |
1461406 ns/op |
0.94 |
BenchmarkDocument/object_1000 - B/op |
1428324 B/op |
1428325 B/op |
1.00 |
BenchmarkDocument/object_1000 - allocs/op |
9851 allocs/op |
9851 allocs/op |
1 |
BenchmarkDocument/object_10000 |
15359342 ns/op 12169405 B/op 100574 allocs/op |
15438010 ns/op 12167664 B/op 100569 allocs/op |
0.99 |
BenchmarkDocument/object_10000 - ns/op |
15359342 ns/op |
15438010 ns/op |
0.99 |
BenchmarkDocument/object_10000 - B/op |
12169405 B/op |
12167664 B/op |
1.00 |
BenchmarkDocument/object_10000 - allocs/op |
100574 allocs/op |
100569 allocs/op |
1.00 |
BenchmarkDocument/tree_100 |
1018167 ns/op 943956 B/op 6103 allocs/op |
1076353 ns/op 943959 B/op 6103 allocs/op |
0.95 |
BenchmarkDocument/tree_100 - ns/op |
1018167 ns/op |
1076353 ns/op |
0.95 |
BenchmarkDocument/tree_100 - B/op |
943956 B/op |
943959 B/op |
1.00 |
BenchmarkDocument/tree_100 - allocs/op |
6103 allocs/op |
6103 allocs/op |
1 |
BenchmarkDocument/tree_1000 |
72820803 ns/op 86460578 B/op 60117 allocs/op |
80019784 ns/op 86460131 B/op 60117 allocs/op |
0.91 |
BenchmarkDocument/tree_1000 - ns/op |
72820803 ns/op |
80019784 ns/op |
0.91 |
BenchmarkDocument/tree_1000 - B/op |
86460578 B/op |
86460131 B/op |
1.00 |
BenchmarkDocument/tree_1000 - allocs/op |
60117 allocs/op |
60117 allocs/op |
1 |
BenchmarkDocument/tree_10000 |
9333987892 ns/op 8580661104 B/op 600213 allocs/op |
9742284446 ns/op 8580671312 B/op 600247 allocs/op |
0.96 |
BenchmarkDocument/tree_10000 - ns/op |
9333987892 ns/op |
9742284446 ns/op |
0.96 |
BenchmarkDocument/tree_10000 - B/op |
8580661104 B/op |
8580671312 B/op |
1.00 |
BenchmarkDocument/tree_10000 - allocs/op |
600213 allocs/op |
600247 allocs/op |
1.00 |
BenchmarkDocument/tree_delete_all_1000 |
73768014 ns/op 87510358 B/op 75270 allocs/op |
80514715 ns/op 87509822 B/op 75271 allocs/op |
0.92 |
BenchmarkDocument/tree_delete_all_1000 - ns/op |
73768014 ns/op |
80514715 ns/op |
0.92 |
BenchmarkDocument/tree_delete_all_1000 - B/op |
87510358 B/op |
87509822 B/op |
1.00 |
BenchmarkDocument/tree_delete_all_1000 - allocs/op |
75270 allocs/op |
75271 allocs/op |
1.00 |
BenchmarkDocument/tree_edit_gc_100 |
3762754 ns/op 4147263 B/op 15147 allocs/op |
3959259 ns/op 4147267 B/op 15147 allocs/op |
0.95 |
BenchmarkDocument/tree_edit_gc_100 - ns/op |
3762754 ns/op |
3959259 ns/op |
0.95 |
BenchmarkDocument/tree_edit_gc_100 - B/op |
4147263 B/op |
4147267 B/op |
1.00 |
BenchmarkDocument/tree_edit_gc_100 - allocs/op |
15147 allocs/op |
15147 allocs/op |
1 |
BenchmarkDocument/tree_edit_gc_1000 |
295898020 ns/op 383745112 B/op 154851 allocs/op |
325942172 ns/op 383745032 B/op 154860 allocs/op |
0.91 |
BenchmarkDocument/tree_edit_gc_1000 - ns/op |
295898020 ns/op |
325942172 ns/op |
0.91 |
BenchmarkDocument/tree_edit_gc_1000 - B/op |
383745112 B/op |
383745032 B/op |
1.00 |
BenchmarkDocument/tree_edit_gc_1000 - allocs/op |
154851 allocs/op |
154860 allocs/op |
1.00 |
BenchmarkDocument/tree_split_gc_100 |
2492008 ns/op 2413031 B/op 11131 allocs/op |
2656182 ns/op 2413103 B/op 11131 allocs/op |
0.94 |
BenchmarkDocument/tree_split_gc_100 - ns/op |
2492008 ns/op |
2656182 ns/op |
0.94 |
BenchmarkDocument/tree_split_gc_100 - B/op |
2413031 B/op |
2413103 B/op |
1.00 |
BenchmarkDocument/tree_split_gc_100 - allocs/op |
11131 allocs/op |
11131 allocs/op |
1 |
BenchmarkDocument/tree_split_gc_1000 |
184333765 ns/op 222253424 B/op 122002 allocs/op |
197448676 ns/op 222251418 B/op 122005 allocs/op |
0.93 |
BenchmarkDocument/tree_split_gc_1000 - ns/op |
184333765 ns/op |
197448676 ns/op |
0.93 |
BenchmarkDocument/tree_split_gc_1000 - B/op |
222253424 B/op |
222251418 B/op |
1.00 |
BenchmarkDocument/tree_split_gc_1000 - allocs/op |
122002 allocs/op |
122005 allocs/op |
1.00 |
BenchmarkRPC/client_to_server |
423074960 ns/op 20117770 B/op 228889 allocs/op |
422627832 ns/op 20390970 B/op 228888 allocs/op |
1.00 |
BenchmarkRPC/client_to_server - ns/op |
423074960 ns/op |
422627832 ns/op |
1.00 |
BenchmarkRPC/client_to_server - B/op |
20117770 B/op |
20390970 B/op |
0.99 |
BenchmarkRPC/client_to_server - allocs/op |
228889 allocs/op |
228888 allocs/op |
1.00 |
BenchmarkRPC/client_to_client_via_server |
787222211 ns/op 41632304 B/op 481456 allocs/op |
779765374 ns/op 42446712 B/op 481573 allocs/op |
1.01 |
BenchmarkRPC/client_to_client_via_server - ns/op |
787222211 ns/op |
779765374 ns/op |
1.01 |
BenchmarkRPC/client_to_client_via_server - B/op |
41632304 B/op |
42446712 B/op |
0.98 |
BenchmarkRPC/client_to_client_via_server - allocs/op |
481456 allocs/op |
481573 allocs/op |
1.00 |
BenchmarkRPC/attach_large_document |
2265517653 ns/op 3036269968 B/op 14894 allocs/op |
2059624724 ns/op 3044348528 B/op 14967 allocs/op |
1.10 |
BenchmarkRPC/attach_large_document - ns/op |
2265517653 ns/op |
2059624724 ns/op |
1.10 |
BenchmarkRPC/attach_large_document - B/op |
3036269968 B/op |
3044348528 B/op |
1.00 |
BenchmarkRPC/attach_large_document - allocs/op |
14894 allocs/op |
14967 allocs/op |
1.00 |
BenchmarkRPC/adminCli_to_server |
526421220 ns/op 35964048 B/op 289543 allocs/op |
521526794 ns/op 35957852 B/op 289567 allocs/op |
1.01 |
BenchmarkRPC/adminCli_to_server - ns/op |
526421220 ns/op |
521526794 ns/op |
1.01 |
BenchmarkRPC/adminCli_to_server - B/op |
35964048 B/op |
35957852 B/op |
1.00 |
BenchmarkRPC/adminCli_to_server - allocs/op |
289543 allocs/op |
289567 allocs/op |
1.00 |
BenchmarkLocker |
64.59 ns/op 16 B/op 1 allocs/op |
65.21 ns/op 16 B/op 1 allocs/op |
0.99 |
BenchmarkLocker - ns/op |
64.59 ns/op |
65.21 ns/op |
0.99 |
BenchmarkLocker - B/op |
16 B/op |
16 B/op |
1 |
BenchmarkLocker - allocs/op |
1 allocs/op |
1 allocs/op |
1 |
BenchmarkLockerParallel |
39.48 ns/op 0 B/op 0 allocs/op |
39.84 ns/op 0 B/op 0 allocs/op |
0.99 |
BenchmarkLockerParallel - ns/op |
39.48 ns/op |
39.84 ns/op |
0.99 |
BenchmarkLockerParallel - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkLockerParallel - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkLockerMoreKeys |
151.7 ns/op 15 B/op 0 allocs/op |
158.3 ns/op 15 B/op 0 allocs/op |
0.96 |
BenchmarkLockerMoreKeys - ns/op |
151.7 ns/op |
158.3 ns/op |
0.96 |
BenchmarkLockerMoreKeys - B/op |
15 B/op |
15 B/op |
1 |
BenchmarkLockerMoreKeys - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkChange/Push_10_Changes |
4432544 ns/op 143192 B/op 1570 allocs/op |
4373326 ns/op 143526 B/op 1570 allocs/op |
1.01 |
BenchmarkChange/Push_10_Changes - ns/op |
4432544 ns/op |
4373326 ns/op |
1.01 |
BenchmarkChange/Push_10_Changes - B/op |
143192 B/op |
143526 B/op |
1.00 |
BenchmarkChange/Push_10_Changes - allocs/op |
1570 allocs/op |
1570 allocs/op |
1 |
BenchmarkChange/Push_100_Changes |
15881880 ns/op 700277 B/op 8186 allocs/op |
15875665 ns/op 698091 B/op 8188 allocs/op |
1.00 |
BenchmarkChange/Push_100_Changes - ns/op |
15881880 ns/op |
15875665 ns/op |
1.00 |
BenchmarkChange/Push_100_Changes - B/op |
700277 B/op |
698091 B/op |
1.00 |
BenchmarkChange/Push_100_Changes - allocs/op |
8186 allocs/op |
8188 allocs/op |
1.00 |
BenchmarkChange/Push_1000_Changes |
126841697 ns/op 6568828 B/op 77154 allocs/op |
126600791 ns/op 6808710 B/op 77157 allocs/op |
1.00 |
BenchmarkChange/Push_1000_Changes - ns/op |
126841697 ns/op |
126600791 ns/op |
1.00 |
BenchmarkChange/Push_1000_Changes - B/op |
6568828 B/op |
6808710 B/op |
0.96 |
BenchmarkChange/Push_1000_Changes - allocs/op |
77154 allocs/op |
77157 allocs/op |
1.00 |
BenchmarkChange/Pull_10_Changes |
3645675 ns/op 123584 B/op 1402 allocs/op |
3577763 ns/op 124220 B/op 1403 allocs/op |
1.02 |
BenchmarkChange/Pull_10_Changes - ns/op |
3645675 ns/op |
3577763 ns/op |
1.02 |
BenchmarkChange/Pull_10_Changes - B/op |
123584 B/op |
124220 B/op |
0.99 |
BenchmarkChange/Pull_10_Changes - allocs/op |
1402 allocs/op |
1403 allocs/op |
1.00 |
BenchmarkChange/Pull_100_Changes |
5145760 ns/op 350880 B/op 4947 allocs/op |
5077550 ns/op 352349 B/op 4947 allocs/op |
1.01 |
BenchmarkChange/Pull_100_Changes - ns/op |
5145760 ns/op |
5077550 ns/op |
1.01 |
BenchmarkChange/Pull_100_Changes - B/op |
350880 B/op |
352349 B/op |
1.00 |
BenchmarkChange/Pull_100_Changes - allocs/op |
4947 allocs/op |
4947 allocs/op |
1 |
BenchmarkChange/Pull_1000_Changes |
10294687 ns/op 2220351 B/op 42667 allocs/op |
10130236 ns/op 2221996 B/op 42666 allocs/op |
1.02 |
BenchmarkChange/Pull_1000_Changes - ns/op |
10294687 ns/op |
10130236 ns/op |
1.02 |
BenchmarkChange/Pull_1000_Changes - B/op |
2220351 B/op |
2221996 B/op |
1.00 |
BenchmarkChange/Pull_1000_Changes - allocs/op |
42667 allocs/op |
42666 allocs/op |
1.00 |
BenchmarkSnapshot/Push_3KB_snapshot |
18035709 ns/op 812353 B/op 8189 allocs/op |
17952455 ns/op 813945 B/op 8187 allocs/op |
1.00 |
BenchmarkSnapshot/Push_3KB_snapshot - ns/op |
18035709 ns/op |
17952455 ns/op |
1.00 |
BenchmarkSnapshot/Push_3KB_snapshot - B/op |
812353 B/op |
813945 B/op |
1.00 |
BenchmarkSnapshot/Push_3KB_snapshot - allocs/op |
8189 allocs/op |
8187 allocs/op |
1.00 |
BenchmarkSnapshot/Push_30KB_snapshot |
128826293 ns/op 7126599 B/op 78468 allocs/op |
130190260 ns/op 7279284 B/op 78731 allocs/op |
0.99 |
BenchmarkSnapshot/Push_30KB_snapshot - ns/op |
128826293 ns/op |
130190260 ns/op |
0.99 |
BenchmarkSnapshot/Push_30KB_snapshot - B/op |
7126599 B/op |
7279284 B/op |
0.98 |
BenchmarkSnapshot/Push_30KB_snapshot - allocs/op |
78468 allocs/op |
78731 allocs/op |
1.00 |
BenchmarkSnapshot/Pull_3KB_snapshot |
7153561 ns/op 1137793 B/op 19417 allocs/op |
7030494 ns/op 1139543 B/op 19418 allocs/op |
1.02 |
BenchmarkSnapshot/Pull_3KB_snapshot - ns/op |
7153561 ns/op |
7030494 ns/op |
1.02 |
BenchmarkSnapshot/Pull_3KB_snapshot - B/op |
1137793 B/op |
1139543 B/op |
1.00 |
BenchmarkSnapshot/Pull_3KB_snapshot - allocs/op |
19417 allocs/op |
19418 allocs/op |
1.00 |
BenchmarkSnapshot/Pull_30KB_snapshot |
17468942 ns/op 9295109 B/op 187554 allocs/op |
18055765 ns/op 9297912 B/op 187556 allocs/op |
0.97 |
BenchmarkSnapshot/Pull_30KB_snapshot - ns/op |
17468942 ns/op |
18055765 ns/op |
0.97 |
BenchmarkSnapshot/Pull_30KB_snapshot - B/op |
9295109 B/op |
9297912 B/op |
1.00 |
BenchmarkSnapshot/Pull_30KB_snapshot - allocs/op |
187554 allocs/op |
187556 allocs/op |
1.00 |
BenchmarkSplayTree/stress_test_100000 |
0.1987 ns/op 0 B/op 0 allocs/op |
0.1888 ns/op 0 B/op 0 allocs/op |
1.05 |
BenchmarkSplayTree/stress_test_100000 - ns/op |
0.1987 ns/op |
0.1888 ns/op |
1.05 |
BenchmarkSplayTree/stress_test_100000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/stress_test_100000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/stress_test_200000 |
0.3889 ns/op 0 B/op 0 allocs/op |
0.3875 ns/op 0 B/op 0 allocs/op |
1.00 |
BenchmarkSplayTree/stress_test_200000 - ns/op |
0.3889 ns/op |
0.3875 ns/op |
1.00 |
BenchmarkSplayTree/stress_test_200000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/stress_test_200000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/stress_test_300000 |
0.5884 ns/op 0 B/op 0 allocs/op |
0.5859 ns/op 0 B/op 0 allocs/op |
1.00 |
BenchmarkSplayTree/stress_test_300000 - ns/op |
0.5884 ns/op |
0.5859 ns/op |
1.00 |
BenchmarkSplayTree/stress_test_300000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/stress_test_300000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/random_access_100000 |
0.01254 ns/op 0 B/op 0 allocs/op |
0.01642 ns/op 0 B/op 0 allocs/op |
0.76 |
BenchmarkSplayTree/random_access_100000 - ns/op |
0.01254 ns/op |
0.01642 ns/op |
0.76 |
BenchmarkSplayTree/random_access_100000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/random_access_100000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/random_access_200000 |
0.02356 ns/op 0 B/op 0 allocs/op |
0.02319 ns/op 0 B/op 0 allocs/op |
1.02 |
BenchmarkSplayTree/random_access_200000 - ns/op |
0.02356 ns/op |
0.02319 ns/op |
1.02 |
BenchmarkSplayTree/random_access_200000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/random_access_200000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/random_access_300000 |
0.03588 ns/op 0 B/op 0 allocs/op |
0.03513 ns/op 0 B/op 0 allocs/op |
1.02 |
BenchmarkSplayTree/random_access_300000 - ns/op |
0.03588 ns/op |
0.03513 ns/op |
1.02 |
BenchmarkSplayTree/random_access_300000 - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/random_access_300000 - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSplayTree/editing_trace_bench |
0.002083 ns/op 0 B/op 0 allocs/op |
0.001784 ns/op 0 B/op 0 allocs/op |
1.17 |
BenchmarkSplayTree/editing_trace_bench - ns/op |
0.002083 ns/op |
0.001784 ns/op |
1.17 |
BenchmarkSplayTree/editing_trace_bench - B/op |
0 B/op |
0 B/op |
1 |
BenchmarkSplayTree/editing_trace_bench - allocs/op |
0 allocs/op |
0 allocs/op |
1 |
BenchmarkSync/memory_sync_10_test |
6707 ns/op 1286 B/op 38 allocs/op |
6872 ns/op 1286 B/op 38 allocs/op |
0.98 |
BenchmarkSync/memory_sync_10_test - ns/op |
6707 ns/op |
6872 ns/op |
0.98 |
BenchmarkSync/memory_sync_10_test - B/op |
1286 B/op |
1286 B/op |
1 |
BenchmarkSync/memory_sync_10_test - allocs/op |
38 allocs/op |
38 allocs/op |
1 |
BenchmarkSync/memory_sync_100_test |
51157 ns/op 8645 B/op 273 allocs/op |
51754 ns/op 8633 B/op 272 allocs/op |
0.99 |
BenchmarkSync/memory_sync_100_test - ns/op |
51157 ns/op |
51754 ns/op |
0.99 |
BenchmarkSync/memory_sync_100_test - B/op |
8645 B/op |
8633 B/op |
1.00 |
BenchmarkSync/memory_sync_100_test - allocs/op |
273 allocs/op |
272 allocs/op |
1.00 |
BenchmarkSync/memory_sync_1000_test |
584077 ns/op 74490 B/op 2129 allocs/op |
584212 ns/op 74371 B/op 2124 allocs/op |
1.00 |
BenchmarkSync/memory_sync_1000_test - ns/op |
584077 ns/op |
584212 ns/op |
1.00 |
BenchmarkSync/memory_sync_1000_test - B/op |
74490 B/op |
74371 B/op |
1.00 |
BenchmarkSync/memory_sync_1000_test - allocs/op |
2129 allocs/op |
2124 allocs/op |
1.00 |
BenchmarkSync/memory_sync_10000_test |
6485338 ns/op 755685 B/op 21148 allocs/op |
7032975 ns/op 736254 B/op 20315 allocs/op |
0.92 |
BenchmarkSync/memory_sync_10000_test - ns/op |
6485338 ns/op |
7032975 ns/op |
0.92 |
BenchmarkSync/memory_sync_10000_test - B/op |
755685 B/op |
736254 B/op |
1.03 |
BenchmarkSync/memory_sync_10000_test - allocs/op |
21148 allocs/op |
20315 allocs/op |
1.04 |
BenchmarkTextEditing |
5858181142 ns/op 3982603120 B/op 20647692 allocs/op |
5697854160 ns/op 3982591504 B/op 20647690 allocs/op |
1.03 |
BenchmarkTextEditing - ns/op |
5858181142 ns/op |
5697854160 ns/op |
1.03 |
BenchmarkTextEditing - B/op |
3982603120 B/op |
3982591504 B/op |
1.00 |
BenchmarkTextEditing - allocs/op |
20647692 allocs/op |
20647690 allocs/op |
1.00 |
BenchmarkTree/10000_vertices_to_protobuf |
3494125 ns/op 6262997 B/op 70025 allocs/op |
3396128 ns/op 6262975 B/op 70025 allocs/op |
1.03 |
BenchmarkTree/10000_vertices_to_protobuf - ns/op |
3494125 ns/op |
3396128 ns/op |
1.03 |
BenchmarkTree/10000_vertices_to_protobuf - B/op |
6262997 B/op |
6262975 B/op |
1.00 |
BenchmarkTree/10000_vertices_to_protobuf - allocs/op |
70025 allocs/op |
70025 allocs/op |
1 |
BenchmarkTree/10000_vertices_from_protobuf |
160237023 ns/op 442172597 B/op 290039 allocs/op |
152400489 ns/op 442171612 B/op 290052 allocs/op |
1.05 |
BenchmarkTree/10000_vertices_from_protobuf - ns/op |
160237023 ns/op |
152400489 ns/op |
1.05 |
BenchmarkTree/10000_vertices_from_protobuf - B/op |
442172597 B/op |
442171612 B/op |
1.00 |
BenchmarkTree/10000_vertices_from_protobuf - allocs/op |
290039 allocs/op |
290052 allocs/op |
1.00 |
BenchmarkTree/20000_vertices_to_protobuf |
7824034 ns/op 12716929 B/op 140028 allocs/op |
7567709 ns/op 12716968 B/op 140028 allocs/op |
1.03 |
BenchmarkTree/20000_vertices_to_protobuf - ns/op |
7824034 ns/op |
7567709 ns/op |
1.03 |
BenchmarkTree/20000_vertices_to_protobuf - B/op |
12716929 B/op |
12716968 B/op |
1.00 |
BenchmarkTree/20000_vertices_to_protobuf - allocs/op |
140028 allocs/op |
140028 allocs/op |
1 |
BenchmarkTree/20000_vertices_from_protobuf |
689876606 ns/op 1697276832 B/op 580091 allocs/op |
680598159 ns/op 1697272760 B/op 580091 allocs/op |
1.01 |
BenchmarkTree/20000_vertices_from_protobuf - ns/op |
689876606 ns/op |
680598159 ns/op |
1.01 |
BenchmarkTree/20000_vertices_from_protobuf - B/op |
1697276832 B/op |
1697272760 B/op |
1.00 |
BenchmarkTree/20000_vertices_from_protobuf - allocs/op |
580091 allocs/op |
580091 allocs/op |
1 |
BenchmarkTree/30000_vertices_to_protobuf |
12305456 ns/op 19318256 B/op 210030 allocs/op |
12002060 ns/op 19324629 B/op 210030 allocs/op |
1.03 |
BenchmarkTree/30000_vertices_to_protobuf - ns/op |
12305456 ns/op |
12002060 ns/op |
1.03 |
BenchmarkTree/30000_vertices_to_protobuf - B/op |
19318256 B/op |
19324629 B/op |
1.00 |
BenchmarkTree/30000_vertices_to_protobuf - allocs/op |
210030 allocs/op |
210030 allocs/op |
1 |
BenchmarkTree/30000_vertices_from_protobuf |
1668075716 ns/op 3752052392 B/op 870049 allocs/op |
1650342928 ns/op 3752052408 B/op 870051 allocs/op |
1.01 |
BenchmarkTree/30000_vertices_from_protobuf - ns/op |
1668075716 ns/op |
1650342928 ns/op |
1.01 |
BenchmarkTree/30000_vertices_from_protobuf - B/op |
3752052392 B/op |
3752052408 B/op |
1.00 |
BenchmarkTree/30000_vertices_from_protobuf - allocs/op |
870049 allocs/op |
870051 allocs/op |
1.00 |
This comment was automatically generated by workflow using github-action-benchmark.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

```
@@ Coverage Diff @@
##             main    #1051      +/-   ##
==========================================
+ Coverage   46.44%   46.66%   +0.22%
==========================================
  Files          81       82       +1
  Lines       11887    11959      +72
==========================================
+ Hits         5521     5581      +60
- Misses       5806     5816      +10
- Partials      560      562       +2
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (5)
server/backend/sync/memory/pubsub.go (4)
31-39
: LGTM! Consider enhancing type documentation. The transition to `cmap.Map` for concurrent operations is a good improvement. Consider adding more detailed documentation for the exported `Subscriptions` type to explain its thread-safety guarantees and typical usage patterns.

Add documentation like:

```diff
-// Subscriptions is a map of Subscriptions.
+// Subscriptions provides thread-safe operations for managing document subscriptions.
+// It internally uses a concurrent map to ensure safe access from multiple goroutines.
 type Subscriptions struct {
```
42-44
: LGTM! Consider documenting thread-safety. The Set method correctly uses cmap operations. Consider documenting its thread-safety guarantees.

Add documentation like:

```diff
-// Set adds the given subscription.
+// Set adds the given subscription in a thread-safe manner.
```
47-49
: Update method documentation for clarity and thread-safety. The Values method implementation is correct, but its documentation should be more descriptive.

```diff
-// Values returns the values of these subscriptions.
+// Values returns a slice containing all current subscriptions in a thread-safe manner.
+// Note: The returned slice is a snapshot and may not reflect subsequent modifications.
```
93-98
: Consider handling potential subscription limit. The Upsert operation creates new subscriptions without any limit check. Consider implementing a maximum limit per document to prevent resource exhaustion.

Consider adding a limit check before creating new subscriptions:

```go
const maxSubscriptionsPerDoc = 1000 // Adjust based on requirements

subs := m.subscriptionsMap.Upsert(docKey, func(exists bool, subs *Subscriptions) *Subscriptions {
	if !exists {
		return newSubscriptions()
	}
	if subs.Len() >= maxSubscriptionsPerDoc {
		// Handle limit exceeded
		return subs
	}
	return subs
})
```

pkg/cmap/cmap_test.go (1)
32-79
: Consider adding edge cases to strengthen test coverage. While the basic operations are well tested, consider adding these scenarios:
- Zero/nil value handling
- Large value handling
- Concurrent upsert operations
- Delete of non-existent keys
Would you like me to help generate additional test cases?
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (10)
- pkg/cmap/cmap.go (1 hunks)
- pkg/cmap/cmap_test.go (1 hunks)
- server/backend/sync/memory/pubsub.go (4 hunks)
- server/backend/sync/pubsub.go (4 hunks)
- server/config.go (1 hunks)
- server/config.sample.yml (1 hunks)
- server/logging/logging.go (0 hunks)
- server/packs/packs.go (1 hunks)
- server/rpc/interceptors/default.go (1 hunks)
- server/rpc/interceptors/yorkie.go (1 hunks)
💤 Files with no reviewable changes (1)
- server/logging/logging.go
🚧 Files skipped from review as they are similar to previous changes (6)
- pkg/cmap/cmap.go
- server/config.go
- server/config.sample.yml
- server/packs/packs.go
- server/rpc/interceptors/default.go
- server/rpc/interceptors/yorkie.go
🧰 Additional context used
🪛 golangci-lint
pkg/cmap/cmap_test.go
[high] 86-86: G115: integer overflow conversion int -> uint64
(gosec)
[warning] 111-111: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
[warning] 187-187: unused-parameter: parameter 'value' seems to be unused, consider removing or renaming it as _
(revive)
server/backend/sync/memory/pubsub.go
[warning] 132-132: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
🔇 Additional comments (5)
server/backend/sync/pubsub.go (3)
20-22
: LGTM! Clean import additions
The new imports are necessary for mutex support and time operations. Good use of the `gotime` alias to avoid confusion with the local `time` package.
73-80
: LGTM! Thread-safe channel closure
The implementation correctly prevents race conditions and double-close of the channel. This would be even more efficient with atomic.Bool as suggested above.
82-90
: LGTM! Thread-safe event publishing with timeout
The implementation correctly handles concurrent publishing and prevents deadlocks when subscribers are slow.
server/backend/sync/memory/pubsub.go (2)
132-134
:⚠️ Potential issue — Rename unused parameter to underscore.
The unused parameter in the Delete callback should be renamed.

```diff
-m.subscriptionsMap.Delete(docKey, func(subs *Subscriptions, exists bool) bool {
+m.subscriptionsMap.Delete(docKey, func(_ *Subscriptions, exists bool) bool {
```

Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint
[warning] 132-132: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
177-184
: 🛠️ Refactor suggestion — Enhance error handling in Publish.
The current implementation logs a warning when publish fails but continues with other subscribers. Consider if failed publishes should affect the overall operation or if retries should be implemented.
Let's verify if there are any existing retry mechanisms in the codebase:
Consider implementing a retry mechanism:
```go
if ok := sub.Publish(event); !ok {
	// Implement exponential backoff retry. Note: assign with "=" (not ":=")
	// inside the loop so the outer "ok" is updated and the final check
	// reflects whether a retry succeeded.
	for retries := 0; retries < maxRetries; retries++ {
		time.Sleep(backoff(retries))
		if ok = sub.Publish(event); ok {
			break
		}
	}
	if !ok {
		logging.From(ctx).Warnf(
			`Publish(%s,%s) to %s failed after retries`,
			docKey, publisherID, sub.Subscriber(),
		)
	}
}
```
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (5)
pkg/cmap/cmap.go (3)
17-18
: Enhance package documentation. While the package comment exists, it could be more descriptive about the package's purpose and thread-safety guarantees. Consider expanding it to:

```go
// Package cmap provides a thread-safe concurrent map implementation using sync.RWMutex.
// It supports generic key-value pairs where keys must be comparable.
```
24-28
: Enhance Map struct documentation. The struct comment should provide more details about thread safety and usage. Consider:

```go
// Map is a thread-safe map implementation protected by sync.RWMutex.
// It provides concurrent access to its contents through various methods
// that handle appropriate locking and unlocking.
```
84-84
: Add periods to method comments for consistency. Method comments should end with periods for consistency with Go documentation standards:

```diff
-// Has checks if a key exists in the map
+// Has checks if a key exists in the map.
-// Len returns the number of items in the map
+// Len returns the number of items in the map.
-// Keys returns a slice of all keys in the map
+// Keys returns a slice of all keys in the map.
-// Values returns a slice of all values in the map
+// Values returns a slice of all values in the map.
```

Also applies to: 93-93, 101-101, 113-113
pkg/cmap/cmap_test.go (1)
89-213
: Add cleanup to prevent resource leaks in tests. The concurrent tests create large maps and spawn multiple goroutines but don't clean up resources. While Go's garbage collector will eventually handle this, it's best practice to clean up test resources explicitly.
Consider adding cleanup in each test:
```diff
 t.Run("concurrent access", func(t *testing.T) {
 	m := cmap.New[int, int]()
+	t.Cleanup(func() {
+		// Clear the map to help GC (registered after m is declared so the
+		// closure can reference it)
+		for _, k := range m.Keys() {
+			m.Delete(k, func(_ int, _ bool) bool { return true })
+		}
+	})
 	// ... rest of the test
 })
```

Apply similar cleanup to other concurrent tests.
🧰 Tools
🪛 golangci-lint
[warning] 111-111: unused-parameter: parameter 'val' seems to be unused, consider removing or renaming it as _
(revive)
[warning] 187-187: unused-parameter: parameter 'val' seems to be unused, consider removing or renaming it as _
(revive)
server/backend/sync/memory/pubsub.go (1)
56-64
: Consider implementing the batch publish optimization
The TODO comments outline a well-thought-out solution for reducing lock contention and handling redundant events. The proposed approach using time windows and batch processing could significantly improve performance.
Would you like me to help implement this optimization? I can:
- Create a dedicated event collector with configurable time windows
- Implement deduplication logic for same-type events
- Set up a background goroutine for batch publishing
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (10)
- pkg/cmap/cmap.go (1 hunks)
- pkg/cmap/cmap_test.go (1 hunks)
- server/backend/sync/memory/pubsub.go (3 hunks)
- server/backend/sync/pubsub.go (4 hunks)
- server/config.go (1 hunks)
- server/config.sample.yml (1 hunks)
- server/logging/logging.go (0 hunks)
- server/packs/packs.go (1 hunks)
- server/rpc/interceptors/default.go (1 hunks)
- server/rpc/interceptors/yorkie.go (1 hunks)
💤 Files with no reviewable changes (1)
- server/logging/logging.go
🚧 Files skipped from review as they are similar to previous changes (6)
- server/backend/sync/pubsub.go
- server/config.go
- server/config.sample.yml
- server/packs/packs.go
- server/rpc/interceptors/default.go
- server/rpc/interceptors/yorkie.go
🧰 Additional context used
🪛 golangci-lint
pkg/cmap/cmap_test.go
[warning] 111-111: unused-parameter: parameter 'val' seems to be unused, consider removing or renaming it as _
(revive)
[warning] 187-187: unused-parameter: parameter 'val' seems to be unused, consider removing or renaming it as _
(revive)
server/backend/sync/memory/pubsub.go
[warning] 172-172: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
🔇 Additional comments (3)
server/backend/sync/memory/pubsub.go (3)
108-108
: Well-implemented concurrency improvement using cmap
The implementation effectively addresses the PR's objective of distributing mutexes per document:
- Each document now has its own mutex through the cmap implementation
- The `Upsert` operation safely handles the creation of new subscription groups
- The concurrent map eliminates the need for manual mutex management
This should significantly reduce contention when handling subscription requests across different documents.
Also applies to: 114-114, 132-137
172-174
:⚠️ Potential issue — Rename unused parameter 'subs' to '_'
The callback parameter `subs` is unused. Following Go conventions, rename it to `_` to indicate it's intentionally ignored.

```diff
-m.subscriptionsMap.Delete(docKey, func(subs *Subscriptions, exists bool) bool {
+m.subscriptionsMap.Delete(docKey, func(_ *Subscriptions, exists bool) bool {
 	return exists
 })
```

Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint
[warning] 172-172: unused-parameter: parameter 'subs' seems to be unused, consider removing or renaming it as _
(revive)
31-34
: Verify impact of exporting the Subscriptions type
The `Subscriptions` type is now exported (changed from lowercase to uppercase). While this change aligns with the concurrent map implementation, please ensure this doesn't unintentionally expose internal implementation details or break API compatibility.
What this PR does / why we need it:
Introduce cmap for distributing mutexes per documents
This commit modifies how subscription and unsubscription requests are handled by distributing mutexes per document. Previously, all requests competed for a single mutex on the server, leading to bottlenecks. This change introduces document-specific mutexes while retaining a mutex for the document list, enhancing concurrency and overall performance.
It may also be beneficial to explore throttling of identical events or sharding techniques.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation:
Checklist:
Summary by CodeRabbit
New Features
Bug Fixes
Configuration Updates
Documentation
Improved documentation for the `PushPull` function and logging mechanisms.
Style