DEBUG-2647 Run multiple Ruby micro-benchmark files #3810
New file (+52 lines):

```ruby
# Used to quickly run benchmark under RSpec as part of the usual test suite, to validate it didn't bitrot
VALIDATE_BENCHMARK_MODE = ENV['VALIDATE_BENCHMARK'] == 'true'

return unless __FILE__ == $PROGRAM_NAME || VALIDATE_BENCHMARK_MODE

require 'open3'

class GemLoadingBenchmark
  def benchmark_gem_loading
    # This benchmark needs to be run in a clean environment where datadog is
    # not loaded yet.
    #
    # Now that this benchmark is in its own file, it does not need
    # to spawn a subprocess IF we would always execute this benchmark
    # file by itself.
    output, status = Open3.capture2e('bundle', 'exec', 'ruby', stdin_data: <<-RUBY)
      raise "Datadog is already loaded" if defined?(::Datadog::Core)

      lib = File.expand_path('../lib', '#{__dir__}')
      $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)

      VALIDATE_BENCHMARK_MODE = #{VALIDATE_BENCHMARK_MODE}
      require 'benchmark/ips'

      Benchmark.ips do |x|
        # Gem loading is quite slower than the other microbenchmarks
        benchmark_time = VALIDATE_BENCHMARK_MODE ? { time: 0.001, warmup: 0 } : { time: 60, warmup: 5 }
        x.config(**benchmark_time)

        x.report("Gem loading") do
          pid = fork { require 'datadog' }

          _, status = Process.wait2(pid)
          raise unless status.success?
        end

        x.save! "#{__FILE__}-results.json" unless VALIDATE_BENCHMARK_MODE
        x.compare!
      end
    RUBY

    print output

    raise "Benchmark failed with status #{status}: #{output}" unless status.success?
  end
end

puts "Current pid is #{Process.pid}"

GemLoadingBenchmark.new.instance_exec do
  benchmark_gem_loading
end
```
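In the file above, each `x.report` iteration forks a child process that does the actual `require 'datadog'`, so every iteration measures a genuinely cold load while the parent stays clean. A minimal standalone sketch of that fork-and-wait pattern (the helper name and the `'json'` stand-in feature are illustrative, not from the PR; it needs a platform where `Kernel#fork` is available):

```ruby
# Illustrative helper (not from the PR): time a cold `require` by performing
# it in a forked child, so the parent process never loads the feature itself.
def time_require_in_child(feature)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  pid = fork { require feature } # the child pays the full cold-load cost
  _, status = Process.wait2(pid) # reap the child, as the benchmark does
  raise 'child process failed' unless status.success?
  Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
end

puts format('cold require of json: %.4fs', time_require_in_child('json'))
```

Without the fork, only the first iteration would pay the load cost, since `require` caches already-loaded features in `$LOADED_FEATURES`.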
New file (+14 lines):

```sh
#!/bin/sh

# This script is invoked by benchmarking-platform shell scripts
# to run all of the benchmarks defined in the tracer.

set -ex

for file in \
  `dirname "$0"`/tracing_trace.rb \
  `dirname "$0"`/gem_loading.rb \
  `dirname "$0"`/profiler_*.rb;
do
  bundle exec ruby "$file"
done
```
Review thread:

- "So I don't think it's blocking for this PR, but I wonder if we should try to put in some kind of check for files that aren't run. E.g. something along the lines of this check? Otherwise, it can be a bit confusing that if anyone adds a new profiler benchmark, it gets automatically picked up, but if you add any other kind of benchmark, it doesn't? Alternatively, should we just run every"
- "I was actually not sure what the right approach to invoking "all" benchmarks was. Let me enumerate the files explicitly in this PR and try to improve the situation later."
- "Move of supporting files to"
- "Sounds reasonable -- now there's less magic, but it's a bit more consistent 👍"
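The completeness check the reviewer floats could take a shape like the following sketch (the helper name and directory layout are assumptions, not code from the repository): glob the benchmark directory and fail if any file on disk is missing from the runner's explicit list.

```ruby
# Hypothetical check (not from the PR): list benchmark files on disk that a
# runner script does not enumerate, so a newly added benchmark that was never
# wired into run_all.sh fails loudly instead of silently not running.
def unlisted_benchmarks(benchmarks_dir, listed)
  on_disk = Dir.glob(File.join(benchmarks_dir, '*.rb')).map { |f| File.basename(f) }
  on_disk - listed
end

# Usage sketch:
# missing = unlisted_benchmarks('benchmarks', %w[tracing_trace.rb gem_loading.rb])
# raise "Not covered by the runner: #{missing.join(', ')}" unless missing.empty?
```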
Review thread:

- "Don't forget to update https://github.com/DataDog/dd-trace-rb/blob/master/spec/datadog/tracing/validate_benchmarks_spec.rb#L12 :) (Technically this is not a tracing-specific thing but I think it's fine to keep it there for now? -- as long as we test it somewhere)"
- "I added a readme to the benchmarks directory with this point as a reminder."
- "And moved the file to top level since it validates all benchmarks, not just tracing."
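The spec linked above exists to run each benchmark briefly with `VALIDATE_BENCHMARK=true` (which the benchmark file honors by shrinking its run time to ~1ms) so CI catches bitrot. A hedged sketch of that validation step in plain Ruby, rather than the actual RSpec file (the helper name is an assumption):

```ruby
require 'open3'

# Assumed shape (not the actual spec): execute a benchmark file with
# VALIDATE_BENCHMARK=true, which makes it run for a tiny duration instead of
# the full 60 seconds, and report whether it completed successfully.
def validate_benchmark(file)
  output, status = Open3.capture2e({ 'VALIDATE_BENCHMARK' => 'true' }, 'ruby', file)
  [status.success?, output]
end
```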