
Conversation

@gabrielbosio (Collaborator)


- Pre-allocate Vec in MaybeRelocatableVisitor::visit_seq using size_hint
- Pre-allocate HashMap in ReferenceIdsVisitor::visit_map using size_hint
- Remove redundant .to_string() call when value is already a String

This reduces allocation overhead during JSON program deserialization; benchmarks show a ~5% improvement in initialize time. A sketch of the pre-allocation pattern is shown below.
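
A minimal sketch of the `size_hint` pre-allocation pattern described above, assuming serde with the `derive` feature. The visitor names mirror the bullet list, but the element types (a placeholder `MaybeRelocatable` newtype and `String`-to-`usize` reference ids) and the surrounding plumbing are simplified assumptions, not the actual cairo-vm definitions:

```rust
use std::collections::HashMap;
use std::fmt;

use serde::de::{MapAccess, SeqAccess, Visitor};

// Placeholder for the real cairo-vm type; only here so the sketch compiles.
#[derive(Debug, serde::Deserialize)]
struct MaybeRelocatable(u64);

struct MaybeRelocatableVisitor;

impl<'de> Visitor<'de> for MaybeRelocatableVisitor {
    type Value = Vec<MaybeRelocatable>;

    fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str("a sequence of program data values")
    }

    fn visit_seq<A: SeqAccess<'de>>(self, mut seq: A) -> Result<Self::Value, A::Error> {
        // Pre-allocate from the deserializer's size hint instead of letting
        // the Vec grow (and reallocate) element by element.
        let mut data = Vec::with_capacity(seq.size_hint().unwrap_or(0));
        while let Some(value) = seq.next_element::<MaybeRelocatable>()? {
            data.push(value);
        }
        Ok(data)
    }
}

struct ReferenceIdsVisitor;

impl<'de> Visitor<'de> for ReferenceIdsVisitor {
    type Value = HashMap<String, usize>;

    fn expecting(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str("a map from reference names to ids")
    }

    fn visit_map<A: MapAccess<'de>>(self, mut map: A) -> Result<Self::Value, A::Error> {
        // Same idea for the map: reserve buckets up front when a hint exists.
        let mut ids = HashMap::with_capacity(map.size_hint().unwrap_or(0));
        // `next_entry` already yields an owned `String` key, so no extra
        // `.to_string()` is needed before inserting.
        while let Some((key, value)) = map.next_entry::<String, usize>()? {
            ids.insert(key, value);
        }
        Ok(ids)
    }
}
```

Both `SeqAccess::size_hint` and `MapAccess::size_hint` return an `Option`, so `unwrap_or(0)` keeps behavior unchanged when the format cannot estimate the length; when a hint is available, the collection is allocated once instead of repeatedly.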
gabrielbosio added the `performance` label (Performance-related improvements or regressions) on Jan 23, 2026

github-actions bot commented Jan 23, 2026

**Hyper Threading Benchmark results**




hyperfine -r 2 -n "hyper_threading_main threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_main' -n "hyper_threading_pr threads: 1" 'RAYON_NUM_THREADS=1 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 1
  Time (mean ± σ):     23.244 s ±  0.100 s    [User: 22.403 s, System: 0.839 s]
  Range (min … max):   23.173 s … 23.315 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 1
  Time (mean ± σ):     23.007 s ±  0.085 s    [User: 22.206 s, System: 0.798 s]
  Range (min … max):   22.946 s … 23.067 s    2 runs
 
Summary
  hyper_threading_pr threads: 1 ran
    1.01 ± 0.01 times faster than hyper_threading_main threads: 1




hyperfine -r 2 -n "hyper_threading_main threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_main' -n "hyper_threading_pr threads: 2" 'RAYON_NUM_THREADS=2 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 2
  Time (mean ± σ):     12.458 s ±  0.047 s    [User: 22.463 s, System: 0.851 s]
  Range (min … max):   12.425 s … 12.491 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 2
  Time (mean ± σ):     12.484 s ±  0.186 s    [User: 22.419 s, System: 0.860 s]
  Range (min … max):   12.353 s … 12.616 s    2 runs
 
Summary
  hyper_threading_main threads: 2 ran
    1.00 ± 0.02 times faster than hyper_threading_pr threads: 2




hyperfine -r 2 -n "hyper_threading_main threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_main' -n "hyper_threading_pr threads: 4" 'RAYON_NUM_THREADS=4 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 4
  Time (mean ± σ):      9.835 s ±  0.246 s    [User: 34.949 s, System: 1.061 s]
  Range (min … max):    9.661 s … 10.009 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 4
  Time (mean ± σ):     10.038 s ±  0.222 s    [User: 35.072 s, System: 1.027 s]
  Range (min … max):    9.881 s … 10.195 s    2 runs
 
Summary
  hyper_threading_main threads: 4 ran
    1.02 ± 0.03 times faster than hyper_threading_pr threads: 4




hyperfine -r 2 -n "hyper_threading_main threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_main' -n "hyper_threading_pr threads: 6" 'RAYON_NUM_THREADS=6 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 6
  Time (mean ± σ):      9.574 s ±  0.007 s    [User: 35.116 s, System: 1.010 s]
  Range (min … max):    9.569 s …  9.579 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 6
  Time (mean ± σ):      9.767 s ±  0.472 s    [User: 35.465 s, System: 1.011 s]
  Range (min … max):    9.433 s … 10.100 s    2 runs
 
Summary
  hyper_threading_main threads: 6 ran
    1.02 ± 0.05 times faster than hyper_threading_pr threads: 6




hyperfine -r 2 -n "hyper_threading_main threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_main' -n "hyper_threading_pr threads: 8" 'RAYON_NUM_THREADS=8 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 8
  Time (mean ± σ):      9.540 s ±  0.006 s    [User: 35.616 s, System: 1.065 s]
  Range (min … max):    9.536 s …  9.544 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 8
  Time (mean ± σ):      9.568 s ±  0.061 s    [User: 35.860 s, System: 1.076 s]
  Range (min … max):    9.524 s …  9.611 s    2 runs
 
Summary
  hyper_threading_main threads: 8 ran
    1.00 ± 0.01 times faster than hyper_threading_pr threads: 8




hyperfine -r 2 -n "hyper_threading_main threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_main' -n "hyper_threading_pr threads: 16" 'RAYON_NUM_THREADS=16 ./hyper_threading_pr'
Benchmark 1: hyper_threading_main threads: 16
  Time (mean ± σ):      9.573 s ±  0.252 s    [User: 35.817 s, System: 1.143 s]
  Range (min … max):    9.395 s …  9.752 s    2 runs
 
Benchmark 2: hyper_threading_pr threads: 16
  Time (mean ± σ):      9.589 s ±  0.075 s    [User: 35.819 s, System: 1.120 s]
  Range (min … max):    9.536 s …  9.642 s    2 runs
 
Summary
  hyper_threading_main threads: 16 ran
    1.00 ± 0.03 times faster than hyper_threading_pr threads: 16



github-actions bot commented Jan 23, 2026

**Benchmark Results for unmodified programs** 🚀

| Command | Mean | Min | Max | Relative |
|:---|---:|---:|---:|---:|
| base big_factorial | 1.986 ± 0.070 s | 1.955 s | 2.181 s | 1.01 ± 0.04 |
| head big_factorial | 1.968 ± 0.006 s | 1.954 s | 1.975 s | 1.00 |
| base big_fibonacci | 1.925 ± 0.013 s | 1.910 s | 1.950 s | 1.00 |
| head big_fibonacci | 1.927 ± 0.005 s | 1.917 s | 1.934 s | 1.00 ± 0.01 |
| base blake2s_integration_benchmark | 6.939 ± 0.119 s | 6.843 s | 7.141 s | 1.00 |
| head blake2s_integration_benchmark | 6.940 ± 0.042 s | 6.873 s | 7.037 s | 1.00 ± 0.02 |
| base compare_arrays_200000 | 2.049 ± 0.006 s | 2.042 s | 2.061 s | 1.00 |
| head compare_arrays_200000 | 2.057 ± 0.014 s | 2.038 s | 2.079 s | 1.00 ± 0.01 |
| base dict_integration_benchmark | 1.347 ± 0.007 s | 1.338 s | 1.363 s | 1.00 |
| head dict_integration_benchmark | 1.376 ± 0.007 s | 1.367 s | 1.393 s | 1.02 ± 0.01 |
| base field_arithmetic_get_square_benchmark | 1.152 ± 0.005 s | 1.147 s | 1.165 s | 1.00 |
| head field_arithmetic_get_square_benchmark | 1.163 ± 0.004 s | 1.154 s | 1.167 s | 1.01 ± 0.01 |
| base integration_builtins | 7.020 ± 0.033 s | 6.978 s | 7.090 s | 1.00 |
| head integration_builtins | 7.075 ± 0.053 s | 7.020 s | 7.209 s | 1.01 ± 0.01 |
| base keccak_integration_benchmark | 7.044 ± 0.016 s | 7.017 s | 7.075 s | 1.00 |
| head keccak_integration_benchmark | 7.143 ± 0.069 s | 7.090 s | 7.333 s | 1.01 ± 0.01 |
| base linear_search | 2.039 ± 0.023 s | 2.025 s | 2.098 s | 1.00 |
| head linear_search | 2.054 ± 0.021 s | 2.036 s | 2.113 s | 1.01 ± 0.02 |
| base math_cmp_and_pow_integration_benchmark | 1.426 ± 0.006 s | 1.419 s | 1.434 s | 1.00 |
| head math_cmp_and_pow_integration_benchmark | 1.451 ± 0.006 s | 1.445 s | 1.463 s | 1.02 ± 0.01 |
| base math_integration_benchmark | 1.384 ± 0.012 s | 1.369 s | 1.416 s | 1.00 |
| head math_integration_benchmark | 1.410 ± 0.006 s | 1.402 s | 1.421 s | 1.02 ± 0.01 |
| base memory_integration_benchmark | 1.142 ± 0.010 s | 1.135 s | 1.164 s | 1.00 |
| head memory_integration_benchmark | 1.156 ± 0.011 s | 1.146 s | 1.184 s | 1.01 ± 0.01 |
| base operations_with_data_structures_benchmarks | 1.475 ± 0.005 s | 1.469 s | 1.489 s | 1.00 |
| head operations_with_data_structures_benchmarks | 1.502 ± 0.009 s | 1.490 s | 1.521 s | 1.02 ± 0.01 |
| base pedersen | 513.0 ± 1.7 ms | 511.2 ms | 515.9 ms | 1.00 |
| head pedersen | 514.8 ± 2.1 ms | 512.1 ms | 519.1 ms | 1.00 ± 0.01 |
| base poseidon_integration_benchmark | 587.0 ± 1.6 ms | 584.4 ms | 589.8 ms | 1.00 |
| head poseidon_integration_benchmark | 594.3 ± 4.0 ms | 589.6 ms | 601.9 ms | 1.01 ± 0.01 |
| base secp_integration_benchmark | 1.749 ± 0.049 s | 1.726 s | 1.888 s | 1.00 ± 0.03 |
| head secp_integration_benchmark | 1.745 ± 0.005 s | 1.737 s | 1.752 s | 1.00 |
| base set_integration_benchmark | 651.3 ± 2.2 ms | 649.2 ms | 655.0 ms | 1.00 |
| head set_integration_benchmark | 657.5 ± 5.1 ms | 652.5 ms | 667.0 ms | 1.01 ± 0.01 |
| base uint256_integration_benchmark | 3.947 ± 0.010 s | 3.934 s | 3.966 s | 1.00 |
| head uint256_integration_benchmark | 4.021 ± 0.028 s | 3.971 s | 4.075 s | 1.02 ± 0.01 |


codecov bot commented Jan 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.99%. Comparing base (9f72561) to head (d7952be).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2309   +/-   ##
=======================================
  Coverage   95.99%   95.99%           
=======================================
  Files         104      104           
  Lines       36914    36916    +2     
=======================================
+ Hits        35437    35439    +2     
  Misses       1477     1477           


