Effects: double translation of functions and dynamic switching between direct-style and CPS code #1461
Conversation
I think this PR is best reviewed as a whole, not commit by commit.
It looks like these two effect-handler benchmarks are slower with this PR, 18 % and 8 % slower, respectively. I need to spend some time on it to understand why.
Chameneos runs are too short. You should increase the input size; it is taken as a command-line argument. Something that runs for a second is more representative, as it eliminates the noise due to JIT and other constant-time costs.
Good point. I find that chameneos is about 9.5 % slower with this PR (3.753 s versus 3.428 s). My theory is that effect handlers are slightly slower due to the fact that function calls in CPS code cost an additional field access (applying `f.cps` rather than `f` directly).
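The extra field access can be pictured with a small JavaScript sketch (hypothetical names, not actual js_of_ocaml output): the CPS variant lives in the `f.cps` field described in the PR, and calling it costs a property read plus an extra continuation argument.

```javascript
// Hypothetical sketch of the two calling conventions (not actual
// js_of_ocaml output): a direct-style call vs. a CPS call via `f.cps`.

// Direct-style version: ordinary call, ordinary return.
function square(x) { return x * x; }

// CPS version, stored in a field of the direct-style function:
// calling it costs a property read (`square.cps`) plus passing `k`.
square.cps = function (x, k) { return k(x * x); };

// A call site in direct-style code:
const direct = square(5);

// The same call in CPS code: one extra field access, one extra argument.
let viaCps;
square.cps(5, (r) => { viaCps = r; });

console.log(direct, viaCps); // 25 25
```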
I agree with the reasoning and do not expect real programs to behave like chameneos.
Btw, the original numbers aren't useful to understand the improvements brought about by this PR. For this, you need 3 variants:
I'd be interested to see the difference between (2) and (3) in addition to the current numbers, which show the difference between (1) and (3).
Note that I had to do that in #1397 (see js_of_ocaml/compiler/lib/generate.ml, lines 954 to 959 at 3b3f66b).
I believe that the form
Here are the graphs showing the difference between (2) and (3).
Thanks. The execution time improvement is smaller than what I would have expected. Is that surprising to you, or does it match your expectation? Also, it would be useful to have all the variants plotted in the same graph.
It more or less matches my expectation. My reasoning is the following: on most of these small, monomorphic benchmarks, the static analysis will eliminate most CPS calls at compile time. Therefore, the dynamic switching will not change the run time a lot, and may slightly worsen it. On benchmarks that heavily use effect handlers, I also expect the run time to be worse: most of the time is spent in CPS code anyway, so the dynamic switching only adds overhead. I therefore expect the biggest improvements to happen on larger programs, on which the static analysis does not work as well due to higher-order calls and mutability, and which do not spend most of their time in effect handlers. If my hypothesis is correct, then the question is: is this trade-off acceptable? Keeping in mind that there might be ways to improve this PR to recover more performance.
I have updated the PR message with new graphs showing all the variants.
I tried to build a benchmark that uses Domainslib, as an example of a more typical effect-using program. But the linker complains that a primitive is missing. I tried to add it in a new file. Also, when I build js_of_ocaml I'm getting a lot of primitive-related messages that I don't really understand:
I solved it by downgrading to lockfree 0.3.0, as suggested by @jonludlam. But the resulting program never completes. I assume that the mock parallelism provided by the runtime does not suffice for using Domainslib: a "domain" must be stuck spin-waiting forever, or something similar.
I think that this PR is ready for review. The only two problems that prevent the CI from being green are:
Just add it to the stdlib module, `compiler/lib/stdlib.ml`.
Lambda_lifting doesn't seem to be used anymore; is that expected? Should Lambda_lifting_simple replace Lambda_lifting?
From a quick look at the PR, the benefit of such a change is not clear. Can you highlight examples where we see clear improvements?
As I discovered just today, there are now two lambda-lifting passes, for two different reasons. For this reason, I am not convinced that there is an interest in merging the two modules.
I am convinced that most real-world effect-using programs will benefit from this PR, for the reasons given in my message above; but it's hard to prove it, because we don't yet have examples of such typical programs that work in JavaScript. Programs using Domainslib don't work well with js_of_ocaml (and are arguably not really relevant, as JS is not a multicore language). Concurrency libraries like Eio are a more natural fit. I am currently trying to cook up a benchmark using the experimental JS backend for Eio.
Given the size impact of this change, it would be nice to be able to disable (or control) this optimization. There are programs that would not benefit from it; it would be nice not to have to pay the size cost for no benefit. The two lambda-lifting modules are confusing: we should either merge them (allowing duplicated code to be shared) or at least find better names.
Do you know where this comes from? Does it mostly come from the new lambda-lifting pass, or are other passes heavily affected as well (generate, js_assign, ...)?
Thank you for the review, and sorry for the delayed response; I have been prioritizing another objective over the past few weeks. One update: there is no performance gain on programs that use Eio, which is a shame, as Eio is expected to be one of the central uses of effects. More generally, when the program stays almost all the time within at least one level of effect handlers, there is essentially no performance gain, as we run the CPS versions of every function. And I expect this programming pattern (installing a topmost effect handler at the beginning of the program) to be the most common with effects. So it is unclear to me yet whether the implementation of double translation can be adapted to accommodate this.
I'm trying to understand the results in this PR, but given the current results, the sense I get is that double translation is
Is this a correct reading? What is missing (or I am failing to see) are cases where double translation is faster where it is expected to be. Do you have a (micro-)benchmark that exhibits the best-case scenario for double translation in terms of running time? Such a program would be useful for determining the upper limit of the performance benefit; any real program will show a smaller improvement.
I don't think the data points unambiguously toward a slowdown. Microbenchmarks are sometimes faster, sometimes slower. Admittedly, when they are slower, they can be up to 20 % slower, but this only happens on one benchmark. Macro-benchmarks that do not use effect handlers show no significant difference in run time. It is true that the code size is larger.
I expect the improvements to happen on larger programs which do not spend most of their time in effect handlers. Unfortunately, this does not seem to be a typical use case as of now: Eio programs, for instance, tend to install an effect handler at the start of the program, in which case dynamic switching will probably not help and will only increase the code size. I can try to write programs that would benefit from dynamic switching; I just fear they will not represent real-life use cases.
I think that in its current state you're probably not going to see much benefit on programs using effects for concurrency. However, you probably would see benefit in, say, a program that used effects to implement generators. Such programs use effects, but don't spend most of their execution time within the effect handler. The real win with this approach requires some analysis that can tell you when a particular call of a function won't use effects. That would come naturally out of the type-based approaches that we'll be experimenting with at Jane Street, but you could also probably write other static analyses to obtain such information. For instance, you could look at a call to
Our current data flow analysis does that, but it is not context-sensitive, i.e., each function is analyzed in a context which is the join of all possible call contexts. So heavily used higher-order functions like
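The join-over-all-contexts behaviour can be illustrated with a toy sketch (entirely hypothetical names and propagation rule, written in JavaScript for brevity): each function gets a single "called from an effectful context" flag, so one effectful call site contaminates every other use of a shared higher-order function.

```javascript
// Toy sketch (hypothetical) of a context-INsensitive analysis: each
// function has ONE flag, the join over all its call contexts.

const callGraph = {
  // caller -> callees (higher-order calls already resolved)
  main_pure: ["map"],
  main_eff:  ["map"],      // this caller runs under an effect handler
  map:       ["callback"],
};

const usesEffects = { main_eff: true }; // seed: runs under a handler

// Propagate: anything called from an effectful context is marked too.
// Because `map` has a single flag, one effectful caller "contaminates"
// its pure uses as well.
let changed = true;
while (changed) {
  changed = false;
  for (const [caller, callees] of Object.entries(callGraph)) {
    if (!usesEffects[caller]) continue;
    for (const callee of callees) {
      if (!usesEffects[callee]) { usesEffects[callee] = true; changed = true; }
    }
  }
}

console.log(Boolean(usesEffects.map), Boolean(usesEffects.callback));
```

With a context-sensitive analysis, the call from `main_pure` could keep its own, unflagged view of `map`; here it cannot.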
Would this program make sense? https://github.com/ocaml-multicore/effects-examples/blob/master/generator.ml

The program benchmarks three different versions of iterating over a binary tree: a plain iterator, a generator implemented by hand using CPS, and one using effect handlers. The native-code executable produces the following output:

```
% dune exec ./generator.exe
Iter: mean = 0.100558, sd = 0.000017
Gen_cps: mean = 0.336419, sd = 0.000009
Gen_eff: mean = 1.506840, sd = 0.000424
```

I would be curious to see the performance of this program.
Thank you for suggesting this benchmark. Here is the output with only the static analysis (current state on master):
Here is the output with static analysis + double translation (this PR):
A first notable thing is that even the effect-using code (`Gen_eff`) runs faster with double translation. How is this possible? It's because all three benchmarks use a common function. The consequences of this "contamination" are aggravated if we modify the benchmark so that `consume_all` also calls `Tree.iter`:

```diff
diff --git a/test_generator/generator.ml b/test_generator/generator.ml
index df7f2741f6..49e1a81073 100644
--- a/test_generator/generator.ml
+++ b/test_generator/generator.ml
@@ -17,6 +17,8 @@ module type TREE = sig
   (** [deep n] constructs a tree of depth n, in linear time, where every node at
       level [l] has value [l]. *)
 
+  val iter : ('a -> unit) -> 'a t -> unit
+
   val to_iter : 'a t -> ('a -> unit) -> unit
   (** Iterator function. *)
@@ -123,7 +125,12 @@ let t = Tree.deep n
 let iter_fun () = Tree.to_iter t (fun _ -> ())
 let m, sd = benchmark iter_fun 10
 let () = printf "Iter: mean = %f, sd = %f\n%!" m sd
-let rec consume_all f = match f () with None -> () | Some _ -> consume_all f
+let t' = Tree.deep 4
+let total = ref 0
+let rec consume_all f =
+  match f () with
+  | None -> ()
+  | Some n -> Tree.iter (fun m -> total := !total + m + n) t'; consume_all f
 let gen_cps_fun () =
   let f = Tree.to_gen_cps t in
```

Output with only the static analysis:
Output with static analysis + double translation:
Here, all three benchmarks are much faster with double translation, even the one using effects. This is also the reason why double translation would combine very well with Jane Street's locality modes for effects: as @vouillon's experiments have shown, they allow many more functions to be kept in direct style, but, paradoxically, this forces switching more often between direct style and CPS, which is costly (another case of contamination). With double translation, this is not the case: all these functions can be called in direct style.
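To make the contamination argument concrete, here is a minimal JavaScript-level sketch (illustrative names, not actual generated code): with double translation, a helper shared by an effect-free benchmark and an effect-using one exists in both styles, so the pure caller no longer pays for CPS.

```javascript
// Illustrative sketch (hypothetical names): `iter` is shared by an
// effect-free benchmark and an effect-using one. With double
// translation it exists in both styles, so the pure caller can use the
// fast direct version instead of being "contaminated" into CPS.

function iter(f, xs) {            // direct-style version
  for (const x of xs) f(x);
}
iter.cps = function (f, xs, k) {  // CPS version, used under a handler
  let i = 0;
  function loop() {
    if (i >= xs.length) return k();
    f(xs[i++]);                   // callback assumed effect-free here
    return loop();
  }
  return loop();
};

// The pure benchmark calls the fast direct version...
let sum = 0;
iter((x) => { sum += x; }, [1, 2, 3]);

// ...while code running under an effect handler uses the CPS version.
let sumCps = 0;
iter.cps((x) => { sumCps += x; }, [1, 2, 3], () => {});

console.log(sum, sumCps); // 6 6
```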
Thanks for the benchmark results. The purpose of the PR is now apparent from the performance side. |
This PR is now rebased on master and ready for review again (cc @vouillon). Of note: as previously, a number of expect tests underwent some variable-name substitution, i.e., they are the same up to alpha-equivalence. I have not looked closely at the cause; I assume that the CPS transform now creates some new variables that are not always used. If this is a problem, I can take some time to look into it.
I pushed a commit which adds a new primitive, `caml_assume_no_effects`.
Passing a function `f` as argument to `caml_assume_no_effects` guarantees that, when compiling with `--enable doubletranslate`, the direct-style version of `f` is called, which is faster than the CPS version. As a consequence, performing an effect in a transitive callee of `f` will raise `Effect.Unhandled`, regardless of any effect handlers installed before the call to `caml_assume_no_effects`, unless a new effect handler was installed in the meantime. Usage:

```
external assume_no_effects : (unit -> 'a) -> 'a = "caml_assume_no_effects"

...
assume_no_effects (fun () -> (* will be called in direct style... *) ...)
...
```

When double translation is disabled, `caml_assume_no_effects` simply acts like `fun f -> f ()`.
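A minimal sketch of these semantics in JavaScript (hypothetical runtime shape, not the actual js_of_ocaml runtime): ordinary calls under a handler go through the `.cps` variant, while `caml_assume_no_effects` invokes the direct-style closure regardless.

```javascript
// Hypothetical sketch (not the actual runtime): each double-translated
// function carries a CPS variant in `f.cps`.
function mk(direct, cps) { direct.cps = cps; return direct; }

const work = mk(
  () => "direct",   // fast direct-style path
  (k) => k("cps")   // path normally used under an effect handler
);

// Inside a handler, an ordinary call would go through the CPS version:
let viaCps;
work.cps((r) => { viaCps = r; });

// `caml_assume_no_effects` guarantees the direct version runs, even
// under a handler; an effect in a transitive callee would then raise
// Effect.Unhandled. With double translation disabled it is `f => f()`.
function caml_assume_no_effects(f) { return f(); }
const viaAssume = caml_assume_no_effects(work);

console.log(viaCps, viaAssume); // cps direct
```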
Based on an initial suggestion by @lpw25, we generate two versions of each function that may be called from inside an effect handler (according to the existing static analysis): a direct-style version and a CPS version. At runtime, the direct-style versions are used, except when entering an effect handler, in which case only CPS code is run until the outermost effect handler is exited. Such functions are conceptually transformed into pairs of functions. For performance, the CPS closure is stored in the `f.cps` field of the direct-style function `f`. This is joint work with @vouillon.

We encountered a design difficulty: when functions are transformed into pairs of functions, it is unclear how to deal with captured identifiers when the functions are nested. To avoid this problem, functions that must be transformed are lambda-lifted, and thus no longer have any free variables except toplevel ones.
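The mechanism can be sketched as follows (a hypothetical, simplified model in JavaScript; the real code generation is more involved): each translated function carries its CPS variant in `f.cps`, and call sites pick a version depending on whether an effect handler is currently active.

```javascript
// Simplified sketch of dynamic switching (illustrative, not what
// js_of_ocaml actually emits). A depth counter tracks whether we are
// inside an effect handler.

let handlerDepth = 0; // 0 = direct style; > 0 = CPS until outermost exit

function mk(direct, cps) { direct.cps = cps; return direct; }

const twice = mk(
  (x) => 2 * x,        // direct-style version
  (x, k) => k(2 * x)   // CPS version
);

// A generic call site, as the compiler might emit it:
function call1(f, x, k) {
  return handlerDepth > 0 ? f.cps(x, k) : k(f(x));
}

const log = [];
call1(twice, 21, (r) => log.push(r)); // outside any handler: direct

handlerDepth++;                       // entering an effect handler
call1(twice, 21, (r) => log.push(r)); // inside: CPS version
handlerDepth--;                       // outermost handler exited

console.log(log.join(",")); // 42,42
```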
The transform is rather successful in preserving the performance of small / monomorphic programs. I hypothesize that `hamming` is slower for the same reason as on current master: it uses lazy values, which are an obstacle for the global flow analysis. (Edit: I am not able to reproduce a slowdown on `hamming` in my latest round of benchmarks.) `fft` is slightly slower, however; the generated codes look very similar.

The difference becomes negligible on large programs. CAMLboy is actually… 3 % faster with effects enabled (compared to 25 % slower previously): 520 FPS instead of 505 FPS, although the standard deviation is high at ~11 FPS, so it would be fair to say that the difference is not discernible.
`ocamlc` is not discernibly slower, either (compared to 10 % slower previously).

As some functions must be generated in two versions, the generated code is larger (up to 76 % larger), though only a few percent larger when compressed.
Compiling `ocamlc` is about 70 % slower; the resulting file is 64 % larger when compressed.