larsbergstrom edited this page Sep 8, 2014 · 1 revision

Agenda

  • Cargo update (jack)
    • mach
    • unit tests
    • updating dependencies
    • misc issues
  • Script compilation memory usage update (larsberg)
  • Rust blocking issues (jack)
  • Rust long-term issues (larsberg)
  • Android embedding (mbrubeck)
  • sharegl (jack)
  • Backoff waiting instead of busy spinning - saving power (Laleh)

Attending

lars, mbrubeck, brson, pcwalton, azita, jdm, mrobinson, laleh, cgaebel, jack, kmc, Manishearth

Cargo update

  • jack: The PR is up now. This moves Servo completely to Cargo. It also includes mach (from Mozilla) to do build/test orchestration. This is quite different. The first difference is that we don't have configure or make anymore - mach does all of the bootstrapping, building, and management. You can run cargo by hand, but you have to make sure your setup is OK first; there's a mach command for that. There are no submodules anymore, except for glfw-rs, because upstream cargoified after the Rust upgrade, so we can't use it. As soon as we do another Rust upgrade, we'll catch up and can make glfw no longer a submodule. w-p-t is also a submodule, and will continue to be. The main feature we are losing with Cargo is that 'cargo test' runs unit tests only on the top-level package - and we have none (in libservo). So there is currently no way to run tests for libcompositing, libgfx, etc. except by recompiling everything 12 times (once per subpackage directory). That won't fly, so right now we don't run unit tests. There are two features Cargo could add: one is sharing the target directory so that rebuilding a given submodule doesn't rebuild the whole thing. The other is a bug I have filed for running tests in arbitrary dependencies; there's some discussion about doing this automatically. The second part is updating dependencies, which means updating Cargo.lock. When Cargo finishes a build, it generates one. For libraries, it's not checked in. For Servo, we have one at the top level that is checked in, and Cargo uses it to figure out which versions of the repos to use for the build. If you want to update rust-layers, you run 'cargo update layers' and it will fetch the repo, write the head revision in, etc. If you need an older one, you get to edit that file by hand. There's a bug to let cargo update take that revision as an argument. So, the submodule merging has been replaced by editing a text file. Should be a net improvement over time.
  • jack: The other thing with dependencies is that if you're working on rust-layers, you can't get at the dependencies directly. You can create a .cargo directory with a file called config. Then you check out what you want to work on in your home directory and put an override in your cargo config, and Cargo will replace what it would have fetched with a pointer back at your checkout. The format allows you to comment things out, so I have all the dependencies listed there commented out, and I just edit the entries at the end when I finish pushing the PRs. There are a couple of remaining issues; most have bugs filed. A good way to find these is to look for bugs filed by me in the Cargo repo. Sometimes, when there's a custom build script, Cargo shells out to make. If it goes wrong, Cargo just says "makefile.cargo failed" with no indication of which one failed. Two of these we mentioned above. Also, make produces no output if there's nothing to rebuild, whereas Cargo tells you that all the libraries are still fresh (50 lines of output). Most of the other issues are already fixed. The build is faster than the old build and should be more correct. jdm/ms2ger: on the code generation for script, there "might need to be some work there." It should recompile script if the generated files change, but removing a file might not cause a rebuild. No bug is open for this yet.
  • brson: What is the compile time speedup attributed to?
  • jack: Our dependency tracking wasn't perfect before; I think Cargo gives us more parallelism. Part of it is also that there are two levels: Cargo does what it can, but under the hood "make -j4" is being called, whereas the previous build was only -j4 at the top level.
  • larsberg: Travis issues with lots of GCC compiles?
  • jack: Not currently; skia and mozjs seem to be fine, and they are sequential blockers for the rest of Servo, especially its big libraries. I haven't yet filed a bug with Cargo for a command-line argument to control the degree of parallelism. I went through the PR and updated the readmes and instructions. There's a new building document that explains what you can do with mach, maybe in the readme. The dependency instructions were out of date. So the docs should have an answer; if it's not there, we can get it added.
  • brson: Are you using a cargo snapshot?
  • jack: We use the same Rust snapshot we've used before - bootstrap rust looks at the hash and does the same thing we used to do. For Cargo, we can just use the latest nightly.
  • brson: Assumes stuff never breaks?
  • jack: Yes. It hasn't broken yet! It will only break if they change the way a command works, in which case we would fix our build, or if it starts relying on some new Rust compiler flag that didn't exist. If that happens, we'll switch to a Cargo snapshot system like we do for Rust. The nice thing is that if you already have a system Cargo, you can just use it. One more thing: the mach settings are in a file called .servobuild, which controls which versions of Cargo and Rust it uses. This replaces the configure settings like local Rust root, etc. By default, it'll use the bootstrapped ones, but you can give it a path to a Rust on your system, or use the one on your PATH. So you can do that for your own Cargo or Rust versions.
  • jack: Android builds and libembedding are sort of special. They got ported over into a ports directory: the old libembedding is now called CEF, and Android is ports/android. They have to be special since Cargo doesn't support circular dependencies, and they both depend on Servo. To compile them, you have to change into ports/android and do the build there. There's a makefile in ports/android that will do the build (we may mach-ify it later). CEF is done the same way, but uses Cargo. There's a bug where compiling CEF on Linux complains about linker options, but I can't see what's passing these two options, so I filed a bug for that this morning. Questions? I'm hoping it will land any time now. A couple of things are blocking us: the second clump of w-p-t tests is failing, and nobody knows why; also the CEF linker error. Not sure if I'll block landing on that; I may just switch to testing it on OSX (it works on OSX). I think those are the only two issues.
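The override workflow jack describes might be sketched in a .cargo/config file roughly like this (the paths and crate names here are hypothetical, illustrating the mechanism rather than Servo's actual file):

```toml
# Hypothetical sketch of a .cargo/config path override.
# An uncommented path makes Cargo use the local checkout instead of the
# revision pinned in Cargo.lock; comment it back out when you're done.
paths = [
    "/home/me/rust-layers",
    # "/home/me/rust-azure",
    # "/home/me/rust-geom",
]
```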

Script

  • larsberg: Peak libscript memory usage is 1.2GB on my machine. We also seem to build another large crate sometimes when we compile libscript. It looks like on Travis we'll have to compile sequentially until Travis upgrades its build machines. Is 1.2GB even reasonable for 80k lines?
  • pcwalton: it's not good.
  • brson: it's on the same order as librustc. sternsteiner's parallel patch may fix this even if we run it serially. eddyb is working on reducing memory usage too.
  • pcwalton: is most of the memory usage LLVM?
  • brson: I think so.
  • pcwalton: As I recall rustc memory usage had two peaks, one from us and one from LLVM and they are about the same. That's good and bad. We're no worse than LLVM but fixing it won't solve all the problems.
  • kmc: Isn't this because we emit lots of IR and rely on the optimizer?
  • pcwalton: that's not so much the case anymore. we've gotten a lot better. most of the low-hanging fruit has been picked at this point.
  • brson: is it possible to split codegen into multiple crates?
  • jdm: every time I try to do this it doesn't actually work. It seems like a big challenge.
  • pcwalton: we are working on this on the rust side. the parallel compilation stuff splits code into multiple llvm modules and that should allow rust to split up crates on its own, even ones with cyclic dependencies.
  • kmc: Does that affect codegen?
  • pcwalton: Yes, it decreases optimization significantly. Like a 30% performance hit. But in terms of the edit/debug/run cycle, it's much faster.
  • jack: I have a patch that adds a -j flag to mach builds, so we can try -j1 on Travis as a near-term workaround. Hopefully that won't cause too many problems; the way the Travis machines work, Linux is 2x as fast, but Mac is the only one that can run w-p-t. As long as everything finishes in time, we should be OK with the increased build cycle time. The out-of-memory issue only seems to happen on Linux, which should be fine since those machines are already 2x as fast.

Rust blocking issues

  • jack: We can't upgrade due to issue 16483, the LLVM assertion when compiling Servo. I'm curious what progress has been made on it. If we can't upgrade Rust, we can't land html5ever and a couple of other things.
  • jdm: Have we tested if the LLVM upgrade helped at all?
  • brson: Hasn't landed. We have one approved and in the queue.
  • jack: So we should test that when it lands. Who found this? Cameron. I'll see if he can re-run with the LLVM upgrade. Or kmc since it's blocking your work :-)
  • kmc: Sure; I can take a look.
  • jack: Otherwise, we should let the Rust team know.
  • kmc: LLVM upgrade is on a branch?
  • brson: There's a PR (16927). You can grab it from dotdash's branch.
  • kmc: Cool.

Android embedding

  • mbrubeck: I've been looking at the build/embedding story. I learned a lot and wanted to share what I found. First, what we have on Android today is an application like the desktop app: a C++ program with a main() that starts up, uses GLUT (a windowing/input framework for OpenGL, similar to GLFW), gets a GL surface, starts the Servo loop, passes it a hardcoded URL, and runs. That basically works and is how you might implement a game, but it is not running any of the Android Java framework code that does things like receive broadcasts (e.g., open a URL). If we want to do that, we need Servo running as part of an Android Activity. The trouble I've run into is that once you start an Activity, it creates a Window, which it owns, and our C++ and Rust code no longer has any ability to do anything with it.
  • pcwalton: Isn't there a GLActivity? I seem to remember this pain a couple of years ago when working on Firefox for Android. If you look at how Fennec does it, you may be able to.
  • mbrubeck: Long-term, we want to have a View, which is a widget you can embed. So you might have a ToolbarView, MenuBarView, and the ServoView. Fennec creates a SurfaceView, and then uses JNI to access the GLContext, which it passes to Gecko.
  • pcwalton: That's what we'll need to do in Servo.
  • mbrubeck: Downside is private JNI tricks.
  • pcwalton: So many people are doing this that Google won't break all these apps.
  • mbrubeck: I also looked at what the built-in WebKit view does and it's similar but can be a little cleaner because it has access to private implementation classes.
  • pcwalton: This is par for the course on Android. Is there any chance of using Firefox for Android as the shell? The first thing is just the basic SurfaceView. But after that, it seems like one of the easiest chromes would be Firefox for Android. It has no XUL, and since it's JNI, it's pretty isolated from Gecko's internals since it has to do message passing.
  • mbrubeck: Seems like a possibility. I have a branch where I started copy/pasting code for accessing the GLSurface. They have an IPC layer that passes messages from Java through JNI to C++ code to JS...
  • pcwalton: Based on JSON?
  • mbrubeck: Yes.
  • pcwalton: That's great for Servo, since it's language independent!
  • mbrubeck: And we share a JS engine, so we should even be able to share code. That gets into the other part, which is once I have the GLSurface, there's no way to pass it into GLUT/GLFW, as they want to take it all over. We'll need not just access to the Android View Surface, but also something to replace the whole windowing layer to pass messages in.
  • pcwalton: Now that we have a concept of a port/, we have a place to put that code.
  • jack: It's just a directory :-) There's also a new Rust project called InitGL or GLInit, which is basically lower-level than GLFW and just helps create the context. Then we could get rid of all the windowing. In theory, libservo should have no windowing of its own.
  • mbrubeck: I think the CEF port is going to run into the same issue. I think we'll want to replace the thing where Servo owns the window to just owning a surface. So, thanks for the feedback; I know what I need to go forward.
  • jack: I don't think the needs of Android are any different from desktop, so you could do it there. Then we'd move the windowing code out of libservo on desktop, too, which we need to do anyway.

ShareGL

  • jack: I was under the impression we were using it. When I was synchronizing ports, though, I realized it was only an 'extern crate' in the CEF code, but not in the rest of Servo. What does ShareGL do, and why did we lose it?
  • pcwalton: It was the interface between some platform-specific compositing code that tried to share textures between processes (NativeTextureSomething-like)... it's very old. We should get rid of it.
  • jack: So we should not have it at all?
  • pcwalton: Yes, I duplicated its functionality and never removed the submodule.
  • jack: Good; wanted to make sure I didn't break something during cargoification. I'll check with zmike and then get rid of it.

Backoff waiting

  • laleh: In the transition to backoff waiting, I realized it's not good to start with backoff immediately; if we busy-spin first, we don't lose performance. So I start with busy spinning and then move to backoff. Now performance is mostly the same, and in some cases even better on Wikipedia. We see around 3-5% perf savings, and up to 5-6% power savings.
  • pcwalton: Makes sense; we overschedule right now, and I have a feeling the perf increase may be from that. Because busy-waiting keeps the thread active and contending with other threads.
  • jack: We use double the power of Firefox for Android. So how does the 5-6% compare?
  • larsberg: I suspect we'll get better numbers on mobile.
  • laleh: These are desktop numbers. I'll get the numbers on mobile.
  • jack: That makes sense; if it was affecting mobile, I'd expect to see a bigger change there.