From 4c84f8b3c254d9f2eb22e993557e8fd4dd0027a1 Mon Sep 17 00:00:00 2001
From: Gohla

If you’re using a Rust editor or IDE, it probably also has a mechanism for running cargo on your project.
You can of course use that in place of running cargo from a terminal.
Inspect the build log with:
Compiling pie v0.1.0 (/pie)
- Finished dev [unoptimized + debuginfo] target(s) in 0.06s
+ Finished dev [unoptimized + debuginfo] target(s) in 0.07s
Simple Test
Run the test by running cargo test.
The output should look something like:
Compiling pie v0.1.0 (/pie)
- Finished test [unoptimized + debuginfo] target(s) in 0.29s
+ Finished test [unoptimized + debuginfo] target(s) in 0.37s
Running unittests src/lib.rs (target/debug/deps/pie-7f6c7927ea39bed5)
running 1 test
diff --git a/2_incrementality/6_example/index.html b/2_incrementality/6_example/index.html
index a03e2e7..3f66795 100644
--- a/2_incrementality/6_example/index.html
+++ b/2_incrementality/6_example/index.html
@@ -238,7 +238,7 @@ Reuse
assert_eq!(&output, "Hi");
cargo run --example incremental
should produce output like: Compiling pie v0.1.0 (/pie)
- Finished dev [unoptimized + debuginfo] target(s) in 0.35s
+ Finished dev [unoptimized + debuginfo] target(s) in 0.36s
Running `target/debug/examples/incremental`
A) New task: expect `read_task` to execute
Reading from input.txt with Modified stamper
@@ -315,7 +315,7 @@ No dependenci
Run a single test in the top_down
integration test file with: cargo test --test top_down test_reuse
The second command should result in something like:
- Finished test [unoptimized + debuginfo] target(s) in 0.05s
+ Finished test [unoptimized + debuginfo] target(s) in 0.02s
Running tests/top_down.rs (target/debug/deps/top_down-e757e81b664b50ba)
running 1 test
diff --git a/3_min_sound/4_fix_task_dep/index.html b/3_min_sound/4_fix_task_dep/index.html
index cba7e56..93b73bc 100644
--- a/3_min_sound/4_fix_task_dep/index.html
+++ b/3_min_sound/4_fix_task_dep/index.html
@@ -350,23 +350,23 @@ Manifest th
Inspect the build log with cargo test --test top_down test_no_superfluous_task_dependencies
.
The third (last) build log should look like this:
-→ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- ✗ /tmp/.tmpNP19gu/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098540, tv_nsec: 312564915 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098540, tv_nsec: 316564903 })))
- ▶ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- - /tmp/.tmpNP19gu/in.txt
+→ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ ✗ /tmp/.tmpFOBrn0/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098936, tv_nsec: 637055517 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098936, tv_nsec: 641055544 })))
+ ▶ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ - /tmp/.tmpFOBrn0/in.txt
◀ Ok(String("HeLLo, WorLd!"))
← Ok(String("HeLLo, WorLd!"))
- ✗ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
- ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- → ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
+ ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ → ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("HeLLo, WorLd!"))
- ✗ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
- ▶ ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
+ ▶ ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("HeLLo, WorLd!"))
◀ Ok(String("hello, world!"))
← Ok(String("hello, world!"))
@@ -433,14 +433,14 @@ Finding t
We only manifested the bug in the last test due to having a chain of 2 task dependencies, and by carefully controlling what is being executed and what is being checked.
Recall the second build in the test_no_superfluous_task_dependencies
test.
The build log for that build looks like:
-→ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- → ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- ✓ /tmp/.tmpNP19gu/in.txt
+→ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ → ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ ✓ /tmp/.tmpFOBrn0/in.txt
← Ok(String("Hello, World!"))
- ✓ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✓ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("hello, world!"))
◀ Ok(String("HELLO, WORLD!"))
← Ok(String("HELLO, WORLD!"))
@@ -719,24 +719,24 @@ Fixing the bug<
This is because our changes have correctly removed several superfluous task requires, which influences these assertions.
Inspect the build log for this test with cargo test --test top_down test_require_task
.
The second build now looks like:
-→ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- ? ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- ✓ /tmp/.tmp5YsYRC/in.txt
- ✓ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
+→ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ ✓ /tmp/.tmpLnu0X7/in.txt
+ ✓ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
← Ok(String("hello world!"))
In this second build, ReadFile
is no longer required, and is instead only checked.
This is correct, and does not make any assertions fail.
The third build now looks like:
-→ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- ? ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- ✗ /tmp/.tmp5YsYRC/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098541, tv_nsec: 212562288 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098541, tv_nsec: 216562277 })))
- ▶ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- - /tmp/.tmp5YsYRC/in.txt
+→ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ ✗ /tmp/.tmpLnu0X7/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098937, tv_nsec: 525061429 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098937, tv_nsec: 529061455 })))
+ ▶ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ - /tmp/.tmpLnu0X7/in.txt
◀ Ok(String("!DLROW OLLEH"))
- ✗ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified) (old: Equals(Ok(String("HELLO WORLD!"))) ≠ new: Equals(Ok(String("!DLROW OLLEH"))))
- ▶ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- → ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified) (old: Equals(Ok(String("HELLO WORLD!"))) ≠ new: Equals(Ok(String("!DLROW OLLEH"))))
+ ▶ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ → ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
← Ok(String("!DLROW OLLEH"))
◀ Ok(String("!dlrow olleh"))
← Ok(String("!dlrow olleh"))
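The ✗ lines in the logs above compare output stamps: a task dependency recorded with an Equals stamp becomes inconsistent once re-checking the dependee yields a different output. As a minimal sketch (a simplified stand-in, not the book's exact stamper API):

```rust
// Minimal sketch (not the book's exact API): an `Equals` output stamp records
// a task's output, and a task dependency stays consistent only while the
// dependee keeps producing an equal output.
#[derive(Debug, Clone, PartialEq)]
enum OutputStamp<O> {
    Equals(O),
}

fn dependency_is_consistent<O: PartialEq>(old: &OutputStamp<O>, new_output: &O) -> bool {
    match old {
        OutputStamp::Equals(stamped) => stamped == new_output,
    }
}

fn main() {
    // Mirrors the log: the old stamp held "HELLO WORLD!", the new output differs,
    // so the dependent ToLower task must be re-executed.
    let old = OutputStamp::Equals(String::from("HELLO WORLD!"));
    assert!(dependency_is_consistent(&old, &String::from("HELLO WORLD!")));
    assert!(!dependency_is_consistent(&old, &String::from("!DLROW OLLEH")));
}
```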
diff --git a/3_min_sound/index.html b/3_min_sound/index.html
index 097bd35..24ebbce 100644
--- a/3_min_sound/index.html
+++ b/3_min_sound/index.html
@@ -230,13 +230,14 @@ Sessions
+The Ever-Changing Filesystem
One issue with this definition is that we do not control the filesystem: changes to the filesystem can happen at any time during the build.
Therefore, we would need to constantly check file dependencies for consistency, and we can never be sure that a task is really consistent!
-That makes incremental build infeasible.
-To solve that problem, we will introduce the concept of a build session in which we only check tasks for consistency once.
+That makes incremental builds infeasible.
+To solve that problem, we will introduce the concept of a build session in which we only check tasks for consistency once.
Once a task has been executed or checked, we don’t check it again during that session, solving the problem of constantly having to check file dependencies.
-A new session has to created to check those tasks again.
+A new session has to be created to check those tasks again.
+Therefore, sessions are typically short-lived, and are created whenever file changes should be detected again.
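The check-at-most-once rule described above can be sketched with a set of already-checked tasks; this is an illustrative toy under assumed names, not the book's actual Pie/Session implementation:

```rust
use std::collections::HashSet;

// Toy sketch of a build session (hypothetical types, not the book's real API):
// within one session, each task is checked for consistency at most once.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Task(&'static str);

struct Session {
    checked: HashSet<Task>, // tasks already checked or executed this session
    check_count: usize,     // number of real consistency checks performed
}

impl Session {
    fn new() -> Self {
        Session { checked: HashSet::new(), check_count: 0 }
    }

    fn require(&mut self, task: &Task) {
        if self.checked.contains(task) {
            return; // already consistent this session: no re-check
        }
        self.check_count += 1; // stand-in for checking file/task dependencies
        self.checked.insert(task.clone());
    }
}

fn main() {
    let task = Task("ReadFile");
    let mut session = Session::new();
    session.require(&task);
    session.require(&task); // free: checked earlier in the same session
    assert_eq!(session.check_count, 1);

    // A new, short-lived session re-checks, picking up file changes since then.
    let mut session = Session::new();
    session.require(&task);
    assert_eq!(session.check_count, 1);
}
```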
Integration Testing
In this chapter, we will show incrementality and correctness by integration testing.
However, this requires quite some setup, as testing incrementality requires checking whether tasks are executed or not.
diff --git a/gen/0_intro/1_setup/cargo.txt b/gen/0_intro/1_setup/cargo.txt
index a7dcc13..1110cda 100644
--- a/gen/0_intro/1_setup/cargo.txt
+++ b/gen/0_intro/1_setup/cargo.txt
@@ -1,2 +1,2 @@
Compiling pie v0.1.0 (/pie)
- Finished dev [unoptimized + debuginfo] target(s) in 0.06s
+ Finished dev [unoptimized + debuginfo] target(s) in 0.07s
diff --git a/gen/0_intro/1_setup/source.zip b/gen/0_intro/1_setup/source.zip
index c21dd50c7543e47fa53fa951ff623fcba98c3787..218bc1205ccb494b9b34c3647a69853ba52a7609 100644
GIT binary patch
diff --git a/gen/1_programmability/1_api/source.zip b/gen/1_programmability/1_api/source.zip
index 4d245dd1e3dc9fb4dd0b3f7f9af34703b4ba8f18..0f9e76e3a6f3afa3614e36f7c93912971731e50d 100644
GIT binary patch
diff --git a/gen/1_programmability/2_non_incremental/d_cargo.txt b/gen/1_programmability/2_non_incremental/d_cargo.txt
index 51b1c1c..074cd98 100644
--- a/gen/1_programmability/2_non_incremental/d_cargo.txt
+++ b/gen/1_programmability/2_non_incremental/d_cargo.txt
@@ -1,5 +1,5 @@
Compiling pie v0.1.0 (/pie)
- Finished test [unoptimized + debuginfo] target(s) in 0.29s
+ Finished test [unoptimized + debuginfo] target(s) in 0.37s
Running unittests src/lib.rs (target/debug/deps/pie-7f6c7927ea39bed5)
running 1 test
diff --git a/gen/1_programmability/2_non_incremental/source.zip b/gen/1_programmability/2_non_incremental/source.zip
index e981ff67cd04c4cd25865773ac4b17115315f25e..71ea89c679e090dcb780705bed80b3762edfec3a 100644
GIT binary patch
diff --git a/gen/2_incrementality/4_store/source.zip b/gen/2_incrementality/4_store/source.zip
index 7a80b4bdcfe5a1741c008f641caf25a6d4faa13c..1f02422c34f606bd24399af60d8c7c60d3bfd470 100644
GIT binary patch
diff --git a/gen/3_min_sound/1_session/source.zip b/gen/3_min_sound/1_session/source.zip
index a09262aabf3fa14fcc81f15f0efbb56657cfaeb9..1171139a3296c39f7ac0f099aae9074dd856031d 100644
GIT binary patch
diff --git a/gen/3_min_sound/2_tracker/i_writing_example.txt b/gen/3_min_sound/2_tracker/i_writing_example.txt
index f7d262b..192ea77 100644
--- a/gen/3_min_sound/2_tracker/i_writing_example.txt
+++ b/gen/3_min_sound/2_tracker/i_writing_example.txt
@@ -2,51 +2,51 @@
Finished dev [unoptimized + debuginfo] target(s) in 0.46s
Running `target/debug/examples/incremental`
A) New task: expect `read_task` to execute
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input.txt", Modified)
- ▶ ReadStringFromFile("/tmp/.tmpj0anAm/input.txt", Modified)
- - /tmp/.tmpj0anAm/input.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input.txt", Modified)
+ ▶ ReadStringFromFile("/tmp/.tmpfqmzZi/input.txt", Modified)
+ - /tmp/.tmpfqmzZi/input.txt
◀ Ok("Hi")
← Ok("Hi")
🏁
B) Reuse: expect no execution
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input.txt", Modified)
- ✓ /tmp/.tmpj0anAm/input.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input.txt", Modified)
+ ✓ /tmp/.tmpfqmzZi/input.txt
← Ok("Hi")
🏁
C) Inconsistent file dependency: expect `read_task` to execute
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input.txt", Modified)
- ✗ /tmp/.tmpj0anAm/input.txt (old: Modified(Some(SystemTime { tv_sec: 1701098532, tv_nsec: 144588746 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098532, tv_nsec: 148588729 })))
- ▶ ReadStringFromFile("/tmp/.tmpj0anAm/input.txt", Modified)
- - /tmp/.tmpj0anAm/input.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input.txt", Modified)
+ ✗ /tmp/.tmpfqmzZi/input.txt (old: Modified(Some(SystemTime { tv_sec: 1701098929, tv_nsec: 269006500 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098929, tv_nsec: 273006527 })))
+ ▶ ReadStringFromFile("/tmp/.tmpfqmzZi/input.txt", Modified)
+ - /tmp/.tmpfqmzZi/input.txt
◀ Ok("Hello")
← Ok("Hello")
🏁
D) Different tasks: expect `read_task_b_modified` and `read_task_b_exists` to execute
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Modified)
- ▶ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Modified)
- - /tmp/.tmpj0anAm/input_b.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Modified)
+ ▶ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Modified)
+ - /tmp/.tmpfqmzZi/input_b.txt
◀ Ok("Test")
← Ok("Test")
🏁
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Exists)
- ▶ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Exists)
- - /tmp/.tmpj0anAm/input_b.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Exists)
+ ▶ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Exists)
+ - /tmp/.tmpfqmzZi/input_b.txt
◀ Ok("Test")
← Ok("Test")
🏁
E) Different stampers: expect only `read_task_b_modified` to execute
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Modified)
- ✗ /tmp/.tmpj0anAm/input_b.txt (old: Modified(Some(SystemTime { tv_sec: 1701098532, tv_nsec: 148588729 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098532, tv_nsec: 152588715 })))
- ▶ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Modified)
- - /tmp/.tmpj0anAm/input_b.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Modified)
+ ✗ /tmp/.tmpfqmzZi/input_b.txt (old: Modified(Some(SystemTime { tv_sec: 1701098929, tv_nsec: 273006527 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098929, tv_nsec: 277006553 })))
+ ▶ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Modified)
+ - /tmp/.tmpfqmzZi/input_b.txt
◀ Ok("Test Test")
← Ok("Test Test")
🏁
-→ ReadStringFromFile("/tmp/.tmpj0anAm/input_b.txt", Exists)
- ✓ /tmp/.tmpj0anAm/input_b.txt
+→ ReadStringFromFile("/tmp/.tmpfqmzZi/input_b.txt", Exists)
+ ✓ /tmp/.tmpfqmzZi/input_b.txt
← Ok("Test")
🏁
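The difference between the Modified and Exists stampers exercised in step E above can be sketched as follows; the stamp representation here is an assumption for illustration, not the book's exact types:

```rust
use std::{fs, path::Path, time::SystemTime};

// Illustrative sketch of the two file stampers from the log above: a stamper
// turns a file's state into a comparable stamp, and a file dependency becomes
// inconsistent when a fresh stamp differs from the recorded one.
#[derive(Debug, PartialEq, Eq, Clone)]
enum FileStamp {
    Modified(Option<SystemTime>), // changes whenever the file is (re)written
    Exists(bool),                 // changes only when the file appears/disappears
}

fn modified_stamp(path: &Path) -> FileStamp {
    FileStamp::Modified(fs::metadata(path).ok().and_then(|m| m.modified().ok()))
}

fn exists_stamp(path: &Path) -> FileStamp {
    FileStamp::Exists(path.exists())
}

fn main() {
    let path = std::env::temp_dir().join("stamper_demo_input_b.txt");
    fs::write(&path, "Test").unwrap();
    let old_exists = exists_stamp(&path);
    let _old_modified = modified_stamp(&path);

    fs::write(&path, "Test Test").unwrap(); // overwrite: file still exists

    // The Exists stamp is unchanged by the overwrite, so `read_task_b_exists`
    // is not re-executed, while a Modified stamp would typically differ
    // (timestamp granularity permitting).
    assert_eq!(old_exists, exists_stamp(&path));
    fs::remove_file(&path).unwrap();
    assert_ne!(old_exists, exists_stamp(&path));
}
```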
diff --git a/gen/3_min_sound/2_tracker/source.zip b/gen/3_min_sound/2_tracker/source.zip
index f1700008432b4841768c70729a39bb8624fe0819..2cf8d7d718e214010cd022d350bbecef2291fc7a 100644
GIT binary patch
diff --git a/gen/3_min_sound/3_test/c_test_reuse_stdout.txt b/gen/3_min_sound/3_test/c_test_reuse_stdout.txt
index 7a6e38e..7a8ab8b 100644
--- a/gen/3_min_sound/3_test/c_test_reuse_stdout.txt
+++ b/gen/3_min_sound/3_test/c_test_reuse_stdout.txt
@@ -1,4 +1,4 @@
- Finished test [unoptimized + debuginfo] target(s) in 0.05s
+ Finished test [unoptimized + debuginfo] target(s) in 0.02s
Running tests/top_down.rs (target/debug/deps/top_down-e757e81b664b50ba)
running 1 test
diff --git a/gen/3_min_sound/3_test/source.zip b/gen/3_min_sound/3_test/source.zip
index 4b17143f500471af2603bb24686a01cbfcb90cb7..234f748139ae9c3cdd22e9fbb26b5aab7c2d9f65 100644
GIT binary patch
diff --git a/gen/3_min_sound/6_hidden_dep/source.zip b/gen/3_min_sound/6_hidden_dep/source.zip
index b76f428c71fefd74ae29715dd88d2d05f05317f2..f914d23a0d8dbe87706a24d54c03161260128b4a 100644
GIT binary patch
diff --git a/gen/3_min_sound/7_cycle/source.zip b/gen/3_min_sound/7_cycle/source.zip
index eed195c2984d1c52c849fec1d2c26fecb2146c66..99da0e405591f228039df2e023ee6270cf7dd98c 100644
GIT binary patch
diff --git a/gen/4_example/source.zip b/gen/4_example/source.zip
index c60539b53c8b6ead1f7bcab0dc2e3bdc3fe35f1d..52a81fab1b3d13cc9a6b54c19d7a5eee393d4ad4 100644
GIT binary patch
 Compiling pie v0.1.0 (/pie)
- Finished dev [unoptimized + debuginfo] target(s) in 0.06s
+ Finished dev [unoptimized + debuginfo] target(s) in 0.07s
If you’re using a Rust editor or IDE, it probably also has a mechanism for running cargo on your project.
You can of course use that in place of running cargo from a terminal.
@@ -641,7 +641,7 @@ Simple Test
Run the test by running cargo test
.
The output should look something like:
Compiling pie v0.1.0 (/pie)
- Finished test [unoptimized + debuginfo] target(s) in 0.29s
+ Finished test [unoptimized + debuginfo] target(s) in 0.37s
Running unittests src/lib.rs (target/debug/deps/pie-7f6c7927ea39bed5)
running 1 test
@@ -3511,7 +3511,7 @@ Reuse
assert_eq!(&output, "Hi");
Running with cargo run --example incremental
should produce output like:
Compiling pie v0.1.0 (/pie)
- Finished dev [unoptimized + debuginfo] target(s) in 0.35s
+ Finished dev [unoptimized + debuginfo] target(s) in 0.36s
Running `target/debug/examples/incremental`
A) New task: expect `read_task` to execute
Reading from input.txt with Modified stamper
@@ -3588,7 +3588,7 @@ Sessions
+The Ever-Changing Filesystem
One issue with this definition is that we do not control the filesystem: changes to the filesystem can happen at any time during the build.
Therefore, we would need to constantly check file dependencies for consistency, and we can never be sure that a task is really consistent!
-That makes incremental build infeasible.
-To solve that problem, we will introduce the concept of a build session in which we only check tasks for consistency once.
+That makes incremental builds infeasible.
+To solve that problem, we will introduce the concept of a build session in which we only check tasks for consistency once.
Once a task has been executed or checked, we don’t check it again during that session, solving the problem of constantly having to check file dependencies.
-A new session has to created to check those tasks again.
+A new session has to be created to check those tasks again.
+Therefore, sessions are typically short-lived, and are created whenever file changes should be detected again.
Integration Testing
In this chapter, we will show incrementality and correctness by integration testing.
However, this requires quite some setup, as testing incrementality requires checking whether tasks are executed or not.
@@ -4916,52 +4917,52 @@ No dependenci
Run a single test in the top_down
integration test file with: cargo test --test top_down test_reuse
The second command should result in something like:
- Finished test [unoptimized + debuginfo] target(s) in 0.05s
+ Finished test [unoptimized + debuginfo] target(s) in 0.02s
Running tests/top_down.rs (target/debug/deps/top_down-e757e81b664b50ba)
running 1 test
@@ -6326,23 +6327,23 @@ Manifest th
Inspect the build log with cargo test --test top_down test_no_superfluous_task_dependencies
.
The third (last) build log should look like this:
-→ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- ✗ /tmp/.tmpNP19gu/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098540, tv_nsec: 312564915 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098540, tv_nsec: 316564903 })))
- ▶ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- - /tmp/.tmpNP19gu/in.txt
+→ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ ✗ /tmp/.tmpFOBrn0/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098936, tv_nsec: 637055517 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098936, tv_nsec: 641055544 })))
+ ▶ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ - /tmp/.tmpFOBrn0/in.txt
◀ Ok(String("HeLLo, WorLd!"))
← Ok(String("HeLLo, WorLd!"))
- ✗ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
- ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- → ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
+ ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ → ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("HeLLo, WorLd!"))
- ✗ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
- ▶ ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified) (old: Equals(Ok(String("Hello, World!"))) ≠ new: Equals(Ok(String("HeLLo, WorLd!"))))
+ ▶ ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("HeLLo, WorLd!"))
◀ Ok(String("hello, world!"))
← Ok(String("hello, world!"))
@@ -6409,14 +6410,14 @@ Finding t
We only manifested the bug in the last test due to having a chain of 2 task dependencies, and by carefully controlling what is being executed and what is being checked.
Recall the second build in the test_no_superfluous_task_dependencies
test.
The build log for that build looks like:
-→ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)))
- → ToLower(ReadFile("/tmp/.tmpNP19gu/in.txt", Modified))
- ? ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- → ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
- ✓ /tmp/.tmpNP19gu/in.txt
+→ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ ▶ ToUpper(ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)))
+ → ToLower(ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ → ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
+ ✓ /tmp/.tmpFOBrn0/in.txt
← Ok(String("Hello, World!"))
- ✓ ReadFile("/tmp/.tmpNP19gu/in.txt", Modified)
+ ✓ ReadFile("/tmp/.tmpFOBrn0/in.txt", Modified)
← Ok(String("hello, world!"))
◀ Ok(String("HELLO, WORLD!"))
← Ok(String("HELLO, WORLD!"))
@@ -6695,24 +6696,24 @@ Fixing the bug<
This is because our changes have correctly removed several superfluous task requires, which influences these assertions.
Inspect the build log for this test with cargo test --test top_down test_require_task
.
The second build now looks like:
-→ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- ? ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- ✓ /tmp/.tmp5YsYRC/in.txt
- ✓ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
+→ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ ✓ /tmp/.tmpLnu0X7/in.txt
+ ✓ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
← Ok(String("hello world!"))
In this second build, ReadFile
is no longer required, and is instead only checked.
This is correct, and does not make any assertions fail.
The third build now looks like:
-→ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- ? ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- ✗ /tmp/.tmp5YsYRC/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098541, tv_nsec: 212562288 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098541, tv_nsec: 216562277 })))
- ▶ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
- - /tmp/.tmp5YsYRC/in.txt
+→ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ ? ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ ✗ /tmp/.tmpLnu0X7/in.txt (old: Modified(Some(SystemTime { tv_sec: 1701098937, tv_nsec: 525061429 })) ≠ new: Modified(Some(SystemTime { tv_sec: 1701098937, tv_nsec: 529061455 })))
+ ▶ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
+ - /tmp/.tmpLnu0X7/in.txt
◀ Ok(String("!DLROW OLLEH"))
- ✗ ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified) (old: Equals(Ok(String("HELLO WORLD!"))) ≠ new: Equals(Ok(String("!DLROW OLLEH"))))
- ▶ ToLower(ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified))
- → ReadFile("/tmp/.tmp5YsYRC/in.txt", Modified)
+ ✗ ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified) (old: Equals(Ok(String("HELLO WORLD!"))) ≠ new: Equals(Ok(String("!DLROW OLLEH"))))
+ ▶ ToLower(ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified))
+ → ReadFile("/tmp/.tmpLnu0X7/in.txt", Modified)
← Ok(String("!DLROW OLLEH"))
◀ Ok(String("!dlrow olleh"))
← Ok(String("!dlrow olleh"))
diff --git a/searchindex.js b/searchindex.js
index f61cf5c..841211a 100644
--- a/searchindex.js
+++ b/searchindex.js
@@ -1 +1 @@
-Object.assign(window.search, {"doc_urls":["0_intro/index.html#build-your-own-programmatic-incremental-build-system","0_intro/index.html#motivation","0_intro/index.html#pie-a-programmatic-incremental-build-system-in-rust","0_intro/index.html#feedback--contributing","0_intro/1_setup/index.html#setup","0_intro/1_setup/index.html#rust","0_intro/1_setup/index.html#rust-editor--ide","0_intro/1_setup/index.html#creating-a-new-rust-project","0_intro/1_setup/index.html#source-control-optional-but-recommended","1_programmability/index.html#programmability","1_programmability/1_api/index.html#programmable-build-system-api","1_programmability/1_api/index.html#api-implementation","1_programmability/2_non_incremental/index.html#non-incremental-context","1_programmability/2_non_incremental/index.html#context-module","1_programmability/2_non_incremental/index.html#implementation","1_programmability/2_non_incremental/index.html#simple-test","1_programmability/2_non_incremental/index.html#test-with-multiple-tasks","2_incrementality/index.html#introduction","2_incrementality/1_require_file/index.html#requiring-files","2_incrementality/1_require_file/index.html#filesystem-utilities","2_incrementality/1_require_file/index.html#create-the-dev_shared-package","2_incrementality/1_require_file/index.html#testing-filesystem-utilities","2_incrementality/1_require_file/index.html#implement-require_file","2_incrementality/2_stamp/index.html#stamps","2_incrementality/2_stamp/index.html#file-stamps","2_incrementality/2_stamp/index.html#task-output-stamps","2_incrementality/2_stamp/index.html#tests","2_incrementality/2_stamp/index.html#testing-with-file-modified-time-correctly","2_incrementality/2_stamp/index.html#stamps-in-context","2_incrementality/3_dependency/index.html#dynamic-dependencies","2_incrementality/3_dependency/index.html#file-dependencies","2_incrementality/3_dependency/index.html#task-dependencies","2_incrementality/3_dependency/index.html#dependency-enum","2_incrementality/3_dep
endency/index.html#tests","2_incrementality/4_store/index.html#dependency-graph-store","2_incrementality/4_store/index.html#store-basics","2_incrementality/4_store/index.html#graph-nodes","2_incrementality/4_store/index.html#task-outputs","2_incrementality/4_store/index.html#dependencies","2_incrementality/4_store/index.html#resetting-tasks","2_incrementality/4_store/index.html#tests","2_incrementality/4_store/index.html#testing-file-mapping","2_incrementality/4_store/index.html#testing-task-mapping","2_incrementality/4_store/index.html#testing-task-outputs","2_incrementality/4_store/index.html#testing-dependencies","2_incrementality/4_store/index.html#testing-task-reset","2_incrementality/5_context/index.html#top-down-context","2_incrementality/5_context/index.html#top-down-context-basics","2_incrementality/5_context/index.html#requiring-files","2_incrementality/5_context/index.html#requiring-tasks","2_incrementality/5_context/index.html#checking-tasks","2_incrementality/6_example/index.html#incrementality-example","2_incrementality/6_example/index.html#readstringfromfile-task","2_incrementality/6_example/index.html#exploring-incrementality","2_incrementality/6_example/index.html#reuse","2_incrementality/6_example/index.html#inconsistent-file-dependency","2_incrementality/6_example/index.html#different-tasks","2_incrementality/6_example/index.html#same-file-different-stampers","3_min_sound/index.html#testing-incrementality-and-correctness","3_min_sound/1_session/index.html#incrementality-with-sessions","3_min_sound/1_session/index.html#pie-and-session","3_min_sound/1_session/index.html#update-topdowncontext","3_min_sound/1_session/index.html#update-session","3_min_sound/1_session/index.html#update-incremental-example","3_min_sound/1_session/index.html#incrementality","3_min_sound/2_tracker/index.html#tracking-build-events","3_min_sound/2_tracker/index.html#tracker-trait","3_min_sound/2_tracker/index.html#no-op-tracker","3_min_sound/2_tracker/index.html#using-the-tr
acker-trait","3_min_sound/2_tracker/index.html#implement-writing-tracker","3_min_sound/2_tracker/index.html#implement-event-tracker","3_min_sound/2_tracker/index.html#implement-composite-tracker","3_min_sound/3_test/index.html#integration-testing","3_min_sound/3_test/index.html#testing-utilities","3_min_sound/3_test/index.html#first-integration-test","3_min_sound/3_test/index.html#testing-incrementality-and-soundness","3_min_sound/3_test/index.html#no-dependencies","3_min_sound/3_test/index.html#testing-file-dependencies","3_min_sound/3_test/index.html#testing-task-dependencies","3_min_sound/4_fix_task_dep/index.html#fix-superfluous-task-dependency","3_min_sound/4_fix_task_dep/index.html#add-toupper-task","3_min_sound/4_fix_task_dep/index.html#test-case-setup","3_min_sound/4_fix_task_dep/index.html#manifest-the-bug","3_min_sound/4_fix_task_dep/index.html#finding-the-cause","3_min_sound/4_fix_task_dep/index.html#fixing-the-bug","3_min_sound/5_overlap/index.html#prevent-overlapping-file-writes","3_min_sound/5_overlap/index.html#add-writefile-and-sequence-tasks","3_min_sound/5_overlap/index.html#test-to-showcase-the-issue","3_min_sound/5_overlap/index.html#implement-provided-files","3_min_sound/5_overlap/index.html#add-providefile-variant-to-dependency","3_min_sound/5_overlap/index.html#update-tracker-and-implementations","3_min_sound/5_overlap/index.html#add-add_file_provide_dependency-method-to-store","3_min_sound/5_overlap/index.html#add-methods-to-context-and-implementations","3_min_sound/5_overlap/index.html#detect-and-disallow-overlapping-provided-files","3_min_sound/6_hidden_dep/index.html#prevent-hidden-dependencies","3_min_sound/6_hidden_dep/index.html#test-to-showcase-the-issue","3_min_sound/6_hidden_dep/index.html#prevent-hidden-dependencies-1","3_min_sound/6_hidden_dep/index.html#add-store-methods","3_min_sound/6_hidden_dep/index.html#add-checks-to-topdowncontext","3_min_sound/6_hidden_dep/index.html#fixing-and-improving-the-tests","3_min_sound/7_cycle/inde
x.html#prevent-cycles","3_min_sound/7_cycle/index.html#add-cyclic-testing-tasks","3_min_sound/7_cycle/index.html#add-cycle-tests","3_min_sound/7_cycle/index.html#reserving-task-dependencies","4_example/index.html#example-interactive-parser-development","4_example/index.html#compiling-grammars-and-parsing","4_example/index.html#tasks","4_example/index.html#parse-cli-arguments","4_example/index.html#interactive-parser-development","4_example/index.html#ratatui-scaffolding","4_example/index.html#text-editor-buffer","4_example/index.html#drawing-and-updating-buffers","4_example/index.html#saving-buffers-and-providing-feedback","4_example/index.html#showing-the-build-log","4_example/index.html#conclusion","4_example/index.html#side-note-serialization","a_appendix/1_pie.html#pie-implementations--publications","a_appendix/1_pie.html#implementations","a_appendix/1_pie.html#publications","a_appendix/2_related_work.html#related-work","a_appendix/2_related_work.html#pluto","a_appendix/2_related_work.html#other-incremental-build-systems-with-dynamic-dependencies","a_appendix/2_related_work.html#shake","a_appendix/2_related_work.html#rattle","a_appendix/3_future_work.html#future-work"],"index":{"documentStore":{"docInfo":{"0":{"body":89,"breadcrumbs":6,"title":5},"1":{"body":569,"breadcrumbs":2,"title":1},"10":{"body":163,"breadcrumbs":9,"title":4},"100":{"body":69,"breadcrumbs":7,"title":2},"101":{"body":31,"breadcrumbs":9,"title":4},"102":{"body":173,"breadcrumbs":8,"title":3},"103":{"body":386,"breadcrumbs":8,"title":3},"104":{"body":150,"breadcrumbs":8,"title":4},"105":{"body":455,"breadcrumbs":7,"title":3},"106":{"body":330,"breadcrumbs":5,"title":1},"107":{"body":443,"breadcrumbs":7,"title":3},"108":{"body":86,"breadcrumbs":7,"title":3},"109":{"body":374,"breadcrumbs":6,"title":2},"11":{"body":608,"breadcrumbs":7,"title":2},"110":{"body":485,"breadcrumbs":7,"title":3},"111":{"body":211,"breadcrumbs":7,"title":3},"112":{"body":140,"breadcrumbs":8,"title":4},"113":{"body":15
9,"breadcrumbs":7,"title":3},"114":{"body":281,"breadcrumbs":5,"title":1},"115":{"body":80,"breadcrumbs":7,"title":3},"116":{"body":0,"breadcrumbs":6,"title":3},"117":{"body":291,"breadcrumbs":4,"title":1},"118":{"body":391,"breadcrumbs":4,"title":1},"119":{"body":26,"breadcrumbs":4,"title":2},"12":{"body":21,"breadcrumbs":7,"title":3},"120":{"body":151,"breadcrumbs":3,"title":1},"121":{"body":56,"breadcrumbs":7,"title":5},"122":{"body":154,"breadcrumbs":3,"title":1},"123":{"body":757,"breadcrumbs":3,"title":1},"124":{"body":47,"breadcrumbs":4,"title":2},"13":{"body":131,"breadcrumbs":6,"title":2},"14":{"body":138,"breadcrumbs":5,"title":1},"15":{"body":278,"breadcrumbs":6,"title":2},"16":{"body":468,"breadcrumbs":7,"title":3},"17":{"body":240,"breadcrumbs":2,"title":1},"18":{"body":249,"breadcrumbs":5,"title":2},"19":{"body":273,"breadcrumbs":5,"title":2},"2":{"body":109,"breadcrumbs":7,"title":6},"20":{"body":200,"breadcrumbs":6,"title":3},"21":{"body":162,"breadcrumbs":6,"title":3},"22":{"body":43,"breadcrumbs":5,"title":2},"23":{"body":48,"breadcrumbs":3,"title":1},"24":{"body":140,"breadcrumbs":4,"title":2},"25":{"body":145,"breadcrumbs":5,"title":3},"26":{"body":131,"breadcrumbs":3,"title":1},"27":{"body":125,"breadcrumbs":7,"title":5},"28":{"body":79,"breadcrumbs":4,"title":2},"29":{"body":71,"breadcrumbs":5,"title":2},"3":{"body":41,"breadcrumbs":3,"title":2},"30":{"body":292,"breadcrumbs":5,"title":2},"31":{"body":362,"breadcrumbs":5,"title":2},"32":{"body":183,"breadcrumbs":5,"title":2},"33":{"body":289,"breadcrumbs":4,"title":1},"34":{"body":126,"breadcrumbs":7,"title":3},"35":{"body":225,"breadcrumbs":6,"title":2},"36":{"body":501,"breadcrumbs":6,"title":2},"37":{"body":134,"breadcrumbs":6,"title":2},"38":{"body":281,"breadcrumbs":5,"title":1},"39":{"body":83,"breadcrumbs":6,"title":2},"4":{"body":0,"breadcrumbs":3,"title":1},"40":{"body":12,"breadcrumbs":5,"title":1},"41":{"body":197,"breadcrumbs":7,"title":3},"42":{"body":114,"breadcrumbs":7,"title":3}
,"43":{"body":131,"breadcrumbs":7,"title":3},"44":{"body":263,"breadcrumbs":6,"title":2},"45":{"body":137,"breadcrumbs":7,"title":3},"46":{"body":16,"breadcrumbs":8,"title":3},"47":{"body":96,"breadcrumbs":9,"title":4},"48":{"body":100,"breadcrumbs":7,"title":2},"49":{"body":419,"breadcrumbs":7,"title":2},"5":{"body":64,"breadcrumbs":3,"title":1},"50":{"body":541,"breadcrumbs":7,"title":2},"51":{"body":11,"breadcrumbs":5,"title":2},"52":{"body":115,"breadcrumbs":5,"title":2},"53":{"body":95,"breadcrumbs":5,"title":2},"54":{"body":124,"breadcrumbs":4,"title":1},"55":{"body":45,"breadcrumbs":6,"title":3},"56":{"body":85,"breadcrumbs":5,"title":2},"57":{"body":207,"breadcrumbs":7,"title":4},"58":{"body":619,"breadcrumbs":6,"title":3},"59":{"body":138,"breadcrumbs":7,"title":2},"6":{"body":82,"breadcrumbs":5,"title":3},"60":{"body":261,"breadcrumbs":7,"title":2},"61":{"body":58,"breadcrumbs":7,"title":2},"62":{"body":50,"breadcrumbs":7,"title":2},"63":{"body":97,"breadcrumbs":8,"title":3},"64":{"body":130,"breadcrumbs":6,"title":1},"65":{"body":95,"breadcrumbs":9,"title":3},"66":{"body":369,"breadcrumbs":8,"title":2},"67":{"body":89,"breadcrumbs":8,"title":2},"68":{"body":183,"breadcrumbs":9,"title":3},"69":{"body":724,"breadcrumbs":9,"title":3},"7":{"body":283,"breadcrumbs":6,"title":4},"70":{"body":912,"breadcrumbs":9,"title":3},"71":{"body":184,"breadcrumbs":9,"title":3},"72":{"body":0,"breadcrumbs":7,"title":2},"73":{"body":539,"breadcrumbs":7,"title":2},"74":{"body":217,"breadcrumbs":8,"title":3},"75":{"body":4,"breadcrumbs":8,"title":3},"76":{"body":169,"breadcrumbs":6,"title":1},"77":{"body":125,"breadcrumbs":8,"title":3},"78":{"body":800,"breadcrumbs":8,"title":3},"79":{"body":42,"breadcrumbs":11,"title":4},"8":{"body":65,"breadcrumbs":6,"title":4},"80":{"body":18,"breadcrumbs":10,"title":3},"81":{"body":70,"breadcrumbs":10,"title":3},"82":{"body":188,"breadcrumbs":9,"title":2},"83":{"body":251,"breadcrumbs":9,"title":2},"84":{"body":341,"breadcrumbs":9,"title":
2},"85":{"body":130,"breadcrumbs":11,"title":4},"86":{"body":100,"breadcrumbs":11,"title":4},"87":{"body":386,"breadcrumbs":10,"title":3},"88":{"body":74,"breadcrumbs":10,"title":3},"89":{"body":26,"breadcrumbs":11,"title":4},"9":{"body":56,"breadcrumbs":2,"title":1},"90":{"body":39,"breadcrumbs":10,"title":3},"91":{"body":39,"breadcrumbs":11,"title":4},"92":{"body":70,"breadcrumbs":11,"title":4},"93":{"body":490,"breadcrumbs":12,"title":5},"94":{"body":132,"breadcrumbs":9,"title":3},"95":{"body":222,"breadcrumbs":9,"title":3},"96":{"body":147,"breadcrumbs":9,"title":3},"97":{"body":70,"breadcrumbs":9,"title":3},"98":{"body":78,"breadcrumbs":9,"title":3},"99":{"body":1176,"breadcrumbs":9,"title":3}},"docs":{"0":{"body":"This is a programming tutorial where you will build your own programmatic incremental build system in Rust. The primary goal of this tutorial is to provide understanding of programmatic incremental build systems through implementation and experimentation. Although the tutorial uses Rust, you don’t need to be a Rust expert to follow it. A secondary goal of this tutorial is to teach more about Rust through implementation and experimentation, given that you already have programming experience (in another language) and are willing to learn. Therefore, all Rust code is available, and I try to explain and link to the relevant Rust book chapters as much as possible. This is of course not a full tutorial or book on Rust. For that, I can recommend the excellent The Rust Programming Language book. However, if you like to learn through examples and experimentation, or already know Rust basics and want to practice, this might be a fun programming tutorial for you! 
We will first motivate programmatic incremental build systems.","breadcrumbs":"Introduction » Build your own Programmatic Incremental Build System","id":"0","title":"Build your own Programmatic Incremental Build System"},"1":{"body":"A programmatic incremental build system is a mix between an incremental build system and an incremental computation system, with the following key properties: Programmatic : Build scripts are regular programs written in a programming language, where parts of the build script implement an API from the build system. This enables build authors to write incremental builds with the full expressiveness of the programming language. Incremental : Builds are truly incremental – only the parts of a build that are affected by changes are executed. Correct : Builds are fully correct – all parts of the build that are affected by changes are executed. Builds are free of glitches: only up-to-date (consistent) data is observed. Automatic : The build system takes care of incrementality and correctness. Build authors do not have to manually implement incrementality. Instead, they only have to explicitly declare dependencies . Multipurpose : The same build script can be used for incremental batch builds in a terminal, but also for live feedback in an interactive environment such as an IDE. For example, a compiler implemented in this build system can provide incremental batch compilation but also incremental editor services such as syntax highlighting or code completion. 
Teaser Toy Example As a small teaser, here is a simplified version of a programmatic incremental toy build script that copies a text file by reading and writing: struct ReadFile { file: PathBuf\n}\nimpl Task for ReadFile { fn execute(&self, context: &mut C) -> Result { context.require_file(&self.file)?; fs::read_to_string(&self.file) }\n} struct WriteFile { task: T, file: PathBuf\n}\nimpl Task for WriteFile { fn execute(&self, context: &mut C) -> Result<(), io::Error> { let string: String = context.require_task(&self.task)?; fs::write(&self.file, string.as_bytes())?; context.provide_file(&self.file) }\n} fn main() { let read_task = ReadFile { file: PathBuf::from(\"in.txt\") }; let write_task = WriteFile { task: read_task, file: PathBuf::from(\"out.txt\") }; Pie::default().new_session().require(&write_task);\n} The unit of computation in a programmatic incremental build system is a task . A task is kind of like a closure, a function along with its inputs that can be executed, but incremental. For example, the ReadFile task carries the file path it reads from. When we execute the task, it reads from the file and returns its text as a string. However, due to incrementality, we mark the file as a require_file dependency through context, such that this task is only re-executed when the file changes! Note that this file read dependency is created while the task is executing . We call these dynamic dependencies . This is one of the main benefits of programmatic incremental build systems: you create dependencies while the build is executing , instead of having to declare them upfront! Dynamic dependencies are also created between tasks. For example, WriteFile carries a task as input, which it requires with context.require_task to retrieve the text for writing to a file. We’ll cover how this works later on in the tutorial. For now, let’s zoom back out to the motivation of programmatic incremental build systems. 
Back to Motivation I prefer writing builds in a programming language like this, over having to encode a build into a YAML file with underspecified semantics, and over having to learn and use a new build scripting language with limited tooling. By programming builds , I can reuse my knowledge of the programming language, I get help from the compiler and IDE that I’d normally get while programming, I can modularize and reuse parts of my build as a library, and can use other programming language features such as unit testing, integration testing, benchmarking, etc. Programmatic builds do not exclude declarativity , however. You can layer declarative features on top of programmatic builds, such as declarative configuration files that determine what should be built without having to specify how things are built. For example, you could write a task like the one from the example, which reads and parses a config file, and then dispatch tasks that build required things. Therefore, programmatic builds are useful for both small one-off builds, and for creating larger incremental build systems that work with a lot of user inputs. Dynamic dependencies enable creating precise dependencies, without requiring staging , as is often found in build systems with static dependencies. For example, dynamic dependencies in Make requires staging: generate new makefiles and recursively execute them, which is tedious and error-prone. Gradle has a two-staged build process: first configure the task graph, then incrementally execute it. In the execution stage, you cannot modify dependencies or create new tasks. Therefore, more work needs to be done in the configuration stage, which is not (fully) incrementalized. Dynamic dependencies solve these problems by doing away with staging! Finally, precise dynamic dependencies enable incrementality but also correctness. A task is re-executed when one or more of its dependencies become inconsistent. 
For example, the WriteFile task from the example is re-executed when the task dependency returns different text, or when the file it writes to is modified or deleted. This is both incremental and correct. Disadvantages Of course, programmatic incremental build systems also have some disadvantages. These disadvantages become clearer during the tutorial, but I want to list them here to be up-front about it: The build system is more complicated, but hopefully this tutorial can help mitigate some of that by understanding the key ideas through implementation and experimentation. Some correctness properties are checked while building. Therefore, you need to test your builds to try to catch these issues before they reach users. However, I think that testing builds is something you should do regardless of the build system, to be more confident about the correctness of your build. More tracking is required at runtime compared to non-programmatic build systems. However, in our experience, the overhead is not excessive unless you try to do very fine-grained incrementalization. For fine-grained incrementalization, incremental computing approaches are better suited.","breadcrumbs":"Introduction » Motivation","id":"1","title":"Motivation"},"10":{"body":"In this section, we will program the core API of the programmatic incremental build system. Although we are primarily concerned with programmability in this chapter, we must design the API to support incrementality! The unit of computation in a programmatic build system is a task . A task is kind of like a closure: a value that can be executed to produce its output, but incremental . To provide incrementality, we also need to keep track of the dynamic dependencies that tasks make while they are executing. Therefore, tasks are executed under an incremental build context , enabling them to create these dynamic dependencies. 
Tasks require files through the build context, creating a dynamic file dependency, ensuring the task gets re-executed when that file changes. Tasks also require other tasks through the build context, asking the build context to provide the consistent (most up-to-date) output of that task, and creating a dynamic task dependency to it. It is then up to the build context to check if it actually needs to execute that required task. If the required task is already consistent, the build context can just return the cached output of that task. Otherwise, the build context executes the required task, caches its output, and returns the output to the requiring task. A non-incremental context can naively execute tasks without checking. Because tasks require other tasks through the context, and the context selectively executes tasks, the definition of task and context is mutually recursive. Context In this tutorial, we will be using the words context , build context , and build system interchangeably, typically using just context as it is concise. Let’s make tasks and contexts more concrete by defining them in code.","breadcrumbs":"Programmability » Programmable Build System API » Programmable Build System API","id":"10","title":"Programmable Build System API"},"100":{"body":"In this section, we will fix the remaining correctness issue with cyclic tasks. Didn’t we already catch dependency graph cycles in the Incremental Top-Down Context section? Yes, you remembered right! However, there is a corner case that we didn’t handle. The issue is that we add a task dependency to the dependency graph only after the task has finished executing . We do this because we need the output from executing the task to create the dependency. But what would happen if we made a task that just requires itself? Let’s figure that out in this section, in which we will: Add cyclic tasks to the testing tasks. Create tests to showcase the cyclic task execution problem. 
Prevent cycles by reserving a task dependency before executing the task.","breadcrumbs":"Testing Incrementality & Correctness » Prevent Cycles » Prevent Cycles","id":"100","title":"Prevent Cycles"},"101":{"body":"We don’t have any testing tasks to easily construct different kinds of cycles yet, so we will add those first. Modify pie/tests/common/mod.rs: We add the RequireSelf task which directly requires itself. We also add the RequireA and RequireB tasks which require each other in a cycle. We want to prevent both of these kinds of cycles.","breadcrumbs":"Testing Incrementality & Correctness » Prevent Cycles » Add cyclic testing tasks","id":"101","title":"Add cyclic testing tasks"},"102":{"body":"Now add tests that check whether requiring these tasks (correctly) panics due to cycles. Modify pie/tests/top_down.rs: // Cycle tests #[test]\n#[should_panic(expected = \"Cyclic task dependency\")]\nfn require_self_panics() { let mut pie = test_pie(); pie.require(&RequireSelf).unwrap();\n} #[test]\n#[should_panic(expected = \"Cyclic task dependency\")]\nfn require_cycle_a_panics() { let mut pie = test_pie(); pie.require(&RequireA).unwrap();\n} #[test]\n#[should_panic(expected = \"Cyclic task dependency\")]\nfn require_cycle_b_panics() { let mut pie = test_pie(); pie.require(&RequireB).unwrap();\n} These test are simple: require the task and that’s it. Which of these tests will correctly result in a cyclic task dependency panic? Infinite Recursion Running these tests will result in infinite recursion, but should quickly cause a stack overflow. However, be sure to save everything in the event your computer becomes unresponsive. Expected Test Failure Tests require_self_panics, require_cycle_a_panics, and require_cycle_b_panics will fail as expected, which we will fix in this section! Run the tests with cargo test, or skip running them (and comment them out) if you don’t want to waste battery life running infinite recursions. These tests will infinitely recurse and thus fail. 
The issue is that we only add a dependency to the dependency graph after the task has executed . We do this because we need the output from the executing task to create the dependency. Therefore, no dependencies are ever added to the dependency graph in these tests, because a task never finishes executing! This in turn causes the cycle detection to never trigger, because there is no cycle in the dependency graph. To fix this, we need to add task dependencies to the dependency graph before we execute the task . But we cannot do this, because we need the output of the task to create the task dependency. Therefore, we need to first reserve a task dependency in the dependency graph, which creates an edge in the dependency graph without any attached data.","breadcrumbs":"Testing Incrementality & Correctness » Prevent Cycles » Add cycle tests","id":"102","title":"Add cycle tests"},"103":{"body":"To support reserved task dependencies, we will first add a ReservedRequireTask variant to Dependency. Modify pie/src/dependency.rs: The ReservedRequireTask variant has no data, as this variant needs to be creatable before we have the output of the task. A reserved task dependency cannot be consistency checked, so we panic if this occurs, but this will never occur if our implementation is correct. A reserved task dependency is added from the current executing task that is being made consistent, and we never check a task that is already consistent this session. As long as the reserved task dependency is updated to a real RequireTask dependency within this session, we will never check a reserved task dependency. Properties of the Build System Again, it is great that we have defined these kind of properties/invariants of the build system, so we can informally reason about whether certain situations occur or not. 
This change breaks the WritingTracker, which we will update in pie/src/tracker/writing.rs: Since reserved task dependencies are never checked, we can just ignore them in the check_dependency_end method. Now we update the Store with a method to reserve a task dependency, and a method to update a reserved task dependency to a real one. Modify pie/src/store.rs: We rename add_task_require_dependency to reserve_task_require_dependency, change it to add Dependency::ReservedRequireTask as edge dependency data, and remove the dependency parameter since we don’t need that anymore. Note that this method still creates the dependency edge, and can still create cycles which need to be handled. This is good, because this allows us to catch cycles before we start checking and potentially executing a task. For example, this will catch the self-cycle created by TestTask::RequireSelf because graph.add_edge returns a cycle error on a self-cycle. We add the update_task_require_dependency method to update a reserved task dependency to a real one. As per usual, we will update the tests. Modify pie/src/store.rs: We update test_dependencies to reserve and update task dependencies, and rename test_add_task_require_dependency_panics. We add 2 tests for update_task_require_dependency. The store now handles reserved task dependencies. Now we need to use them in TopDownContext. Task dependencies are made in require_file_with_stamper, so we just need to update that method. Modify pie/src/context/top_down.rs: Before we call make_task_consistent and potentially execute a task, we first reserve the task dependency (if a task is currently executing). Since reserve_task_require_dependency now can make cycles, we move the cycle check to the start. As mentioned before, this will catch self cycles. Additionally, this will add a dependency edge to the graph that is needed to catch future cycles, such as the cycle between TestTask::RequireA and TestTask::RequireB. 
For example, TestTask::RequireA executes and requires TestTask::RequireB, thus we reserve an edge from A to B. Then, we require and execute B, which requires A, thus we reserve an edge from B to A, and the cycle is detected! If the edge from A to B was not in the graph before executing B, we would not catch this cycle. After the call to make_task_consistent we have the consistent output of the task, and we update the reserved dependency to a real one with update_task_require_dependency. This will correctly detect all cyclic tasks. Confirm your changes work and all tests now succeed with cargo test. Fixed Tests Tests require_self_panics, require_cycle_a_panics, and require_cycle_b_panics should now succeed. We don’t need to write additional tests, as these 3 tests capture the kind of cycles we wanted to fix. Additional positive tests are not really needed, as the other tests cover the fact that cycles are only detected when there actually is one. This is the last correctness issue that needed to be solved. Our programmatic incremental build system is now truly incremental (minimal) and correct (sound)! There are of course certain caveats, such as non-canonical paths and symbolic links which need to be solved for additional correctness. We will not do that in this tutorial, but feel free to solve those issues (and write tests for them!). Download source code You can download the source files up to this point . In the next chapter, we will implement a “parser development” application using PIE, which can do batch builds but also provides an interactive parser development environment, using a single set of tasks.","breadcrumbs":"Testing Incrementality & Correctness » Prevent Cycles » Reserving task dependencies","id":"103","title":"Reserving task dependencies"},"104":{"body":"To demonstrate what can be done with the programmatic incremental build system we just created, we will create a simple “parser development” example. 
In this example, we can develop a grammar for a new (programming) language, and test that grammar against several example files written in the new language. It will have both a batch mode and an interactive mode. In the batch mode, the grammar is checked and compiled, the example program files are parsed with the grammar, and the results are printed to the terminal. The interactive mode will start up an interactive editor in which we can develop and test the grammar interactively. We will develop tasks to perform grammar compilation and parsing, and incrementally execute them with PIE. Both batch and interactive mode will use the same tasks! We will use pest as the parser framework, because it is written in Rust and can be easily embedded into an application. Pest uses Parsing Expression Grammars (PEGs) which are easy to understand, which is also good for this example. For the GUI, we will use Ratatui, which is a cross-platform terminal GUI framework, along with tui-textarea for a text editor widget. We could use a more featured GUI framework like egui, but for this example we’ll keep it simple and runnable in a terminal. As a little teaser, this is what the interactive mode looks like: We will continue as follows: Implement compilation of pest grammars and parsing of text with the compiled grammar. Create tasks for grammar compilation and parsing. Parse CLI arguments and run these tasks in a non-interactive setting. Create a terminal GUI for interactive parser development.","breadcrumbs":"Example: Interactive Parser Development » Example: Interactive Parser Development","id":"104","title":"Example: Interactive Parser Development"},"105":{"body":"First we will implement compilation of pest grammars, and parsing text with a compiled grammar. A pest grammar contains named rules that describe how to parse something. 
For example, number = { ASCII_DIGIT+ } means that a number is parsed by parsing 1 or more ASCII_DIGIT, with ASCII_DIGIT being a builtin rule that parses ASCII numbers 0-9. Add the following dev-dependencies to pie/Cargo.toml: pest is the library for parsing with pest grammars. pest_meta validates, optimises, and compiles pest grammars. pest_vm provides parsing with a compiled pest grammar, without having to generate Rust code for grammars, enabling interactive use. Create the pie/examples/parser_dev/main.rs file and add an empty main function to it: fn main() { } Confirm the example can be run with cargo run --example parser_dev. Let’s implement the pest grammar compiler and parser. Add parse as a public module to pie/examples/parser_dev/main.rs: We will add larger chunks of code from now on, compared to the rest of the tutorial, to keep things going. Create the pie/examples/parser_dev/parse.rs file and add to it: use std::collections::HashSet;\nuse std::fmt::Write; /// Parse programs with a compiled pest grammar.\n#[derive(Clone, Eq, PartialEq, Debug)]\npub struct CompiledGrammar { rules: Vec, rule_names: HashSet,\n} impl CompiledGrammar { /// Compile the pest grammar from `grammar_text`, using `path` to annotate errors. Returns a [`Self`] instance. /// /// # Errors /// /// Returns `Err(error_string)` when compiling the grammar fails. 
pub fn new(grammar_text: &str, path: Option<&str>) -> Result { match pest_meta::parse_and_optimize(grammar_text) { Ok((builtin_rules, rules)) => { let mut rule_names = HashSet::with_capacity(builtin_rules.len() + rules.len()); rule_names.extend(builtin_rules.iter().map(|s| s.to_string())); rule_names.extend(rules.iter().map(|s| s.name.clone())); Ok(Self { rules, rule_names }) }, Err(errors) => { let mut error_string = String::new(); for mut error in errors { if let Some(path) = path.as_ref() { error = error.with_path(path); } error = error.renamed_rules(pest_meta::parser::rename_meta_rule); let _ = writeln!(error_string, \"{}\", error); // Ignore error: writing to String cannot fail. } Err(error_string) } } }\n} The CompiledGrammar struct contains a parsed pest grammar, consisting of a Vec of optimised parsing rules, and a hash set of rule names. We will use this struct as an output of a task in the future, so we derive Clone, Eq, and Debug. The new function takes the text of a pest grammar, and an optional file path for error reporting, and creates a CompiledGrammar or an error in the form of a String. We’re using Strings as errors in this example for simplicity. We compile the grammar with pest_meta::parse_and_optimize. If successful, we gather the rule names into a hash set and return a CompiledGrammar. If not, multiple errors are returned, which are first preprocessed with with_path and renamed_rules, and then written to a single String with writeln!, which is returned as the error. Now we implement parsing using a CompiledGrammar. Add the parse method to pie/examples/parser_dev/parse.rs: parse takes the text of the program to parse, the rule name to start parsing with, and an optional file path for error reporting. We first check whether rule_name exists by looking for it in self.rule_names, and return an error if it does not exist. We have to do this because pest_vm panics when the rule name does not exist, which would kill the entire program. 
If the rule name is valid, we create a pest_vm::Vm and parse. If successful, we get a pairs iterator that describes how the program was parsed, which is typically used to create an Abstract Syntax Tree (AST) in Rust code. However, for simplicity we just format the pairs as a String and return that. If not successful, we do the same as the previous function, but for a single error instead of multiple. Unfortunately we cannot store pest_vm::Vm in CompiledGrammar, because Vm implements neither Clone nor Eq. Therefore, we have to create a new Vm every time we parse, which has a small performance overhead, but that is fine for this example. To check whether this code does what we want, we’ll write a test for it (yes, you can add tests to examples in Rust!). Add to pie/examples/parser_dev/parse.rs: #[cfg(test)]\nmod tests { use super::*; #[test] fn test_compile_parse() -> Result<(), String> { // Grammar compilation failure. let result = CompiledGrammar::new(\"asd = { fgh } qwe = { rty }\", None); assert!(result.is_err()); println!(\"{}\", result.unwrap_err()); // Grammar that parses numbers. let compiled_grammar = CompiledGrammar::new(\"num = { ASCII_DIGIT+ }\", None)?; println!(\"{:?}\", compiled_grammar); // Parse failure let result = compiled_grammar.parse(\"a\", \"num\", None); assert!(result.is_err()); println!(\"{}\", result.unwrap_err()); // Parse failure due to non-existent rule. let result = compiled_grammar.parse(\"1\", \"asd\", None); assert!(result.is_err()); println!(\"{}\", result.unwrap_err()); // Parse success let result = compiled_grammar.parse(\"1\", \"num\", None); assert!(result.is_ok()); println!(\"{}\", result.unwrap()); Ok(()) }\n} We test grammar compilation failure and success, and parse failure and success. 
Run this test with cargo test --example parser_dev -- --show-output, which also shows what the returned Strings look like.","breadcrumbs":"Example: Interactive Parser Development » Compiling grammars and parsing","id":"105","title":"Compiling grammars and parsing"},"106":{"body":"Now we’ll implement tasks for compiling a grammar and parsing. Add task as a public module to pie/examples/parser_dev/main.rs: Create the pie/examples/parser_dev/task.rs file and add to it: use std::io::Read;\nuse std::path::{Path, PathBuf}; use pie::{Context, Task}; use crate::parse::CompiledGrammar; /// Tasks for compiling a grammar and parsing files with it.\n#[derive(Clone, Eq, PartialEq, Hash, Debug)]\npub enum Tasks { CompileGrammar { grammar_file_path: PathBuf }, Parse { compiled_grammar_task: Box<Tasks>, program_file_path: PathBuf, rule_name: String }\n} impl Tasks { /// Create a [`Self::CompileGrammar`] task that compiles the grammar in file `grammar_file_path`. pub fn compile_grammar(grammar_file_path: impl Into<PathBuf>) -> Self { Self::CompileGrammar { grammar_file_path: grammar_file_path.into() } } /// Create a [`Self::Parse`] task that uses the compiled grammar returned by requiring `compiled_grammar_task` to /// parse the program in file `program_file_path`, starting parsing with `rule_name`. pub fn parse( compiled_grammar_task: &Tasks, program_file_path: impl Into<PathBuf>, rule_name: impl Into<String> ) -> Self { Self::Parse { compiled_grammar_task: Box::new(compiled_grammar_task.clone()), program_file_path: program_file_path.into(), rule_name: rule_name.into() } }\n} /// Outputs for [`Tasks`].\n#[derive(Clone, Eq, PartialEq, Debug)]\npub enum Outputs { CompiledGrammar(CompiledGrammar), Parsed(Option<String>)\n} We create a Tasks enum with: A CompileGrammar variant for compiling a grammar from a file. A Parse variant that uses the compiled grammar returned from another task to parse a program in a file, starting parsing with a specific rule given by name.
compile_grammar and parse are convenience functions for creating these variants. We derive Clone, Eq, Hash and Debug as these are required for tasks. We create an Outputs enum for storing the results of these tasks, and derive the required traits. Since both tasks will require a file, and we’re using Strings as errors, we will implement a convenience function for this. Add to pie/examples/parser_dev/task.rs: fn require_file_to_string<C: Context<Tasks>>(context: &mut C, path: impl AsRef<Path>) -> Result<String, String> { let path = path.as_ref(); let mut file = context.require_file(path) .map_err(|e| format!(\"Opening file '{}' for reading failed: {}\", path.display(), e))? .ok_or_else(|| format!(\"File '{}' does not exist\", path.display()))?; let mut text = String::new(); file.read_to_string(&mut text) .map_err(|e| format!(\"Reading file '{}' failed: {}\", path.display(), e))?; Ok(text)\n} require_file_to_string is like context.require_file, but converts all errors to String. Now we implement Task for Tasks. Add to pie/examples/parser_dev/task.rs: impl Task for Tasks { type Output = Result<Outputs, String>; fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output { match self { Tasks::CompileGrammar { grammar_file_path } => { let grammar_text = require_file_to_string(context, grammar_file_path)?; let compiled_grammar = CompiledGrammar::new(&grammar_text, Some(grammar_file_path.to_string_lossy().as_ref()))?; Ok(Outputs::CompiledGrammar(compiled_grammar)) } Tasks::Parse { compiled_grammar_task, program_file_path, rule_name } => { let Ok(Outputs::CompiledGrammar(compiled_grammar)) = context.require_task(compiled_grammar_task.as_ref()) else { // Return `None` if compiling grammar failed. Don't propagate the error, otherwise the error would be // duplicated for all `Parse` tasks.
return Ok(Outputs::Parsed(None)); }; let program_text = require_file_to_string(context, program_file_path)?; let output = compiled_grammar.parse(&program_text, rule_name, Some(program_file_path.to_string_lossy().as_ref()))?; Ok(Outputs::Parsed(Some(output))) } } }\n} The output is Result<Outputs, String>: either an Outputs if the task succeeds, or a String if not. In execute we match our variant and either compile a grammar or parse, which is mostly straightforward. In the Parse variant, we require the compile grammar task, but don’t propagate its errors and instead return Ok(Outputs::Parsed(None)). We do this to prevent duplicate errors. If we propagated the error, the grammar compilation error would be duplicated into every parse task. Confirm the code compiles with cargo build --example parser_dev. We won’t test this code as we’ll use these tasks in the main function next.","breadcrumbs":"Example: Interactive Parser Development » Tasks","id":"106","title":"Tasks"},"107":{"body":"We have tasks for compiling grammars and parsing files, but we need to pass file paths and a rule name into these tasks. We will pass this data to the program via command-line arguments. To parse command-line arguments, we will use clap, an awesome library for easily parsing command-line arguments. Add clap as a dependency to pie/Cargo.toml: We’re using the derive feature of clap to automatically derive a full-featured argument parser from a struct. Modify pie/examples/parser_dev/main.rs: The Args struct contains exactly the data we need: the path to the grammar file, the name of the rule to start parsing with, and paths to program files to parse. We derive an argument parser for Args with #[derive(Parser)]. Then we parse command-line arguments in main with Args::parse(). Test this program with cargo run --example parser_dev -- --help, which should result in usage help for the program. Note that the names, ordering, and doc-comments of the fields are used to generate this help.
You can test out several more commands: cargo run --example parser_dev -- cargo run --example parser_dev -- foo cargo run --example parser_dev -- foo bar cargo run --example parser_dev -- foo bar baz qux Now let’s use these arguments to actually compile the grammar and parse example program files. Modify pie/examples/parser_dev/main.rs: In compile_grammar_and_parse, we create a new Pie instance that writes the build log to stderr, and create a new build session. Then, we require a compile grammar task using the grammar_file_path from Args, and write any errors to the errors String. We then require a parse task for every path in args.program_file_paths, using the previously created compile_grammar_task and args.rule_name. Successes are printed to stdout and errors are written to errors. Finally, we print errors to stdout if there are any. To test this out, we need a grammar and some test files. Create grammar.pest: num = @{ ASCII_DIGIT+ } main = { SOI ~ num ~ EOI } WHITESPACE = _{ \" \" | \"\\t\" | \"\\n\" | \"\\r\" } Pest Grammars You don’t need to fully understand pest grammars to finish this example. However, I will explain the basics of this grammar here. Feel free to learn and experiment more if you are interested. Grammars are lists of rules, such as num and main. This grammar parses numbers with the num rule, matching 1 or more ASCII_DIGIT with repetition. The main rule ensures that there is no additional text before and after a num rule, using SOI (start of input) and EOI (end of input), and using the ~ operator to sequence these rules. We set the WHITESPACE builtin rule to { \" \" | \"\\t\" | \"\\n\" | \"\\r\" } so that spaces, tabs, newlines, and carriage return characters are implicitly allowed between sequenced rules. The @ operator before { indicates that it is an atomic rule, disallowing implicit whitespace. We want this on the num rule so that we can’t add spaces in between digits of a number (try removing it and see!).
The _ operator before { indicates that it is a silent rule that does not contribute to the parse result. This is important when processing the parse result into an Abstract Syntax Tree (AST). In this example we just print the parse result, so silent rules are not really needed, but I included it for completeness. Create test_1.txt with: 42 And create test_2.txt with: foo Run the program with cargo run --example parser_dev -- grammar.pest main test_1.txt test_2.txt. This should result in a build log showing that the grammar is successfully compiled, that one file is successfully parsed, and that one file has a parse error. Unfortunately, there is no incrementality between different runs of the example, because the Pie Store is not persisted. The Store only exists in memory while the program is running, and is then thrown away. Thus, there cannot be any incrementality. To get incrementality, we need to serialize the Store before the program exits, and deserialize it when the program starts. This is possible and not actually that hard; I just never got around to explaining it in this tutorial. See the Side Note: Serialization section at the end for info on how this can be implemented. Hiding the Build Log If you are using a bash-like shell on a UNIX-like OS, you can hide the build log by redirecting stderr to /dev/null with: cargo run --example parser_dev -- grammar.pest main test_1.txt test_2.txt 2>/dev/null. Otherwise, you can hide the build log by replacing WritingTracker::with_stderr() with NoopTracker. Feel free to experiment a bit with the grammar, example files, etc. before continuing. We will develop an interactive editor next however, which will make experimentation easier!","breadcrumbs":"Example: Interactive Parser Development » Parse CLI arguments","id":"107","title":"Parse CLI arguments"},"108":{"body":"Now we’ll create an interactive version of this grammar compilation and parsing pipeline, using Ratatui to create a terminal GUI.
Since we need to edit text files, we’ll use tui-textarea, which is a text editor widget for Ratatui. Ratatui works with multiple backends, with crossterm being the default backend since it is cross-platform. Add these libraries as dependencies to pie/Cargo.toml: We continue as follows: Set up the scaffolding for a Ratatui application. Create a text editor Buffer using tui-textarea to edit the grammar and example program files. Draw and update those text editor Buffers, and keep track of the active buffer. Save Buffers back to files and run the CompileGrammar and Parse tasks to provide feedback on the grammar and example programs. Show the build log in the application.","breadcrumbs":"Example: Interactive Parser Development » Interactive Parser Development","id":"108","title":"Interactive Parser Development"},"109":{"body":"We will put the editor in a separate module, and start out with the basic scaffolding of a Ratatui “Hello World” application. Add editor as a public module to pie/examples/parser_dev/main.rs: Create the pie/examples/parser_dev/editor.rs file and add the following to it: use std::io; use crossterm::event::{DisableMouseCapture, EnableMouseCapture, Event, KeyCode, KeyEventKind};\nuse crossterm::terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen};\nuse ratatui::backend::{Backend, CrosstermBackend};\nuse ratatui::Terminal;\nuse ratatui::widgets::Paragraph; use crate::Args; /// Live parser development editor.\npub struct Editor {} impl Editor { /// Create a new editor from `args`. pub fn new(_args: Args) -> Result<Self, io::Error> { Ok(Self {}) } /// Run the editor, drawing it into an alternate screen of the terminal. pub fn run(&mut self) -> Result<(), io::Error> { // Setup terminal for GUI rendering.
enable_raw_mode()?; let mut backend = CrosstermBackend::new(io::stdout()); crossterm::execute!(backend, EnterAlternateScreen, EnableMouseCapture)?; let mut terminal = Terminal::new(backend)?; terminal.clear()?; // Draw and process events in a loop until a quit is requested or an error occurs. let result = loop { match self.draw_and_process_event(&mut terminal) { Ok(false) => break Ok(()), // Quit requested Err(e) => break Err(e), // Error _ => {}, } }; // First undo our changes to the terminal. disable_raw_mode()?; crossterm::execute!(terminal.backend_mut(), LeaveAlternateScreen, DisableMouseCapture)?; terminal.show_cursor()?; // Then present the result to the user. result } fn draw_and_process_event<B: Backend>(&mut self, terminal: &mut Terminal<B>) -> Result<bool, io::Error> { terminal.draw(|frame| { frame.render_widget(Paragraph::new(\"Hello World! Press Esc to exit.\"), frame.size()); })?; match crossterm::event::read()? { Event::Key(key) if key.kind == KeyEventKind::Release => return Ok(true), // Skip releases. Event::Key(key) if key.code == KeyCode::Esc => return Ok(false), _ => {} }; Ok(true) }\n} The Editor struct will hold the state of the editor application, which is currently empty, but we’ll add fields to it later. Likewise, the new function doesn’t do a lot right now, but it is scaffolding for when we add state. It returns a Result because it can fail in the future. The run method sets up the terminal for GUI rendering, draws the GUI and processes events in a loop until stopped, and then undoes our changes to the terminal. It is set up in such a way that undoing our changes to the terminal happens regardless of whether there is an error or not (although panics would still skip that code and leave the terminal in a bad state). This is a standard program loop for Ratatui. Rust Help: Returning From Loops A loop indicates an infinite loop. You can return a value from such loops with break.
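As a tiny self-contained illustration of this loop/break pattern (count_to is a made-up function, unrelated to the editor code):

```rust
// `loop` runs forever; `break value` exits the loop and makes the whole loop
// expression evaluate to `value` -- the same shape as the event loop in `run`.
fn count_to(target: u32) -> Result<u32, String> {
  let mut attempts = 0;
  loop {
    attempts += 1;
    if attempts == target {
      break Ok(attempts); // Like `break Ok(())` when a quit is requested.
    }
    if attempts > 100 {
      break Err("gave up".to_string()); // Like `break Err(e)` on an error.
    }
  }
}

fn main() {
  assert_eq!(count_to(3), Ok(3));
}
```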
The draw_and_process_event method first draws the GUI, currently just a hello world message, and then processes events such as key presses. Currently, this skips key releases because we are only interested in presses, and returns Ok(false) if escape is pressed, causing us to break out of the loop. Now we need to go back to our command-line argument parsing and add a flag indicating that we want to start up an interactive editor. Modify pie/examples/parser_dev/main.rs: We add a new Cli struct with an edit field that is settable by a short (-e) or long (--edit) flag, and flatten Args into it. Using this new Cli struct here keeps Args clean, since the existing code does not need to know about the edit flag. Instead of using a flag, you could also define a separate command for editing. In main, we parse Cli instead, check whether cli.edit is set, and create and run the editor if it is. Otherwise, we do a batch build. Try out the code with cargo run --example parser_dev -- test.pest main test_1.test test_2.test -e in a terminal, which should open up a separate screen with a hello world text. Press escape to exit out of the application. If the program ever panics, your terminal will be left in a bad state. In that case, you’ll have to reset your terminal back to a good state, or restart your terminal.","breadcrumbs":"Example: Interactive Parser Development » Ratatui Scaffolding","id":"109","title":"Ratatui Scaffolding"},"11":{"body":"Since we want users of the build system to implement their own tasks, we will define Task as a trait. Likewise, we will also be implementing multiple contexts in this tutorial, so we will also define Context as a trait. Add the following code to your pie/src/lib.rs file: use std::fmt::Debug;\nuse std::hash::Hash; /// A unit of computation in a programmatic incremental build system.\npub trait Task: Clone + Eq + Hash + Debug { /// Type of output this task returns when executed.
type Output: Clone + Eq + Debug; /// Execute the task, using `context` to specify dynamic dependencies, returning `Self::Output`. fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output;\n} /// Programmatic incremental build context, enabling tasks to create dynamic dependencies that context implementations\n/// use for incremental execution.\npub trait Context<T: Task> { /// Requires given `task`, recording a dependency and selectively executing it. Returns its up-to-date output. fn require_task(&mut self, task: &T) -> T::Output;\n} Tip If this seems overwhelming to you, don’t worry. We will go through the API and explain things. But more importantly, the API should become more clear once we implement it in the next section and subsequent chapters. Furthermore, if you’re new to Rust and/or need help understanding certain concepts, I will try to explain them in Rust Help blocks. They are collapsed by default to reduce distraction; clicking the header opens them. See the first Rust Help block at the end of this section. The Task trait has several supertraits that we will need later in the tutorial to implement incrementality: Eq and Hash: to check whether a task is equal to another one, and to create a hash of it, so we can use a HashMap to get the output of a task if it is up-to-date. Clone: to create a clone of the task so that we can store it in the HashMap without having ownership of it. Debug: to format the task for debugging purposes. A Task has a single method execute, which takes a reference to itself (&self), and a mutable reference to a context (context: &mut C), and produces a value of type Self::Output. Because Context is a trait, we use generics (C: Context<Self>) to have execute work for any Context implementation (ignoring the Self part for now). The execute method takes self by reference such that a task can access its data, but not mutate it, as that could throw off incrementality by changing the hash/equality of the task.
Finally, the type of output of a task is defined by the Output associated type, and this type must implement Clone, Eq, and Debug for the same reason as Task. The Context trait is generic over Task, allowing it to work with any task implementation. It has a single method require_task for creating a dependency to a task and returning its consistent (up-to-date) result. It takes a mutable reference to itself, enabling dynamic dependency tracking and caching, which require mutation. Because of this, the context reference passed to Task::execute is also mutable. This Task and Context API mirrors the mutually recursive definition of task and context we discussed earlier, and forms the basis for the entire build system. Note We will implement file dependencies in the next chapter, as file dependencies only become important with incrementality. Build the project by running cargo build. The output should look something like: Compiling pie v0.1.0 (/pie) Finished dev [unoptimized + debuginfo] target(s) in 0.03s In the next section, we will implement a non-incremental Context and test it against Task implementations. Rust Help: Modules, Imports, Ownership, Traits, Methods, Supertraits, Associated Types, Visibility The Rust Programming Language is an introductory book about Rust. I will try to provide links to the book where possible. Rust has a module system for project organization. The lib.rs file is the “main file” of a library. Later on, we will be creating more modules in different files. Things are imported into the current scope with use statements. We import the Debug and Hash traits from the standard library with two use statements. Use statements use paths to refer to nested things. We use :: for nesting, similar to namespaces in C++. Rust models the concept of ownership to enable memory safety without a garbage collector. The execute method accepts a reference to the current type, indicated with &: &self. 
This reference is immutable, meaning that we can read data from it, but not mutate it. In Rust, things are immutable by default. On the other hand, execute accepts a mutable reference to the context, indicated with &mut: context: &mut C, which does allow mutation. Traits are the main mechanism for open extensibility in Rust. They are comparable to interfaces in class-oriented languages. We will implement a context and tasks in the next section. Supertraits are a kind of inheritance. The : Clone + Eq + Hash + Debug part of the Task trait means that every Task implementation must also implement the Clone, Eq, Hash, and Debug traits. These traits are part of the standard library: Clone for duplicating values. Eq for equality comparisons, along with PartialEq. Hash for turning a value into a hash. Debug for formatting values in a programmer-facing debugging context. Clone and Eq are so common that they are part of the Rust Prelude, so we don’t have to import those with use statements. Methods are functions that take a form of self as the first argument. This enables convenient object-like calling syntax: context.require_task(&task);. Associated types are a kind of placeholder type in a trait such that methods of traits can use that type. In Task this allows us to talk about the Output type of a task. In Context this allows us to refer to both the Task type T and its output type T::Output. The :: syntax here is used to access associated types of traits. The Self type in a trait is a built-in associated type that is a placeholder for the type that is implementing the trait. The Task trait is defined with pub (public) visibility, such that users of the library can implement it. Because Task uses Context in its public API, Context must also be public, even though we don’t intend for users to implement their own Context.
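To make the shape of this API concrete, here is a self-contained sketch that re-declares the two traits and exercises them with a toy task and a naive always-execute context. Double and NaiveContext are illustrative inventions, not part of PIE; the tutorial builds a proper non-incremental context in the next section.

```rust
use std::fmt::Debug;
use std::hash::Hash;

// The two traits from this section, re-declared so this sketch is self-contained.
pub trait Task: Clone + Eq + Hash + Debug {
  type Output: Clone + Eq + Debug;
  fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output;
}
pub trait Context<T: Task> {
  fn require_task(&mut self, task: &T) -> T::Output;
}

// A toy task: Double(n) computes 2 * n by recursively requiring Double(n - 1).
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct Double(u32);
impl Task for Double {
  type Output = u32;
  fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output {
    if self.0 == 0 { 0 } else { 2 + context.require_task(&Double(self.0 - 1)) }
  }
}

// A naive, hypothetical context that just executes every required task.
struct NaiveContext;
impl<T: Task> Context<T> for NaiveContext {
  fn require_task(&mut self, task: &T) -> T::Output {
    task.execute(self)
  }
}

fn main() {
  let mut context = NaiveContext;
  assert_eq!(context.require_task(&Double(3)), 6);
}
```

Requiring Double(3) executes Double(2), Double(1), and Double(0) in turn, showing the mutually recursive relation between task and context; an incremental context would additionally record these dependencies and cache the outputs.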
Download source code You can download the source files up to this point.","breadcrumbs":"Programmability » Programmable Build System API » API Implementation","id":"11","title":"API Implementation"},"110":{"body":"The goal of this application is to develop a grammar alongside example programs of that grammar, getting feedback on whether the grammar is correct, but also on whether the example programs can be parsed with the grammar. Therefore, we will need to draw multiple text editors along with space for feedback, and be able to swap between active editors. This will be the responsibility of the Buffer struct which we will create in a separate module. Add the buffer module to pie/examples/parser_dev/editor.rs: Then create the pie/examples/parser_dev/editor/buffer.rs file and add to it: #![allow(dead_code)] use std::fs::{File, read_to_string};\nuse std::io::{self, Write};\nuse std::path::PathBuf; use crossterm::event::Event;\nuse ratatui::Frame;\nuse ratatui::layout::{Constraint, Direction, Layout, Rect};\nuse ratatui::style::{Color, Modifier, Style};\nuse ratatui::widgets::{Block, Borders, Paragraph, Wrap};\nuse tui_textarea::TextArea; /// Editable text buffer for a file.\npub struct Buffer { path: PathBuf, editor: TextArea<'static>, feedback: String, modified: bool,\n} impl Buffer { /// Create a new [`Buffer`] for file at `path`. /// /// # Errors /// /// Returns an error when reading file at `path` fails. pub fn new(path: PathBuf) -> Result<Self, io::Error> { let text = read_to_string(&path)?; let mut editor = TextArea::from(text.lines()); // Enable line numbers. Default style = no additional styling (inherit). editor.set_line_number_style(Style::default()); Ok(Self { path, editor, feedback: String::default(), modified: false }) } /// Draws this buffer with `frame` into `area`, highlighting it if it is `active`. pub fn draw(&mut self, frame: &mut Frame, area: Rect, active: bool) { // Determine and set styles based on whether this buffer is active.
Default style = no additional styling (inherit). let mut cursor_line_style = Style::default(); let mut cursor_style = Style::default(); let mut block_style = Style::default(); if active { // Highlight active editor. cursor_line_style = cursor_line_style.add_modifier(Modifier::UNDERLINED); cursor_style = cursor_style.add_modifier(Modifier::REVERSED); block_style = block_style.fg(Color::Gray); } self.editor.set_cursor_line_style(cursor_line_style); self.editor.set_cursor_style(cursor_style); // Create and set the block for the text editor, bordering it and providing a title. let mut block = Block::default().borders(Borders::ALL).style(block_style); if let Some(file_name) = self.path.file_name() { // Add file name as title. block = block.title(format!(\"{}\", file_name.to_string_lossy())) } if self.modified { // Add modified to title. block = block.title(\"[modified]\"); } self.editor.set_block(block); // Split area up into a text editor (80% of available space), and feedback text (minimum of 7 lines). let areas = Layout::default() .direction(Direction::Vertical) .constraints(vec![Constraint::Percentage(80), Constraint::Min(7)]) .split(area); // Render text editor into first area (`areas[0]`). frame.render_widget(self.editor.widget(), areas[0]); // Render feedback text into second area (`areas[1]`). let feedback = Paragraph::new(self.feedback.clone()) .wrap(Wrap::default()) .block(Block::default().style(block_style).borders(Borders::ALL - Borders::TOP)); frame.render_widget(feedback, areas[1]); } /// Process `event`, updating whether this buffer is modified. pub fn process_event(&mut self, event: Event) { self.modified |= self.editor.input(event); } /// Save this buffer to its file if it is modified. Does nothing if not modified. Sets as unmodified when successful. /// /// # Errors /// /// Returns an error if writing buffer text to the file fails. 
pub fn save_if_modified(&mut self) -> Result<(), io::Error> { if !self.modified { return Ok(()); } let mut file = io::BufWriter::new(File::create(&self.path)?); for line in self.editor.lines() { file.write_all(line.as_bytes())?; file.write_all(b\"\\n\")?; } file.flush()?; self.modified = false; Ok(()) } /// Gets the file path of this buffer. pub fn path(&self) -> &PathBuf { &self.path } /// Gets the mutable feedback text of this buffer. pub fn feedback_mut(&mut self) -> &mut String { &mut self.feedback }\n} A Buffer is a text editor for a text file at a certain path. It keeps track of a text editor with TextArea<'static>, feedback text, and whether the text was modified in relation to the file. new creates a Buffer and is fallible due to reading a file. The draw method draws/renders the buffer (using the Ratatui frame) into area, with active signifying that this buffer is active and should be highlighted differently. The first part sets the style of the editor, mainly highlighting an active editor by using Color::Gray as the block style. Default styles indicate that no additional styling is done, basically inheriting the style from a parent widget (i.e., a block), or using the style from your terminal. The second part creates a block that renders a border around the text editor and renders a title on the upper border. The third part splits up the available space into space for the text editor (80%), and space for the feedback text (at least 7 lines), and renders the text editor and feedback text into those spaces. The layout can of course be tweaked, but it works for this example. process_event lets the text editor process input events, and updates whether the text has been modified. save_if_modified saves the text to file, but only if modified. path gets the file path of the buffer. feedback_mut returns a mutable borrow to the feedback text, enabling modification of the feedback text. 
It is up to the user of Buffer to keep track of the active buffer, sending active: true to the draw method of that buffer, and calling process_event on the active buffer. That’s exactly what we’re going to implement next.","breadcrumbs":"Example: Interactive Parser Development » Text Editor Buffer","id":"110","title":"Text Editor Buffer"},"111":{"body":"We’ll create Buffers in Editor and keep track of the active buffer. To keep this example simple, we’ll create buffers only for the grammar file and example program files given as command-line arguments. If you want more or fewer example files, you’ll have to exit the application, add those example files to the command-line arguments, and then start the application again. Modify pie/examples/parser_dev/editor.rs: Editor now has a list of buffers via a Vec<Buffer> and keeps track of the active buffer via active_buffer, which is an index into buffers. In new, we create buffers based on the grammar and program file paths in args. The buffers Vec is created in such a way that the first buffer is always the grammar buffer, with the rest being example program buffers. The grammar buffer always exists because args.grammar_file_path is mandatory, but there can be 0 or more example program buffers. draw_and_process_event now splits up the available space. First vertically: as much space as possible is reserved for buffers, with at least 1 line being reserved for a help line at the bottom. Then horizontally: half of the horizontal space is reserved for a grammar buffer, and the other half for program buffers. The vertical space for program buffers (program_buffer_areas) is further divided: evenly split between all program buffers. Then, the buffers are drawn in the corresponding spaces with active only being true if we are drawing the active buffer, based on the active_buffer index. In the event processing code, we match the Control+T shortcut and increase the active_buffer index.
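The increment-with-wrap-around on the active_buffer index can be sketched on its own (next_buffer is an illustrative helper, not the actual editor code):

```rust
// Cycle an index through `len` buffers; the modulo wraps it back to 0 when it
// would run past the last buffer, keeping it a valid index into the Vec.
fn next_buffer(active: usize, len: usize) -> usize {
  (active + 1) % len
}

fn main() {
  assert_eq!(next_buffer(0, 3), 1);
  assert_eq!(next_buffer(1, 3), 2);
  assert_eq!(next_buffer(2, 3), 0); // Wraps back to the first buffer.
}
```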
We wrap back to 0 when the active_buffer index would overflow, using a modulo (%) operator, ensuring that active_buffer is always a correct index into the buffers Vec. Finally, if none of the other shortcuts match, we send the event to the active buffer. Try out the code again with cargo run --example parser_dev -- test.pest main test_1.test test_2.test -e in a terminal. This should open up the application with a grammar buffer on the left, and two program buffers on the right. Use Control+T to swap between buffers, and escape to exit.","breadcrumbs":"Example: Interactive Parser Development » Drawing and Updating Buffers","id":"111","title":"Drawing and Updating Buffers"},"112":{"body":"Next up is saving the buffers, running the compile grammar and parse tasks, and showing feedback from those tasks in the feedback space of buffers. Modify pie/examples/parser_dev/editor.rs: The biggest addition is at the bottom: the save_and_update_buffers method. This method first clears the feedback text for all buffers, and saves all buffers (if save is true). Then we create a new PIE session and require the compile grammar task and parse tasks, similar to compile_grammar_and_parse in the main file. Here we instead writeln! the results to the feedback text of buffers. We store the rule_name in Editor as that is needed to create parse tasks, and store a Pie instance so that we can create new PIE sessions to require tasks. When the Control+S shortcut is pressed, we call save_and_update_buffers with save set to true. We also call save_and_update_buffers in Editor::new to provide feedback when the application starts out, but with save set to false, so we don’t immediately save all files. Finally, we update the help line to include the Control+S shortcut. Try out the code again with cargo run --example parser_dev -- test.pest main test_1.test test_2.test -e in a terminal.
Now you should be able to make changes to the grammar and/or example programs, press Control+S to save modified files, and get feedback on grammar compilation and parsing example programs. If you like, you can go through the pest parser book and experiment with/develop a parser.","breadcrumbs":"Example: Interactive Parser Development » Saving Buffers and Providing Feedback","id":"112","title":"Saving Buffers and Providing Feedback"},"113":{"body":"We’ll add one more feature to the editor: showing the build log. We can do this by writing the build log to an in-memory text buffer, and by drawing that text buffer. Modify pie/examples/parser_dev/editor.rs: In new we now create the Pie instance with a writing tracker: WritingTracker::new(Cursor::new(Vec::new())). This writing tracker writes to a Cursor, specifically Cursor<Vec<u8>> for which Write is implemented. We modify the type of the pie field to include the tracker type to reflect this: WritingTracker<Cursor<Vec<u8>>>. Build logs will then be written to the Vec<u8> inside the Cursor. To draw the build log in between the buffers and help line, we first modify the layout split into root_areas: buffers now take up 70% of vertical space, and add a new constraint for the build log which takes 30% of vertical space. We access the in-memory buffer via &self.pie.tracker().writer().get_ref(), convert this to a string via String::from_utf8_lossy, and convert that to Ratatui Text which can be passed to Paragraph::new and also gives us line information for scrolling the build log. The scroll calculation is explained in the comments. We then draw the build log as a Paragraph. Finally, we update the area for the help line from root_areas[1] to root_areas[2], as adding the layout constraint shifted the index up. Try out the code again with cargo run --example parser_dev -- test.pest main test_1.test test_2.test -e in a terminal. Pressing Control+S causes tasks to be required, which is shown in the build log.
Try modifying a single file to see what tasks PIE executes, or what effect an error in the grammar has. And with that, we’re done with the interactive parser development example 🎉🎉🎉!","breadcrumbs":"Example: Interactive Parser Development » Showing the Build Log","id":"113","title":"Showing the Build Log"},"114":{"body":"In this example, we developed tasks for compiling a grammar and parsing files with that grammar, and then used those tasks to implement both a batch build, and an interactive parser development environment. In the introduction, we motivated programmatic incremental build systems with the key properties of: programmatic, incremental, correct, automatic, and multipurpose. Did these properties help with the implementation of this example application? Programmatic: due to the build script – that is: the compile grammar and parse tasks – being written in the same programming language as the application, it was extremely simple to integrate. We also didn’t have to learn a separate language, we could just apply our knowledge of Rust! Incremental: PIE incrementalized the build for us, so we didn’t have to implement incrementality. This saves a lot of development effort, as implementing incrementality is complicated. The batch build is unfortunately not incremental due to not having implemented serialization in this tutorial, but this is not a fundamental limitation. See Side Note: Serialization for info on how to solve this. Correct: PIE ensures the build is correct, so we don’t have to worry about glitches or inconsistent data, again saving development effort that would otherwise be spent on ensuring incrementality is correct. For a real application, we should write tests to increase the confidence that our build is correct, because PIE checks for correctness at runtime. Automatic: we didn’t manually implement incrementality, but only specified the dependencies: from compile grammar/parse task to a file, and from parse tasks to compile grammar tasks. 
Multipurpose: we reused the same tasks for both a batch build and for use in an interactive environment, without any modifications. Again, this saves development time. So yes, I think that programmatic incremental build systems – and in particular PIE – help a lot when developing applications that require incremental batch builds or interactive pipelines, and especially when both are required. The main benefit is reduced development effort, due to not having to solve the problem of correct incrementality, due to easy integration, and due to only needing to know and use a single programming language. Larger applications with more features and complications that need incrementality would require an even bigger implementation effort. Therefore, larger applications could benefit even more from using PIE. Of course, you cannot really extrapolate that from this small example. However, I have applied PIE to a larger application: the Spoofax Language Workbench, and found similar benefits. More info on this can be found in the appendix. You should of course decide for yourself whether a programmatic incremental build system really helped with implementing this example. Every problem is different, and requires separate consideration as to what tools best solve a particular problem. This is currently the end of the guided programming tutorial. In the appendix chapters, we discuss PIE implementations and publications, related work, and future work. Download source code You can download the source files up to this point.","breadcrumbs":"Example: Interactive Parser Development » Conclusion","id":"114","title":"Conclusion"},"115":{"body":"To get incrementality between different runs (i.e., processes) of the program, we need to serialize the Store before the program exits, and deserialize the Store when the program starts. The de-facto standard (and awesome) serialization library in Rust is serde. 
See the PIE in Rust repository at the pre_type_refactor tag for a version of PIE with serde serialization. For example, the Store struct has annotations for deriving serde::Deserialize and serde::Serialize. These attributes are somewhat convoluted due to serialization being optional, and due to the H generic type parameter which should not be included in serialization bounds. You should derive serde::Deserialize and serde::Serialize for all required types in the PIE library, but also all tasks, and all task outputs. The pie_graph library supports serialization when the serde feature is enabled, which is enabled by default. Then, see this serialization integration test.","breadcrumbs":"Example: Interactive Parser Development » Side Note: Serialization","id":"115","title":"Side Note: Serialization"},"116":{"body":"","breadcrumbs":"PIE Implementations & Publications » PIE Implementations & Publications","id":"116","title":"PIE Implementations & Publications"},"117":{"body":"In this tutorial, you implemented a large part of PIE, the programmatic incremental build system that I developed during my PhD and Postdoc. There are currently two versions of PIE: PIE in Rust, a superset of what you have been developing in this tutorial. I plan to make this a full-fledged and usable library for incremental batch builds and interactive systems. You are of course free to continue developing the library you made in this tutorial, but I would appreciate users and/or contributions to the PIE library! The largest differences between PIE in this tutorial and the PIE library are: Support for arbitrary task and resource types, achieved by using trait objects to provide dynamic dispatch. Resource abstraction enables resources other than files. Resources are global mutable state where the state is not handled by the PIE library (as opposed to task inputs and outputs), but read and write access to that state is handled by PIE. Files (as PathBuf) are a resource, but so is a hashmap. 
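To make the resource idea above concrete, here is a hypothetical sketch (not the PIE library's actual API) of a resource abstraction that an in-memory map could implement; the trait shape and names are assumptions for illustration only:

```rust
use std::collections::HashMap;

// Hypothetical sketch: a resource is global mutable state identified by a
// key, where read access is mediated by the build system. This is NOT the
// PIE library's real Resource API, just an illustration of the concept.
trait Resource {
  type Key;
  type Value;
  fn read(&self, key: &Self::Key) -> Option<Self::Value>;
}

// A hashmap as a resource, analogous to files (as PathBuf) being a resource.
struct MapResource(HashMap<String, String>);

impl Resource for MapResource {
  type Key = String;
  type Value = String;
  fn read(&self, key: &Self::Key) -> Option<Self::Value> {
    self.0.get(key).cloned()
  }
}

fn main() {
  let mut map = HashMap::new();
  map.insert("greeting".to_string(), "Hi".to_string());
  let resource = MapResource(map);
  assert_eq!(resource.read(&"greeting".to_string()), Some("Hi".to_string()));
}
```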
Terminology differences. The PIE library uses read and write for resource dependencies instead of require and provide. This allows us to use require only for tasks, and read and write only for resources. It uses checkers instead of stampers. The motivation for developing a PIE library in Rust was to test whether the idea of a programmatic incremental build system really is programming-language agnostic, as a target for developing this tutorial, and to get a higher-performance implementation compared to the Java implementation of PIE. In my opinion, implementing PIE in Rust as part of this tutorial is a much nicer experience than implementing it in Java, due to the more powerful type system and great tooling provided by Cargo. However, supporting multiple task types, which we didn’t do in this tutorial, is a bit of a pain due to requiring trait objects, which can be really complicated to work with in certain cases. In Java, everything is like a trait object, and you get many of these things for free, at the cost of garbage collection and performance of course. PIE in Java. The motivation for using Java was so that we could use PIE to correctly incrementalize the Spoofax Language Workbench, a set of tools and interactive development environment (IDE) for developing programming languages. In Spoofax, you develop a programming language by defining the aspects of your language in domain-specific meta-languages, such as SDF3 for syntax definition, and Statix for type system and name binding definitions. Applying PIE to Spoofax culminated in Spoofax 3 (sometimes also called Spoofax-PIE), a new version of Spoofax that uses PIE for all tasks such as generating parsers, running parsers on files to create ASTs, running highlighters on those ASTs to provide syntax highlighting for code editors, etc. 
Because all tasks are PIE tasks, we can do correct and incremental batch builds of language definitions, but also live development of those language definitions in an IDE, using PIE to get feedback such as inline errors and syntax highlighting as fast as possible.","breadcrumbs":"PIE Implementations & Publications » Implementations","id":"117","title":"Implementations"},"118":{"body":"We wrote two papers about programmatic incremental build systems and PIE, for which updated versions are in my dissertation: Chapter 7, page 83: PIE: A Domain-Specific Language for Interactive Software Development Pipelines. This describes a domain-specific language (DSL) for programmatic incremental build systems, and introduces the PIE library in Kotlin. This implementation was later changed to a pure Java library to reduce the number of dependencies. Chapter 8, page 109: Scalable Incremental Building with Dynamic Task Dependencies. This describes a hybrid incremental build algorithm that builds from the bottom-up, only switching to top-down building when necessary. Bottom-up builds are more efficient with changes that have a small effect (i.e., most changes), due to only checking the part of the dependency graph affected by changes. Therefore, this algorithm scales down to small changes while scaling up to large dependency graphs. Unfortunately, we did not implement (hybrid) bottom-up building in this tutorial due to a lack of time. However, the PIE in Rust library has a bottom-up context implementation which you can check out. Due to similarities between the top-down and bottom-up context, some common functionality was extracted into an extension trait. We published a summary/abstract paper as well: Precise, Efficient, and Expressive Incremental Build Scripts with PIE. Two master students graduated on extensions to PIE: Roelof Sol: Task Observability in Change Driven Incremental Build Systems with Dynamic Dependencies. 
A problem with bottom-up builds is that tasks stay in the dependency graph forever, even if they are no longer needed. Even though those tasks are not executed (because they are not needed), they do need to be checked and increase the size of the dependency graph which in turn has overhead for several graph operations. To solve that problem, we introduce task observability. A task is observable if and only if it is explicitly observed by the user of the build system through directly requiring (Session::require) the task, or if it is implicitly observed by a require task dependency from another task. Otherwise, the task is unobserved. The build system updates the observability status of tasks while the build is executing. Unobserved tasks are never checked, removing the checking overhead. Unobserved tasks can be removed from the dependency graph in a “garbage collection” pass, removing graph operation overhead. Removing unobserved tasks is flexible: during the garbage collection pass you can decide to keep a task in the dependency graph if you think it will become observed again, to keep its cached output. You can also remove the provided (intermediate or output) files of an unobserved task to clean up disk space, which is correct due to the absence of hidden dependencies! Currently, observability is implemented in the Java implementation of PIE, but not yet in the Rust implementation of PIE. Ivo Wilms: Extending the DSL for PIE. This improves and solves many problems in the original PIE DSL implementation. It introduces a module system, compatibility with dependency injection, and generics with subtyping into the DSL. Generics and subtyping have a proper type system implementation in the Statix meta-DSL. One paper was published about using PIE: Constructing Hybrid Incremental Compilers for Cross-Module Extensibility with an Internal Build System. 
This paper introduces a compiler design approach for reusing parts of a non-incremental compiler to build an incremental compiler, using PIE to perform the incrementalization. The approach is applied to Stratego, a term transformation meta-DSL with several cross-cutting features that make incremental compilation hard. The result is the Stratego 2 compiler that is split up into multiple PIE tasks to do incremental parsing (per-file), incremental name analysis, and incremental compilation. Stratego 2 was also extended with gradual typing at a later stage, where the gradual typing was also performed in PIE tasks.","breadcrumbs":"PIE Implementations & Publications » Publications","id":"118","title":"Publications"},"119":{"body":"There are several other programmatic incremental build systems and works published about them. This subsection discusses them. For additional related work discussion, check the related work sections of chapter 7 (page 104) and chapter 8 (page 126) of my dissertation.","breadcrumbs":"Related Work » Related Work","id":"119","title":"Related Work"},"12":{"body":"We set up the Task and Context API in such a way that we can implement incrementality. However, incrementality is hard, so let’s start with an extremely simple non-incremental Context implementation to get a feeling for the API.","breadcrumbs":"Programmability » Non-Incremental Context » Non-Incremental Context","id":"12","title":"Non-Incremental Context"},"120":{"body":"PIE is based on Pluto, a programmatic incremental build system developed by Sebastian Erdweg et al. This is not a coincidence, as Sebastian Erdweg was my PhD promotor, and we developed and wrote the “Scalable Incremental Building with Dynamic Task Dependencies” paper together. The Pluto paper provides a more formal proof of incrementality and correctness for the top-down build algorithm, which provides confidence that this algorithm works correctly, but also explains the intricate details of the algorithm very well. 
Note that Pluto uses “builder” instead of “task”. In fact, a Pluto builder is more like an incremental function that does not carry its input, whereas a PIE task is more like an incremental closure that includes its input. PIE uses almost the same top-down build algorithm as Pluto, but there are some technical changes that make PIE more convenient to use. In Pluto, tasks are responsible for storing their output and dependencies, called “build units”, which are typically stored in files. In PIE, the library handles this for you. The downside is that PIE requires a mapping from a Task (using its Eq and Hash impls) to its dependencies and output (which is what the Store does), and some modifications had to be made to the consistency checking routines. The upside is that tasks don’t have to manage these build unit files, and the central Store can efficiently manage the entire dependency graph. Especially this central dependency graph management is useful for the bottom-up build algorithm, as we can use dynamic topological sort algorithms for directed acyclic graphs.","breadcrumbs":"Related Work » Pluto","id":"120","title":"Pluto"},"121":{"body":"Build Systems à la Carte shows a systematic and executable framework (in Haskell) for developing and comparing build systems. It compares the impact of design decisions such as what persistent build information to store, the scheduler to use, static/dynamic dependencies, whether it is minimal, supports early cutoff, and whether it supports distributed (cloud) builds. 
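The mapping PIE keeps from a task (via its Eq and Hash impls) to its cached dependencies and output can be illustrated with a plain HashMap; ReadFileTask here is an illustrative stand-in, not PIE's actual Store:

```rust
use std::collections::HashMap;

// Minimal illustration of the idea that the central Store maps a task
// (usable as a key because it implements Eq + Hash) to its cached output,
// instead of each task managing its own "build unit" file as in Pluto.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct ReadFileTask { path: String }

fn main() {
  let mut outputs: HashMap<ReadFileTask, String> = HashMap::new();
  let task = ReadFileTask { path: "input.txt".to_string() };
  // After executing the task, its output is cached under the task itself.
  outputs.insert(task.clone(), "Hello".to_string());
  // A later build can look the output up by the (equal) task value.
  assert_eq!(outputs.get(&task), Some(&"Hello".to_string()));
}
```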
Even though the Haskell code might be a bit confusing if you’re not used to functional programming, it is a great paper that discusses many aspects of programmatic incremental build systems and how to implement them.","breadcrumbs":"Related Work » Other Incremental Build Systems with Dynamic Dependencies","id":"121","title":"Other Incremental Build Systems with Dynamic Dependencies"},"122":{"body":"Shake is an incremental build system implemented in Haskell, described in detail in the Shake Before Building paper. The main difference in the model between Shake and PIE is that Shake follows a more target-based approach as seen in Make, where targets are build tasks that provide the files of the target. Therefore, the output (provided) files of a build task need to be known up-front. The upside of this approach is that build scripts are easier to read and write and easier to parallelize. However, the main downside is that it is not possible to express build tasks where the names of provided files are only known after executing the task. For example, compiling a Java class with inner classes results in a class file for every inner class with a name based on the outer and inner class, which is not known up-front. Implementation-wise, Shake supports explicit parallelism, whereas PIE does not (at the time of writing). Parallel builds in PIE are tricky because two build tasks executing in parallel could require/provide (read/write) the same file, which can result in data races. Shake avoids this issue by requiring provided files to be specified as targets up-front, speeding up builds through explicit parallelism. In PIE, this might be solvable with a protocol where tasks first call a Context method to tell PIE about the files that will be provided, or the directory in which files will be provided, so PIE can limit parallelism on those files and directories. 
Tasks that do not know this up-front cannot be executed in parallel, but can still be executed normally.","breadcrumbs":"Related Work » Shake","id":"122","title":"Shake"},"123":{"body":"Rattle is a build system focussing on easily turning build scripts into incremental and parallel build scripts without requiring dependency annotations, described in detail in the Build Scripts with Perfect Dependencies paper. To make this possible, Rattle has a very different model compared to PIE. Rattle build scripts consist of (terminal/command-line) commands such as gcc -c main.c, and simple control logic/manipulation to work with the results of commands, such as if checks, for loops, or changing the file extension in a path. Therefore, future commands can use values of previous commands, and use control logic to selectively or iteratively execute commands. Commands create dynamic file dependencies, both reading (require) and writing (provide), which are automatically detected with dependency tracing on the OS level. There are no explicit dependencies between commands, but implicit dependencies arise when a command reads a file that another command writes for example. Rattle incrementally executes the commands of a build script, skipping commands for which no files have changed. The control logic/manipulation around the commands is not incrementalized. Rattle build scripts can be explicitly parallelized, but Rattle also implicitly parallelizes builds by speculatively executing future commands. If speculation results in a hazard, such as a command reading a file and then a command writing to that file – equivalent to a hidden dependency in PIE – then the build is inconsistent and must be restarted without speculative parallelism. 
Core difference The best way I can explain the core difference is that Rattle builds a single build script which is a stream of commands with file dependencies; whereas in PIE, every build task is in essence its own build script that produces an output value, with file dependencies but also dependencies between build tasks. Both models have merit! The primary advantage of the Rattle model is that existing build scripts, such as Make scripts or even just Bash scripts, can be easily converted to Rattle build scripts by converting the commands and control logic/manipulation into Rattle. No file dependencies have to be specified since they are automatically detected with file dependency tracing. Then, Rattle can parallelize and incrementally execute the build script. Therefore, Rattle is great for incrementalizing and parallelizing existing Make/Bash/similar build scripts with very low effort. While it is possible to incrementalize these kinds of builds in PIE, the effort will be higher due to having to split commands into tasks, and having to report the file dependencies to PIE. If PIE had access to reliable cross-platform automated file dependency tracing, we could reduce this effort by building a “command task” that executes arbitrary terminal/command-line commands. However, reliable cross-platform file dependency tracking does not exist (to my knowledge, at the time of writing). The library that Rattle uses, Fsatrace, has limitations such as not detecting reads/writes to directories, and having to disable system integrity protection on macOS. Therefore, Rattle also (as mentioned in the paper, frustratingly) inherits the limitations of this library. Compared to Rattle, the primary advantages of programmatic incremental build systems (i.e., the PIE model) are: PIE can rebuild a subset of the build script, instead of only the entire build script. The entire build is incrementalized (using tasks as a boundary), not just commands. 
Tasks can return any value of the programming language, not just strings. Tasks are modular, and can be shared using the mechanism of the programming language. These properties are a necessity for use in interactive environments, such as code editors, IDEs, or other user-facing interactive applications. Therefore, the PIE model is more suited towards incrementalization in interactive environments, but can still be used to do incremental batch builds. Implicit Parallelism (Speculative Execution) Rattle supports both implicit and explicit parallelization, whereas PIE does not at the time of writing. Explicit parallelism was already discussed in the Shake section. After a first build, Rattle knows which commands have been executed and can perform implicit parallelization by speculatively executing future commands. If a hazard occurs, the build is restarted without speculation (other recovery mechanisms are also mentioned in the paper), although the evaluation shows that this is rare, and even then the builds are still fast due to incrementality and explicit parallelism. After the initial build, PIE also has full knowledge of the build script. In fact, we know more about the build script due to tracking both the file dependencies and the dependencies between tasks. However, just like Rattle, PIE doesn’t know whether the tasks that were required last time will be the tasks that are required this time. In principle, 0 tasks that were required last time can be required the next time. Therefore, if we would do speculative execution of future commands, we could run into similar hazards: hidden dependencies and overlapping provided files. However, I think that restarting the build without speculative execution, when a hazard is detected, is incorrect in PIE. This is because PIE keeps track of the entire dependency graph, including task output values, which would not be correct after a hazard. 
Restarting the build could then produce a different result, because PIE uses the previously created dependency graph for incrementality. In Rattle this is correct because it only keeps track of file dependencies of commands. So I am currently not sure if and how we could do implicit parallelism in PIE. Self-Tracking Self-tracking is the ability of an incremental build system to correctly react to changes in the build script. If a part of the build script changes, that part should be re-executed. Rattle supports self-tracking without special support for it, because Rattle makes no assumptions about the build script, and re-executes the build script every time (while skipping commands that have not been affected). Therefore, build script changes are handled automatically. PIE supports self-tracking by creating a dependency to the source code or binary file of a task. However, this requires support from the programming language to find the source or binary file corresponding to a task. In the Java implementation of PIE, we can use class loaders to get the (binary) class files for tasks and related files. In the Rust implementation of PIE, we have not yet implemented self-tracking. In Rust, we could implement self-tracking by writing a procedural macro that can be applied to Task implementations to embed a self-tracking dependency (probably a hash over the Task impl) into the Task::execute method. However, since PIE is fully programmatic, tasks can use arbitrary code. To be fully correct, we’d need to over-approximate: check whether the binary of the program has changed and consider all tasks inconsistent if the binary has changed. In practice, the approach from the Java implementation of PIE works well, alongside a version number that gets updated when code used by tasks changes semantics in a significant way. 
Cloud Builds Rattle could support “cloud builds” where the output files of a command are stored on a server, using the hashed inputs (command string and read files) of the command as a key. Subsequent builds that run commands with matching hashes could then just download the output files and put them in the right spot. It is unclear if Rattle actually does this, but they discuss it (and several problems in practice) in the paper. PIE does not currently support this, but could support it in a similar way (with the same practical problems). In essence, the Store as implemented in this tutorial is such a key-value store, except that it is locally stored. We also cache task outputs, but they could be stored in a similar way. Whether this is a good idea depends on the task. For tasks that are expensive to execute, querying a server and getting the data from the server can be faster than executing the task. For tasks that are cheap to execute, just executing it can be faster.","breadcrumbs":"Related Work » Rattle","id":"123","title":"Rattle"},"124":{"body":"I’d still like to write a tutorial going over an example where we use this build system for incremental batch builds, but at the same time also reuse the same build for an interactive environment. This example will probably be something like interactively developing a parser with live feedback. I’d also like to go over all kinds of extensions to the build system, as there are a lot of interesting ones. Unfortunately, those will not be guided like the rest of this programming tutorial, due to lack of time.","breadcrumbs":"Future Work » Future work","id":"124","title":"Future work"},"13":{"body":"Since we will be implementing three different contexts in this tutorial, we will separate them in different modules. 
Create the context module by adding a module to pie/src/lib.rs: This is a diff over pie/src/lib.rs where lines with a green background are additions, lines with a red background are removals, lines without a special background are context on where to add/remove lines, and lines starting with @@ denote changed lines (in unified diff style). This is similar to diffs on source code hubs like GitHub. Create the pie/src/context directory, and in it, create the pie/src/context/mod.rs file with the following contents: pub mod non_incremental; Both modules are public so that users of our library can access context implementations. Create the pie/src/context/non_incremental.rs file; it will be empty for now. Your project structure should now look like: pibs\n├── pie\n│ ├── Cargo.toml\n│ └── src\n│ ├── lib.rs\n│ └── context\n│ ├── mod.rs\n│ └── non_incremental.rs\n└── Cargo.toml Confirm your module structure is correct by building with cargo build. Rust Help: Modules, Visibility Modules are typically separated into different files. Modules are declared with mod context. Then, the contents of a module are defined either by creating a sibling file with the same name: context.rs, or by creating a sibling directory with the same name, with a mod.rs file in it: context/mod.rs. Use the latter if you intend to nest modules, otherwise use the former. Like traits, modules also have visibility.","breadcrumbs":"Programmability » Non-Incremental Context » Context module","id":"13","title":"Context module"},"14":{"body":"Implement the non-incremental context in pie/src/context/non_incremental.rs by adding: use crate::{Context, Task}; pub struct NonIncrementalContext; impl<T: Task> Context<T> for NonIncrementalContext { fn require_task(&mut self, task: &T) -> T::Output { task.execute(self) }\n} This NonIncrementalContext is extremely simple: in require_task we unconditionally execute the task, and pass self along so the task we’re calling can require additional tasks. 
Let’s write some tests to see if this does what we expect. Rust Help: Crates (Libraries), Structs, Trait Implementations, Last Expression In Rust, libraries are called crates. We import the Context and Task traits from the root of your crate (i.e., the src/lib.rs file) using crate:: as a prefix. Structs are concrete types that can contain data through fields and implement traits, similar to classes in class-oriented languages. Since we don’t need any data in NonIncrementalContext, we define it as a unit-like struct. Traits are implemented for a type with impl<T: Task> Context<T> for NonIncrementalContext { ... }, where we then have to implement all methods and associated types of the trait. The Context trait is generic over Task, so in the impl block we introduce a type parameter T with impl<T>, and use trait bounds as impl<T: Task> to declare that T must implement Task. The last expression of a function – in this case task.execute(self) in require_task which is an expression because it does not end with ; – is used as the return value. We could also write that as return task.execute(self);, but that is more verbose.","breadcrumbs":"Programmability » Non-Incremental Context » Implementation","id":"14","title":"Implementation"},"15":{"body":"Add the following test to pie/src/context/non_incremental.rs: #[cfg(test)]\nmod test { use super::*; #[test] fn test_require_task_direct() { #[derive(Clone, PartialEq, Eq, Hash, Debug)] struct ReturnHelloWorld; impl Task for ReturnHelloWorld { type Output = String; fn execute<C: Context<Self>>(&self, _context: &mut C) -> Self::Output { \"Hello World!\".to_string() } } let mut context = NonIncrementalContext; assert_eq!(\"Hello World!\", context.require_task(&ReturnHelloWorld)); }\n} In this test, we create a struct ReturnHelloWorld which is the “hello world” of the build system. We implement Task for it, set its Output associated type to be String, and implement the execute method to just return \"Hello World!\". 
We derive the Clone, Eq, Hash, and Debug traits for ReturnHelloWorld as they are required for all Task implementations. We require the task with our context by creating a NonIncrementalContext, calling its require_task method, passing in a reference to the task. It returns the output of the task, which we test with assert_eq!. Run the test by running cargo test. The output should look something like: Compiling pie v0.1.0 (/pie) Finished test [unoptimized + debuginfo] target(s) in 0.29s Running unittests src/lib.rs (target/debug/deps/pie-7f6c7927ea39bed5) running 1 test\ntest context::non_incremental::test::test_require_task_direct ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s Doc-tests pie running 0 tests test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s Which indicates that the test indeed succeeds! You can experiment by returning a different string from ReturnHelloWorld::execute to see what a failed test looks like. Rust Help: Unit Testing, Nested Items, Unused Parameters, Assertion Macros Unit tests for a module are typically defined by creating a nested module named test with the #[cfg(test)] attribute applied to it. In that test module, you apply #[test] to testing functions, which then get executed when you run cargo test. The #[cfg(...)] attribute provides conditional compilation for the item it is applied to. In this case, #[cfg(test)] ensures that the module is only compiled when we run cargo test. We import all definitions from the parent module (i.e., the non_incremental module) into the test module with use super::*;. In Rust, items — that is, functions, structs, implementations, etc. — can be nested inside functions. We use that in test_require_task_direct to scope ReturnHelloWorld and its implementation to the test function, so it can’t clash with other test functions. 
In execute, we use _context as the parameter name for the context, as the parameter is unused. Unused parameters give a warning in Rust, unless they are prefixed with an _. assert_eq! is a macro that checks if its two expressions are equal. If not, it panics. This macro is typically used in tests for assertions, as a panic marks a test as failed.","breadcrumbs":"Programmability » Non-Incremental Context » Simple Test","id":"15","title":"Simple Test"},"16":{"body":"Our first test only tests a single task that does not use the context, so let’s write a test with two tasks, where one requires the other, to increase our test coverage. Add the following test: We use the same ReturnHelloWorld task as before, but now also have a ToLowerCase task which requires ReturnHelloWorld and then turns its string lowercase. However, due to the way we’ve set up the types between Task and Context, we will run into a problem. Running cargo test, you should get these errors: Compiling pie v0.1.0 (/pie)\nerror[E0308]: mismatched types --> pie/src/context/non_incremental.rs:47:30 |\n47 | context.require_task(&ReturnHelloWorld).to_lowercase() | ------------ ^^^^^^^^^^^^^^^^^ expected `&ToLowerCase`, found `&ReturnHelloWorld` | | | arguments to this method are incorrect | = note: expected reference `&ToLowerCase` found reference `&non_incremental::test::test_require_task_problematic::ReturnHelloWorld`\nnote: method defined here --> pie/src/lib.rs:18:6 |\n18 | fn require_task(&mut self, task: &T) -> T::Output; | ^^^^^^^^^^^^ For more information about this error, try `rustc --explain E0308`.\nerror: could not compile `pie` (lib test) due to previous error The problem is that execute of ToLowerCase takes a Context<Self>, so in impl Task for ToLowerCase it takes a Context<ToLowerCase>, while we’re trying to require &ReturnHelloWorld through the context. This doesn’t work, as Context<ToLowerCase>::require_task only takes a &ToLowerCase as input. 
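The problematic two-task test described above (its code listing is elided in this text) would look roughly like the following sketch. It assumes the Task and Context traits from the tutorial are in scope, and it intentionally does not compile, producing an E0308 error like the one shown above:

```rust
#[test]
fn test_require_task_problematic() {
    #[derive(Clone, PartialEq, Eq, Hash, Debug)]
    struct ReturnHelloWorld;
    impl Task for ReturnHelloWorld {
        type Output = String;
        fn execute<C: Context<Self>>(&self, _context: &mut C) -> Self::Output {
            "Hello World!".to_string()
        }
    }

    #[derive(Clone, PartialEq, Eq, Hash, Debug)]
    struct ToLowerCase;
    impl Task for ToLowerCase {
        type Output = String;
        fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output {
            // Error: `C` implements `Context<ToLowerCase>`, so `require_task`
            // only accepts `&ToLowerCase`, not `&ReturnHelloWorld`.
            context.require_task(&ReturnHelloWorld).to_lowercase()
        }
    }
}
```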
We could change execute of ToLowerCase to take a Context<ReturnHelloWorld>: But that is not allowed: Compiling pie v0.1.0 (/pie)\nerror[E0276]: impl has stricter requirements than trait --> pie/src/context/non_incremental.rs:46:21 |\n46 | fn execute<C: Context<ReturnHelloWorld>>(&self, context: &mut C) -> Self::Output { | ^^^^^^^^^^^^^^^^^^^^^^^^^ impl has extra requirement `C: Context<ReturnHelloWorld>` | ::: pie/src/lib.rs:11:3 |\n11 | fn execute<C: Context<Self>>(&self, context: &mut C) -> Self::Output; | --------------------------------------------------------------------- definition of `execute` from trait For more information about this error, try `rustc --explain E0276`.\nerror: could not compile `pie` (lib test) due to previous error This is because the Task trait defines execute to take a Context<Self>, thus every implementation of Task must adhere to this, so we can’t solve it this way. Effectively, due to the way we defined Task and Context, we can only use a single task implementation. This is to simplify the implementation in this tutorial, as supporting multiple task types complicates matters a lot. Why only a Single Task Type? Currently, our context is parameterized by the type of tasks: Context<T>. Again, this is for simplicity. An incremental context wants to build a single dependency graph and cache task outputs, so that we can figure out from the graph whether a task is affected by a change, and just return its output if it is not affected. Therefore, a context implementation will maintain a Store<T>. Consider the case with two different task types: a Context<ReturnHelloWorld> and a Context<ToLowerCase> would then have a Store<ReturnHelloWorld> and a Store<ToLowerCase> respectively. These two stores would then maintain two different dependency graphs, one where the nodes in the graph are ReturnHelloWorld and one where the nodes are ToLowerCase. But that won’t work, as we need a single dependency graph over all tasks to figure out what is affected. Therefore, we are restricted to a single task type in this tutorial. 
To solve this, we would need to remove the T generic parameter from Context, and instead use trait objects. However, this introduces a whole slew of problems, because many traits that we use are not inherently object safe. Clone is not object safe because it requires Sized. Eq is not object safe because it uses Self. Serializing trait objects is problematic. There are workarounds for all these things, but it is not pretty and gets very complicated. The actual PIE library supports arbitrary task types through trait objects. We very carefully control where generic types are introduced, and which traits need to be object safe. Check out the PIE library if you want to know more! For now, we will solve this by just using a single task type which is an enumeration of the different possible tasks. First remove the problematic test: Then add the following test: Here, we instead define a single task Test which is an enum with two variants. In its Task implementation, we match ourselves and return \"Hello World!\" when the variant is ReturnHelloWorld. When the variant is ToLowerCase, we require &Self::ReturnHelloWorld through the context, which is now valid because it is an instance of Test, turn its string lowercase, and return that. This now works due to only having a single task type. Run the test with cargo test to confirm it is working. Rust Help: Enum Enums define a type by a set of variants, similar to enums in other languages, and are sometimes called tagged unions. The match expression matches the variant and dispatches based on that, similar to switch statements in other languages. We have defined the API for the build system and implemented a non-incremental version of it. We’re now ready to start implementing an incremental context in the next chapter. 
Download source code You can download the source files up to this point.","breadcrumbs":"Programmability » Non-Incremental Context » Test with Multiple Tasks","id":"16","title":"Test with Multiple Tasks"},"17":{"body":"In this chapter, we will implement an incremental build context. An incremental context selectively executes tasks — only those that are affected by a change. In other words, an incremental context executes the minimum number of tasks required to make all tasks up-to-date. However, due to dynamic dependencies, this is not trivial. We cannot first gather all tasks into a dependency tree and then topologically sort that, as dependencies are added and removed while tasks are executing. To do incremental builds in the presence of dynamic dependencies, we need to check and execute affected tasks one at a time, updating the dependency graph, while tasks are executing. To achieve this, we will employ a technique called top-down incremental building, where we start checking if a top (root) task needs to be executed, and recursively check whether dependent tasks should be executed until we reach the bottom (leaf) task(s), akin to a depth-first search. Furthermore, build systems almost always interact with the file system in some way. For example, tasks read configuration and source files, or write intermediate and binary files. Thus, a change in a file can affect a task that reads it, and executing a task can result in writing to new or existing files. Therefore, we will also keep track of file dependencies. Like task dependencies, file dependencies are also tracked dynamically while tasks are executing. There are several ways to check if a file dependency is consistent (i.e., has not changed), such as checking the last modification date, or comparing a hash. To make this configurable on a per-dependency basis, we will implement stamps. 
A file stamp is just a value that is produced from a file, such as the modification date or hash, that is stored with the file dependency. To check if a file dependency is consistent, we just stamp the file again and compare it with the stored stamp. Similarly, we can employ stamps for task dependencies as well by stamping the output of a task. To achieve incrementality, we will continue with these steps in the following sections: Extend Context with a method to require a file, enabling tasks to specify dynamic dependencies to files. Implement file stamps and task output stamps, and extend Context with methods to select stampers, enabling tasks to specify when a dependency is consistent. Implement dynamic dependencies and their consistency checking. Implement a dependency graph store with methods to query and mutate the dependency graph. Implement the top-down incremental context that incrementally executes tasks.","breadcrumbs":"Incrementality » Introduction","id":"17","title":"Introduction"},"18":{"body":"Since build systems frequently interact with files, and changes to files can affect tasks, we need to keep track of file dependencies. Therefore, we will extend the Context API with methods to require files, enabling tasks to specify dynamic dependencies to files. Add a method to the Context trait in pie/src/lib.rs: require_file is similar to requiring a task, but instead takes a path to a file or directory on the filesystem as input. We use AsRef<Path> as the type for the path, so that we can pass anything in that can dereference to a path. For example, str has an AsRef<Path> implementation, so we can just use \"test.txt\" as a path. As an output, we return Result