From 36260a480ff77ef4b5f1ac3b024609887d6bd296 Mon Sep 17 00:00:00 2001
From: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
Date: Wed, 28 Aug 2024 15:23:02 +0000
Subject: [PATCH] Compile the cljs to the js bundle and update RSS feed
---
resources/public/blog/rss/clojure-feed.xml | 1578 ++++++++++++++
resources/public/main.js | 2277 ++++++++++----------
2 files changed, 2723 insertions(+), 1132 deletions(-)
diff --git a/resources/public/blog/rss/clojure-feed.xml b/resources/public/blog/rss/clojure-feed.xml
index e69de29..3634077 100644
--- a/resources/public/blog/rss/clojure-feed.xml
+++ b/resources/public/blog/rss/clojure-feed.xml
@@ -0,0 +1,1578 @@
+ If you are not familiar with fun-map, please refer to the doc Fun-Map: a solution to deps injection in Clojure. In this document, I will show you how we leverage In our backend, we use Here is the system we currently have for production: At a glance, we can easily understand the dependency injections flow of the app. If we were to represent these deps as a simple graph, we could have: The function We can then easily start the system via the fun-map function The Actually, the only differences between the prod and dev systems are the following: Thus, we just have to assoc a new db component to the The important thing to remember is that all the modifications to the system must be done before starting the system (via Naturally, the fun-map system also plays well with testing. Same process as for dev and prod, we just need to adapt the system a bit to run our tests. The tests requirement are: So same as for dev, we just read dedicated test configs and assoc a test db system to the default system: This works well with the clojure.test fixtures: It is possible to provide a ring-handler to figwheel configs which will be passed to a server figwheel starts for us. We just need to specify a ring-handler in Our system does have a ring-handler we can supply to figwheel, it is called Since figwheel starts the server, we do not need the aleph server dependency in our system anymore, se we can dissoc it from the system. So here is the The So the system is started first via While working on flybot.sg , I experimented with You can read the rationale of Datomic from their on-prem documentation Stuart Sierra explained very well how datomic works in the video Intro to Datomic. Basically, Datomic works as a layer on top of your underlying storage (in this case, we will use Cassandra db). Your The transactor is the process that controls inbounds, and coordinates persistence to the storage services. The process acts as a single authority for inbound transactions. A single transactor process allows the to be ACID compliant and fully consistent. The peer is the process that will query the persisted data. Since Datomic leverages existing storage services, you can change persistent storage fairly easily. Datomic is closed-source and commercial. You can see the different pricing models in the page Get Datomic On-Prem. There are a few way to get started for free. The first one being to use the datomic-free version which comes with in-mem database storage and local-storage transactor. You don’t need any license to use it so it is a good choice to get familiar with the datomic Clojure API. Then, there is Datomic only support Cassandra up to version 3.x.x Datomic start pro version of Cassandra at the time of writting: 3.7.1 Closest stable version of Cassandra: 3.11.10 Problem 1: Datomic does not support java 11 so we have to have a java 8 version on the machine Solution: use jenv to manage multiple java version Problem 2: cqlsh does not work with python3 with Cassandra running on java8 Solution: download the python2 pkg directly from python.org Problem 3: Solution: download the tar.gz directly on apache.org To test Cassandra and datomic locally, we can use the Test Cluster of Cassandra which comes up with only one node. Datomic instruction for Cassandra here It’s important to note that we do not add Since the peer works using the datomic shell, we can confidently use the Clojure API from our code now. 
We just need to add the datomic and Cassandra deps in the In case of embedded DB, we only need to start a transactor and that’s it. The URI to connect to the peer is of the shape: In case we want to run datomic in a container (and maybe having our app in another container), we can do the following: We assume that the app has its own DockerFile and run on port 8123 in this example. Here is a DockerFile example to have Datomic running in a container: Here is a Here are the commands to create the images and run 2 containers. However, this will not work right away as we need to add a few configurations to the datomic transactor properties to make sure the app can communicate with the transactor. Regarding the transactor properties (datomic provides a template for a transactor with Cassandra storage), when we use docker, we need to pay attention to 3 properties: Here are the difference between containerized and not containerized properties for a After updating the transactor properties, you should be able to see the app running on port 8123 and be able to perform transactions as expected. It is always very confusing to deal with time in programming. In fact there are so many time representations, for legacy reasons, that sticking to one is not possible as our dependencies, databases or even programming languages might use different ways of representing time! You might have asked yourself the following questions: This article will answer these questions and will illustrate the answers with Clojure code snippets using the juxt/tick is an excellent open-source Clojure library to deal with The So basically, it is just an The obvious advantage is the universal simplicity of representing time. The disadvantage is the human readability. So we need to find a more human-friendly representation of time. Alice is having some fish and chips for her lunch in the UK. She checks her clock on the wall and it shows 12pm. She checks her calendar and it shows the day is January the 20th. The local time is the time in a specific time zone, usually represented using a date and time-of-day without any time zone information. In java it is called So if we ask Alice for the time and date, she will reply:Goal
fun-map
to create different systems in the website flybot.sg: prod-system
, dev-system
, test-system
and figwheel-system
.Prod System
life-cycle-map
to manage the life cycle of all our stateful components.Describe the system
(defn system
+ [{:keys [http-port db-uri google-creds oauth2-callback client-root-path]
+ :or {client-root-path "/"}}]
+ (life-cycle-map
+ {:db-uri db-uri
+ :db-conn (fnk [db-uri]
+ (let [conn (d/get-conn db-uri db/initial-datalevin-schema)]
+ (load-initial-data conn data/init-data)
+ (closeable
+ {:conn conn}
+ #(d/close conn))))
+ :oauth2-config (let [{:keys [client-id client-secret]} google-creds]
+ (-> config/oauth2-default-config
+ (assoc-in [:google :client-id] client-id)
+ (assoc-in [:google :client-secret] client-secret)
+ (assoc-in [:google :redirect-uri] oauth2-callback)
+ (assoc-in [:google :client-root-path] client-root-path)))
+ :session-store (memory-store)
+ :injectors (fnk [db-conn]
+ [(fn [] {:db (d/db (:conn db-conn))})])
+ :executors (fnk [db-conn]
+ [(handler/mk-executors (:conn db-conn))])
+ :saturn-handler handler/saturn-handler
+ :ring-handler (fnk [injectors saturn-handler executors]
+ (handler/mk-ring-handler injectors saturn-handler executors))
+ :reitit-router (fnk [ring-handler oauth2-config session-store]
+ (handler/app-routes ring-handler oauth2-config session-store))
+ :http-server (fnk [http-port reitit-router]
+ (let [svr (http/start-server
+ reitit-router
+ {:port http-port})]
+ (closeable
+ svr
+ #(.close svr))))}))
+
+(def prod-system
+ "The prod system starts a server on port 8123.
+ It does not load any init-data on touch and it does not delete any data on halt!.
+ You can use it in your local environment as well."
+ (let [prod-cfg (config/system-config :prod)]
+ (system prod-cfg)))
+
life-cycle-map
+├── :db-conn (closeable)
+├── :oauth2-config
+├── :session-store
+├── :injectors
+│ └── :db-conn
+├── :executors
+│ └── :db-conn
+├── :saturn-handler
+├── :ring-handler
+│ ├── :injectors
+│ ├── :executors
+│ ├── :saturn-handler
+├── :reitit-router
+│ ├── :ring-handler
+│ ├── :oauth2-config
+│ └── :session-store
+└── :http-server (closeable)
+ ├── :http-port
+ ├── :reitit-router
+
prod-system
just fetches some env variables with the necessary configs to start the system.Run the system
touch
:clj꞉clj.flybot.core꞉>
+(touch prod-system)
+{:ring-handler #function[clj.flybot.handler/mk-ring-handler/fn--37646],
+ :executors [#function[clj.flybot.handler/mk-executors/fn--37616]],
+ :injectors [#function[clj.flybot.core/system/fn--38015/fn--38016]],
+ :http-server
+ #object[aleph.netty$start_server$reify__11448 0x389add75 "AlephServer[channel:[id: 0xd98ed2db, L:/0.0.0.0:8123], transport::nio]"],
+ :reitit-router #function[clojure.lang.AFunction/1],
+ :http-port 8123,
+ :db-uri "datalevin/prod/flybotdb",
+ :oauth2-config
+ {:google
+ {:scopes ["https://www.googleapis.com/auth/userinfo.email" "https://www.googleapis.com/auth/userinfo.profile"],
+ :redirect-uri "https://v2.fybot.sg/oauth/google/callback",
+ :client-id "client-id",
+ :access-token-uri "https://oauth2.googleapis.com/token",
+ :authorize-uri "https://accounts.google.com/o/oauth2/auth",
+ :launch-uri "/oauth/google/login",
+ :client-secret "client-secret",
+ :project-id "flybot-website",
+ :landing-uri "/oauth/google/success"}},
+ :session-store
+ #object[ring.middleware.session.memory.MemoryStore 0x1afb7eac "ring.middleware.session.memory.MemoryStore@1afb7eac"],
+ :saturn-handler #function[clj.flybot.handler/saturn-handler],
+ :db-conn
+ {:conn
+ #<Atom@1ada44a1:
+ {:store #object[datalevin.storage.Store 0x4578bf30 "datalevin.storage.Store@4578bf30"],
+ :eavt #{},
+ :avet #{},
+ :veat #{},
+ :max-eid 73,
+ :max-tx 5,
+ :hash nil}>}}
+
Dev System
system
described above can easily be adapted to be used for development purposes.dev
clears the db, prod
retains db data)system
and read some dev configs instead of getting prod env variables:(defn db-conn-system
+ "On touch: empty the db and get conn.
+ On halt!: close conn and empty the db."
+ [init-data]
+ (fnk [db-uri]
+ (let [conn (d/get-conn db-uri)
+ _ (d/clear conn)
+ conn (d/get-conn db-uri db/initial-datalevin-schema)]
+ (load-initial-data conn init-data)
+ (closeable
+ {:conn conn}
+ #(d/clear conn)))))
+
+(def dev-system
+ "The dev system starts a server on port 8123.
+  It loads some real data samples. The data is deleted when the system is halted (halt!).
+ It is convenient if you want to see your backend changes in action in the UI."
+ (-> (system (config/system-config :dev))
+ (assoc :db-conn (db-conn-system data/init-data))))
+
touch). If some modifications need to be made to the running system:
stop the system (halt!)
reload the modified namespaces
start the system again (touch)
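As a rough REPL sketch of that workflow (assuming fun-map's touch/halt! and the dev-system var defined above):

(require '[robertluo.fun-map :refer [touch halt!]])

;; start all the stateful components of the system
(touch dev-system)
;; ... modify a component definition and reload its namespace ...
(halt! dev-system) ;; close the stateful components (db conn, http server)
(touch dev-system) ;; start again with the updated components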
Test system
(defn test-system
+ []
+ (-> (config/system-config :test)
+ sys/system
+ (dissoc :oauth2-config)
+ (assoc :db-conn (sys/db-conn-system test-data))))
+
;; atom required to re-evaluate (test-system) because of fixture `:each`
+(def a-test-system (atom nil))
+
+(defn system-fixture [f]
+ (reset! a-test-system (test-system))
+ (touch @a-test-system)
+ (f)
+ (halt! @a-test-system))
+
+(use-fixtures :each system-fixture)
+
Figwheel system
figwheel-main.edn
like so:{:ring-handler flybot.server.systems/figwheel-handler
+ :auto-testing true}
+
reitit-router
in our system (it returns a ring-handler).figwheel-system
:(def figwheel-system
+ "Figwheel automatically touches the system via the figwheel-main.edn on port 9500.
+ Figwheel just needs a handler and starts its own server hence we dissoc the http-server.
+ If some changes are made in one of the backend component (such as handler for instance),
+  you can halt!, reload the namespaces and touch the system again."
+ (-> (config/system-config :figwheel)
+ system
+ (assoc :db-conn (db-conn-system data/init-data))
+ (dissoc :http-port :http-server)))
+
+(def figwheel-handler
+ "Provided to figwheel-main.edn.
+  Figwheel uses this handler to start a server on port 9500.
+ Since the system is touched on namespace load, you need to have
+ the flag :figwheel? set to true in the config."
+ (when (:figwheel? CONFIG)
+ (-> figwheel-system
+ touch
+ :reitit-router)))
+
figwheel-handler
is the value of the key :reitit-router
of our running system.touch
and its handler is provided to the server figwheel starts, which will be running while we work on our frontend.datomic-free
, datomic starter-pro
with Cassandra and datomic starter-pro with embedded storage.Rationale
application
and a Datomic transactor
are contained in a peer
. Datomic Starter Pro with Cassandra
Datomic pro starter version
datomic pro starter
renamed datomic starter
which is free and maintained for 1 year. After the one-year threshold, you won’t benefit from support and you won’t get new versions of Datomic. You need to register with Datomic to get the license key.Cassandra, Java and Python version caveats
# jenv to manage java version
+brew install jenv
+echo 'export PATH="$HOME/.jenv/bin:$PATH"' >> ~/.bash_profile
+echo 'eval "$(jenv init -)"' >> ~/.bash_profile
+# add cask version
+brew tap homebrew/cask-versions
+# install java 8 cask
+brew install --cask adoptopenjdk8
+# add java 11 (current java version) to jenv
+jenv add "$(/usr/libexec/java_home)"
+# add java 8 to jenv
+jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
+# update the ${JAVA_HOME} every time we change version
+jenv enable-plugin export
+# switch to java 8
+jenv global 1.8
+
brew install cassandra@3
triggers an execution error that is hard to debugSetup Cassandra locally and start the transactor
# Check if all the versions are ok
+java -version
+openjdk version "1.8.0_292"
+OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
+OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
+python2 -V
+Python 2.7.18
+cqlsh
+Connected to Test Cluster at 127.0.0.1:9042.
+[cqlsh 5.0.1 | Cassandra 3.11.14 | CQL spec 3.4.4 | Native protocol v4]
+Use HELP for help.
+
+# Start cassandra
+cassandra -f
+
+# ===========================================================
+# in other terminal
+
+# Only setup replica to 1 for the test cluster locally
+# add datomic keyspace and table
+cqlsh
+CREATE KEYSPACE IF NOT EXISTS datomic WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 1};
+CREATE TABLE IF NOT EXISTS datomic.datomic
+(
+ id text PRIMARY KEY,
+ rev bigint,
+ map text,
+ val blob
+);
+
+# ===========================================================
+# in other terminal
+
+# start datomic transactor
+# A sample of the cassandra transactor properties is provided in the datomic distribution samples.
+# the datomic documentation mentions we should see a message of the shape:
+# 'System started <URI>', but I do not see the URI in the output; it seems to work nonetheless
+cd datomic-pro-1.0.6527/
+bin/transactor ~/workspaces/myproj/config/cassandra-transactor.properties
+Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
+System started
+
+# ===========================================================
+# in other terminal
+
+# Test if the peer works properly on our localhost single node
+bin/shell
+Datomic Java Shell
+Type Shell.help(); for help.
+datomic % uri = "datomic:cass://localhost:9042/datomic.datomic/myproj";
+<datomic:cass://localhost:9042/datomic.datomic/myproj>
+datomic % Peer.createDatabase(uri);
+<true>
+datomic % conn = Peer.connect(uri);
+<{:unsent-updates-queue 0, :pending-txes 0, :next-t 1000, :basis-t 66, :index-rev 0, :db-id "myproj-some-id-here"}>
+
ssl
in the database URI so we don’t have to deal with the KeyStore and TrustStore (for local use only)Use Clojure API to create db and perform transactions
deps.edn
:;; deps.edn : versions are provided upon subscription to datomic-pro
+com.datomic/datomic-pro {:mvn/version "1.0.6527"}
+com.datastax.cassandra/cassandra-driver-core {:mvn/version "3.1.0"}
+
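For illustration, here is a rough sketch of using the Datomic peer API from Clojure (the URI mirrors the shell session above; the schema, entity and query are made up for the example):

(require '[datomic.api :as d])

;; same URI shape as in the datomic shell session above
(def uri "datomic:cass://localhost:9042/datomic.datomic/myproj")

(d/create-database uri)
(def conn (d/connect uri))

;; transact an illustrative attribute and an entity using it
@(d/transact conn [{:db/ident       :item/name
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one}])
@(d/transact conn [{:item/name "hello datomic"}])

;; query it back from the current db value
(d/q '[:find ?n :where [_ :item/name ?n]] (d/db conn))
;=> #{["hello datomic"]}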
Datomic Starter Pro with embedded storage
"datomic:dev://localhost:4334/myproj-db?password=my-secret"
+;; the password is the `storage-datomic-password` setup in the transactor properties.
+
Datomic in docker container
DockerFiles
FROM clojure:lein-2.6.1-alpine
+
+ENV DATOMIC_VERSION 1.0.6527
+ENV DATOMIC_HOME /opt/datomic-pro-$DATOMIC_VERSION
+ENV DATOMIC_DATA $DATOMIC_HOME/data
+
+RUN apk add --no-cache unzip curl
+
+# Datomic Pro Starter as easy as 1-2-3
+# 1. Create a .credentials file containing user:pass
+# for downloading from my.datomic.com
+ADD .credentials /tmp/.credentials
+
+# 2. Make sure to have a config/ folder in the same folder as your
+# Dockerfile containing the transactor property file you wish to use
+RUN curl -u $(cat /tmp/.credentials) -SL https://my.datomic.com/repo/com/datomic/datomic-pro/$DATOMIC_VERSION/datomic-pro-$DATOMIC_VERSION.zip -o /tmp/datomic.zip \
+ && unzip /tmp/datomic.zip -d /opt \
+ && rm -f /tmp/datomic.zip
+
+ADD config $DATOMIC_HOME/config
+
+WORKDIR $DATOMIC_HOME
+RUN echo DATOMIC HOME: $DATOMIC_HOME
+
+# 3. Provide a CMD argument with the relative path to the transactor.properties
+VOLUME $DATOMIC_DATA
+
+EXPOSE 4334 4335 4336
+
+CMD bin/transactor -Ddatomic.printConnectionInfo=true config/dev-transactor.properties
+
Docker Compose
docker-compose.yml
we could use describing our app and datomic transactor containersversion: '3.0'
+services:
+ datomicdb:
+ image: datomic-img
+ hostname: datomicdb
+ ports:
+ - "4336:4336"
+ - "4335:4335"
+ - "4334:4334"
+ volumes:
+ - "/data"
+ myprojapp:
+ image: myproj-img
+ ports:
+ - "8123:8123"
+ depends_on:
+ - datomicdb
+
# Create datomic transactor image
+docker build -t datomic-img .
+
+# Create app image
+docker build -t myproj-img .
+
+# run the 2 images in containers
+docker-compose up
+
Transactors Properties
localhost
is now 0.0.0.0alt-host
must be added with the container name (or IP) or the container running the app.storage-access
must be set to remote
dev-transactor
: # If datomic not in container
+protocol=dev
+host=localhost
+port=4334
+
+# If datomic in container
+protocol=dev
+host=0.0.0.0
+port=4334
+alt-host=datomicdb
+storage-access=remote
+
timestamp
, date-time
, offset-date-time
, zoned-date-time
, instant
, inst
?UTC
, DST
?Instant
instead of Java Date
?timestamp
?duration
and a period
?juxt/tick
library.What is
Tick
?date
and time
as values. The documentation is of very good quality as well.Time since epoch (timestamp)
time since epoch
, or timestamp
, is a way of measuring time by counting the number of time units that have elapsed since a specific point in time, called the epoch. It is often represented in either milliseconds or seconds, depending on the level of precision required for a particular application.int
such as 1705752000000
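As a quick illustration (plain java.time interop, not from the original article), that exact number of milliseconds corresponds to the instant used in the examples below:

(java.time.Instant/ofEpochMilli 1705752000000)
;=> #object[java.time.Instant 0x... "2024-01-20T12:00:00Z"]

(.toEpochMilli (java.time.Instant/parse "2024-01-20T12:00:00Z"))
;=> 1705752000000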
Local time
java.time.LocalDateTime
. However, tick
mentioned that when you asked someone the time, it is always going to be "local", so they prefer to call it date-time
as the local part is implicit.(-> (t/time "12:00")
+ (t/on "2024-01-20"))
+;=> #time/date-time "2024-01-20T12:00"
+
At the same time and date Alice is having lunch in London, Bob is having some fish soup for dinner in his Singapore's nearby food court. He checked the clock on the wall and reads 8pm.
So if we ask Bob for the time, he will reply that it is 8pm. So we can see that the local time is indeed local as Bob and Alice have different times.
The question is: how to have a common time representation for Bob and Alice?
One of the differences between Bob's and Alice's times is the Coordinated Universal Time (UTC) offset. The UTC offset is the difference between the local time and the UTC time, and it is usually represented using a plus or minus sign followed by the number of hours ahead of or behind UTC.
The United Kingdom is located on the prime meridian, which is the reference line for measuring longitude and the basis for the UTC time standard. Therefore, the local time in the UK is always the same as UTC time, and the time zone offset is UTC+0
(also called Z
). Alice is on the prime meridian, therefore the time she sees is the UTC time, the universal time reference.
As you go east, the difference with UTC increases. For example, Singapore is located at approximately 103.8 degrees east longitude, which means that it is eight hours ahead of UTC, and its time zone offset is UTC+8
. That is why Bob is 8 hours ahead of Alice (8 hours in the "future")
As you go west, the difference with UTC decreases. For example, New York City is located at approximately 74 degrees west longitude, which means that it is four hours behind UTC during standard time, and its time zone offset is UTC-4
(4 hours behind - 4 hours in the "past").
So, going back to our example, Bob is 8 hours ahead (in the "future") of Alice as we can see via the UTC+8
:
;; Alice time
+(-> (t/time "12:00")
+ (t/on "2024-01-20")
+ (t/offset-by 0))
+;=> #time/offset-date-time "2024-01-20T12:00Z"
+
+;; Bob time
+(-> (t/time "12:00")
+ (t/on "2024-01-20")
+ (t/offset-by 8))
+;=> #time/offset-date-time "2024-01-20T12:00+08:00"
+
We added the offset to our time representation, note the tick name for that representation: offset-date-time
. In java, it is called java.time.OffsetDateTime
. We can see for Bob's time a +08:00
. This represents The Coordinated Universal Time (UTC) offset.
So we could assume that the UTC offset remains the same within the same zone (country or region), but it is not the case. Let's see why in the next section.
So far we have the following components to define a time:
However, counter-intuitively, the UTC offset for Alice is not the same all year long. Sometimes it is UTC+0
(Z
) in winter (as we saw earlier) but sometimes it is UTC+1
in summer.
Let me prove it to you:
;; time for Alice in winter
+(-> (t/time "12:00")
+ (t/on "2024-01-20") ;; January - a winter month
+ (t/in "Europe/London")
+ (t/offset-date-time))
+;=> #time/offset-date-time "2024-01-20T12:00Z"
+
+;; time for Alice in summer
+(-> (t/time "12:00")
+ (t/on "2024-08-20") ;; August - a summer month
+ (t/in "Europe/London")
+ (t/offset-date-time))
+;=> #time/offset-date-time "2024-08-20T12:00+01:00"
+
This UTC offset difference is due to the Daylight Saving Time (DST).
Daylight Saving Time (DST) is a system of adjusting the clock in order to make better use of daylight during the summer months by setting the clock forward by one hour in the spring and setting it back by one hour in the fall. This way, Alice can enjoy more of the sunlight in summer since the days are "longer" (more sunlight duration) while keeping her same working hours!
It is important to note that not all countries implement DST. Some countries do not use DST because they don't need it. That is the case of Singapore: sunset and sunrise happen at almost the same time every day, so technically there is no winter/summer distinction. Other countries choose not to use it; that's the case of Japan for instance. Japan could benefit from DST but chose not to implement it for various reasons.
So we can conclude that a UTC offset is not representative of a Zone, because some countries might implement DST and others not. Also, for the countries implementing DST, the UTC offset is not fixed throughout the year. Thus, we need another parameter to fully define a time: the Zone:
(-> (t/time "12:00")
+ (t/on "2024-01-20") ;; January - a winter month
+ (t/in "Europe/London"))
+;=> #time/zoned-date-time "2024-01-20T12:00Z[Europe/London]"
+
You can notice that it is the same code as before but I removed the conversion to an offset-date-time
. Indeed, adding the zone like in (t/in "Europe/London")
already takes the Zone into account (and therefore the UTC offset), thus creating a zoned-date-time
.
A #time/zoned-date-time
in Java is called a java.time.ZonedDateTime
.
So we now have a complete way to describe the time:
So the time for Bob is:
(-> (t/time "12:00")
+ (t/on "2024-01-20")
+ (t/in "Asia/Singapore"))
+;=> #time/zoned-date-time "2024-01-20T12:00+08:00[Asia/Singapore]"
+
So to recap:
Asia/Singapore
always has the same UTC offset all year long because there is no DSTEurope/London
has a different UTC offset in summer and winterSo a Zone encapsulates the notion of UTC offset and DST.
You might have thought we were done here, but actually the recommended time representation is an instant
. In java, it is called java.time.Instant
. The reason we want to use an instant is to avoid confusion. When you store a time in your DB, or when you want to add 10 days to it, you don't want to deal with time zones. In programming, we always want a solution as simple as possible. Remember the very first time representation I mentioned? The time since epoch. The epoch
in the prime meridian (UTC+0
) is the same for everybody. So the time since epoch (to current UTC+0 time) in ms is a universal way of representing the time.
;; instant time for Alice
+(-> (t/time "12:00")
+ (t/on "2024-01-20")
+ (t/in "Europe/London")
+ (t/instant))
+;=> #time/instant "2024-01-20T12:00:00Z"
+
+;; instant time for Bob
+(-> (t/time "20:00")
+ (t/on "2024-01-20")
+ (t/in "Asia/Singapore")
+ (t/instant))
+;=> #time/instant "2024-01-20T12:00:00Z"
+
We can see in the example above, that since Singapore is 8 hours ahead of London, 12pm in London and 8pm in Singapore are indeed the same instant
.
The instant
is the human-friendly representation of the timestamp (time since epoch). You can then store that format in your DB or do operations on it such as adding/subtracting a duration or period (more on this later).
The epoch
in time-since-epoch is equivalent to #time/instant "1970-01-01T00:00:00Z":
(t/epoch)
+;=> #time/instant "1970-01-01T00:00:00Z"
+
That is correct: if we have a web page, we want Alice to see the time in London time and Bob the time in Singapore time. This is easy to do; we can derive the zoned-date-time
from an instant
since we know the zone of Bob and Alice:
;; in Alice's browser
+(t/format (t/formatter "yyyy-MM-dd HH:mm:ss")
+ (t/in #time/instant "2024-01-20T12:00:00Z" "Europe/London"))
+"2024-01-20 12:00:00"
+
+;; in Bob's browser
+(t/format (t/formatter "yyyy-MM-dd HH:mm:ss")
+ (t/in #time/instant "2024-01-20T12:00:00Z" "Asia/Singapore"))
+"2024-01-20 20:00:00"
+
Last time format I promise. As a clojure developer, you might often see inst
. It is different from instant
. In java inst
is called java.util.Date
. The java.util.Date
class is an old and flawed class that was replaced by the Java 8 time API, and it should be avoided when possible.
However, some libraries might require you to pass inst
instead of instant
still, and it is easy to convert between the two using the Tick library:
(t/inst #time/instant "2024-01-20T04:00:00Z")
+;=> #inst "2024-01-20T04:00:00.000-00:00"
+
What about the other way around?
(t/instant #inst "2024-01-20T04:00:00.000-00:00")
+;=> #time/instant "2024-01-20T04:00:00Z"
+
Just remember these key points:
store time and do operations on it as an instant
(java.time.Instant); convert to a zoned-date-time
(java.time.ZonedDateTime) to display the local time; format the zoned-date-time
using a string formatterWe now know that we need to use instant
to perform operations on time. However, sometimes we use duration
and sometimes we use period
:
(t/new-duration 10 :seconds)
+;=> #time/duration "PT10S"
+
+(t/new-period 10 :weeks)
+;=> #time/period "P70D"
+
They are not interchangeable:
(t/new-period 10 :seconds)
+; Execution error (IllegalArgumentException) at tick.core/new-period (core.cljc:649).
+; No matching clause: :seconds
+
So what is the difference? I will give you a clue:
units from nanosecond to day (included) are durations
units above day, such as a week for instance, are a period.There is one unit that can be both a duration
.There is one unit that can be both a duration
and a period
: a day
:
;; day as duration
+(t/new-duration 10 :days)
+#time/duration "PT240H"
+
+;; day as period
+(t/new-period 10 :days)
+#time/period "P10D"
+
Therefore, a simple definition could be:
duration
measures an amount of time using time-based values (seconds, nanoseconds).period
uses date-based (we can also say calendar-based) values (years, months, days)day
can be both duration
and period
: a duration of one day is exactly 24 hours long but a period of one day, when considering the calendar, may vary.First, here is how you would add a day as duration or as a period to the proper format:
;; time-based so use duration
+(-> (t/time "10:00")
+ (t/>> (t/new-duration 4 :hours)))
+;=> #time/time "14:00"
+
+;; date-based so use period
+(-> (t/date "2024-04-01")
+ (t/>> (t/new-period 1 :days)))
+;=> #time/date "2024-04-02"
+
Now, let me prove to you that we need to be careful to choose the right format for a day. In London, at 1am on the last Sunday of March, the clocks go forward 1 hour (the DST offset increases by one hour because we enter the summer months). So in 2024, at 1am on March 31st, clocks go forward 1 hour.
;; we add a period of 1 day
+(-> (t/time "08:00")
+ (t/on "2024-03-30")
+ (t/in "Europe/London")
+ (t/>> (t/new-period 1 :days)))
+#time/zoned-date-time "2024-03-31T08:00+01:00[Europe/London]"
+
+;; we add a duration of 1 day
+(-> (t/time "08:00")
+ (t/on "2024-03-30")
+ (t/in "Europe/London")
+ (t/>> (t/new-duration 1 :days)))
+#time/zoned-date-time "2024-03-31T09:00+01:00[Europe/London]"
+
We can see that with this specific DST change to summer time, the clocks on 03/31 jump forward an hour, so that day only lasts 23 hours; adding a duration
of exactly 24 hours therefore lands one wall-clock hour later and our new time is 09:00
. However, the period
, taking into consideration the date in a calendar system, does not see a day as 24 hours (time-based) but as a calendar unit (date-based), and therefore the new time is still 08:00
.
A Zone encapsulates the notion of UTC offset and DST.
The time since epoch is the universal computer-friendly way of representing time, whereas the Instant is the universal human-friendly way of representing it.
A duration
measures an amount of time using time-based values whereas a period
uses date-based (calendar) values.
Finally, for Clojure developers, I highly recommend using juxt/tick
as it allows us to handle time efficiently (conversion, operations) and elegantly (readable, as values) and I use it in several of my projects. It is also of course possible to do interop with the java.time.Instant
class directly if you prefer.
If you are not familiar with lasagna-pull, please refer to the doc Lasagna Pull: Precisely select from deep nested data
In this document, I will show you how we leverage lasagna-pull
in the flybot app to define a pure data API.
A good use case of the pattern is as parameter in a post request.
In our backend, we have a structure representing all our endpoints:
;; BACKEND data structure
+(defn pullable-data
+ "Path to be pulled with the pull-pattern.
+ The pull-pattern `:with` option will provide the params to execute the function
+ before pulling it."
+ [db session]
+ {:posts {:all (fn [] (get-all-posts db))
+ :post (fn [post-id] (get-post db post-id))
+ :new-post (with-role session :editor
+ (fn [post] (add-post db post)))
+ :removed-post (with-role session :editor
+ (fn [post-id user-id] (delete-post db post-id user-id)))}
+ :users {:all (with-role session :owner
+ (fn [] (get-all-users db)))
+ :user (fn [id] (get-user db id))
+ :removed-user (with-role session :owner
+ (fn [id] (delete-user db id)))
+ :auth {:registered (fn [id email name picture] (register-user db id email name picture))
+ :logged (fn [] (login-user db (:user-id session)))}
+ :new-role {:admin (with-role session :owner
+ (fn [email] (grant-admin-role db email)))
+ :owner (with-role session :owner
+ (fn [email] (grant-owner-role db email)))}
+ :revoked-role {:admin (with-role session :owner
+ (fn [email] (revoke-admin-role db email)))}}})
+
This resembles a REST API structure.
Since the API “route” information is contained within the pattern keys themselves, all the http requests with a pattern as params can hit the same backend URI.
So we have a single route for all pattern http request:
(into (auth/auth-routes oauth2-config)
+ [["/pattern" {:post ring-handler}] ;; all requests with pull pattern go here
+ ["/users/logout" {:get (auth/logout-handler client-root-path)}]
+ ["/oauth/google/success" {:get ring-handler :middleware [[auth/authentification-middleware client-root-path]]}]
+ ["/*" {:get {:handler index-handler}}]])
+
Therefore the pull pattern:
:with
option for the concerned endpointsFor instance, getting a specific post, meaning with the “route”: :posts :post
, can be done this way:
((pull/qfn
+ {:posts
+ {(list :post :with [s/post-1-id]) ;; provide required params to pullable-data :post function
+ {:post/id '?
+ :post/page '?
+ :post/css-class '?
+ :post/creation-date '?
+ :post/last-edit-date '?
+ :post/author {:user/id '?
+ :user/email '?
+ :user/name '?
+ :user/picture '?
+ :user/roles [{:role/name '?
+ :role/date-granted '?}]}
+ :post/last-editor {:user/id '?
+ :user/email '?
+ :user/name '?
+ :user/picture '?
+ :user/roles [{:role/name '?
+ :role/date-granted '?}]}
+ :post/md-content '?
+ :post/image-beside {:image/src '?
+ :image/src-dark '?
+ :image/alt '?}
+ :post/default-order '?}}}
+ '&? ;; bind the whole data
+ ))
+; =>
+{:posts
+ {:post
+ #:post{:id #uuid "64cda032-b4e4-431e-bd85-0dbe34a8feeb" ;; s/post-1-id
+ :page :home
+ :css-class "post-1"
+ :creation-date #inst "2023-01-04T00:00:00.000-00:00"
+ :last-edit-date #inst "2023-01-05T00:00:00.000-00:00"
+ :author #:user{:id "alice-id"
+ :email "alice@basecity.com"
+ :name "Alice"
+ :picture "alice-pic"
+ :roles [#:role{:name :editor
+ :date-granted
+ #inst "2023-01-02T00:00:00.000-00:00"}]}
+ :last-editor #:user{:id "bob-id"
+ :email "bob@basecity.com"
+ :name "Bob"
+ :picture "bob-pic"
+ :roles [#:role{:name :editor
+ :date-granted
+ #inst "2023-01-01T00:00:00.000-00:00"}
+ #:role{:name :admin
+ :date-granted
+ #inst "2023-01-01T00:00:00.000-00:00"}]}
+ :md-content "#Some content 1"
+ :image-beside #:image{:src "https://some-image.svg"
+ :src-dark "https://some-image-dark-mode.svg"
+ :alt "something"}
+ :default-order 0}}}
+
It is important to understand that the param s/post-1-id
in (list :post :with [#uuid s/post-1-id])
was passed to (fn [post-id] (get-post db post-id))
in pullable-data
.
The function returned the post fetched from the db.
We decided to fetch all the information of the post in our pattern, but we could have fetched just some of the keys:
((pull/qfn
+ {:posts
+ {(list :post :with [s/post-1-id]) ;; only fetch id and page even though all the other keys have been returned here
+ {:post/id '?
+ :post/page '?}}}
+ '&?))
+=> {:posts
+ {:post
+ {:post/id #uuid "64cda032-b4e4-431e-bd85-0dbe34a8feeb"
+ :post/page :home}}}
+
The function (fn [post-id] (get-post db post-id))
returned all the post keys but we only selected the post/id
and post/page
.
So we provided the required param s/post-1-id
to the endpoint :post
and we also specified what information we want to pull: :post/id
and :post/page
.
You can start to see how convenient that is as a frontend request to the backend. Our post request body can just be a pull-pattern
! (more on this further down in the doc).
It is common to use malli schema to validate data.
Here is the malli schema for the post data structure we used above:
(def post-schema
+ [:map {:closed true}
+ [:post/id :uuid]
+ [:post/page :keyword]
+ [:post/css-class {:optional true} [:string {:min 3}]]
+ [:post/creation-date inst?]
+ [:post/last-edit-date {:optional true} inst?]
+ [:post/author user-schema]
+ [:post/last-editor {:optional true} user-schema]
+ [:post/md-content [:and
+ [:string {:min 10}]
+ [:fn
+ {:error/message "Level 1 Heading `#` missing in markdown."}
+ md/has-valid-h1-title?]]]
+ [:post/image-beside
+ {:optional true}
+ [:map
+ [:image/src [:string {:min 10}]]
+ [:image/src-dark [:string {:min 10}]]
+ [:image/alt [:string {:min 5}]]]]
+ [:post/default-order {:optional true} nat-int?]])
+
lasagna-pull
also allows us to provide schema alongside the pattern to validate 2 things:
This is very good because we can have a malli schema for the entire pullable-data
structure like so:
(def api-schema
+ "All keys are optional because it is just a data query schema.
+ maps with a property :preserve-required set to true have their keys remaining unchanged."
+ (all-keys-optional
+ [:map
+ {:closed true}
+ [:posts
+ [:map
+ [:post [:=> [:cat :uuid] post-schema]] ;; route from our get post example
+ [:all [:=> [:cat] [:vector post-schema]]]
+ [:new-post [:=> [:cat post-schema-create] post-schema]]
+ [:removed-post [:=> [:cat :uuid :string] post-schema]]]]
+ [:users
+ [:map
+ [:user [:=> [:cat :string] user-schema]]
+ [:all [:=> [:cat] [:vector user-schema]]]
+ [:removed-user [:=> [:cat :string] user-schema]]
+ [:auth [:map
+ [:registered [:=> [:cat :string user-email-schema :string :string] user-schema]]
+ [:logged [:=> [:cat] user-schema]]]]
+ [:new-role [:map
+ [:admin [:=> [:cat user-email-schema] user-schema]]
+ [:owner [:=> [:cat user-email-schema] user-schema]]]]
+ [:revoked-role [:map
+ [:admin [:=> [:cat user-email-schema] user-schema]]]]]]]))
+
If we go back to the scenario where we want to fetch a specific post from the DB, we can see that we indeed have a function as the value of the key :post
that expects one param: a uuid:
[:post [:=> [:cat :uuid] post-schema]]
+
It corresponds to the pattern part:
(list :post :with [s/post-1-id])
+
And lasagna-pull
provides validation of the function’s params which is very good to be sure the proper data is sent to the server!
Plus, in case the params given to one of the routes are not valid, the function won’t even be executed.
So now we have a way to do a POST request to our backend, providing a pull-pattern as the request body, and our server can validate the pattern format and content as the data is being pulled.
Earlier, I asked you to assume that the function from pullable-data
was returning a post data structure.
In reality, it is a bit more complex than this because what is returned by the different functions (endpoints) in pullable-data
is a map. For instance:
;; returned by get-post
+{:response (db/get-post db post-id)} ;; note the response key here
+
+;; returned by register-user
+{:response user
+ :effects {:db {:payload [user]}} ;; the db transaction description to be made
+ :session {:user-id user-id} ;; the user info to be added to the session
+}
+
This is actually a problem because our pattern for a post is:
{:posts
+ {(list :post :with [s/post-1-id])
+ {:post/id '?}}}
+
and with what is returned by (fn [post-id] (get-post db post-id))
, we should have:
{:posts
+ {(list :post :with [s/post-1-id])
+ {:response ;; note the response here
+ {:post/id '?}}}}
+
Also, in the case of a user registration for instance, you saw that we have other useful information returned, such as the effects (db transaction description) and the session (user info to add to the session).
However we do not want to pull the effects
and session
. We just want a way to accumulate them somewhere.
We could perform the transaction directly and return the post, but we don't want that.
We prefer to accumulate side effects descriptions and execute them all at once in a dedicated executor
.
The response
needs to be added to the pulled data, but the effects
and session
need to be stored elsewhere and executed later on.
This is possible via a modifier
and a finalizer
context in the pull/query
API.
In our case, we have a mk-query
function that uses a modifier
and finalizer
to achieve what I described above:
(defn mk-query
+ "Given the pattern, make an advance query using a context:
+ modifier: gather all the effects description in a coll
+ finalizer: assoc all effects descriptions in the second value of pattern."
+ [pattern]
+ (let [effects-acc (transient [])
+ session-map (transient {})]
+ (pull/query
+ pattern
+ (pull/context-of
+ (fn [_ [k {:keys [response effects session error] :as v}]]
+ (when error
+ (throw (ex-info "executor-error" error)))
+ (when session ;; assoc session to the map session
+ (reduce
+ (fn [res [k v]] (assoc! res k v))
+ session-map
+ session))
+ (when effects ;; conj the db transaction description to effects vector
+ (conj! effects-acc effects))
+ (if response
+ [k response]
+ [k v]))
+ #(assoc % ;; return the whole pulled data and assoc the effects and session to it
+ :context/effects (persistent! effects-acc)
+ :context/sessions (persistent! session-map))))))
+
Let’s have a look at an example:
We want to add a new post. When we make a request for a new post, if everything works fine, the pullable-data function at the route :new-post
returns a map such as:
{:response full-post ;; the pullable data to return to the client
+ :effects {:db {:payload posts}} ;; the new posts to be added to the db
+}
+
The pull pattern for such request can be like this:
{:posts
+ {(list :new-post :with [post-in]) ;; post-in is a full post to be added with all required keys
+ {:post/id '?
+ :post/page '?
+ :post/default-order '?}}}
+
The post-in
is provided to the pullable-data function of the key :new-post
.
The function of add-post
actually determines all the new :post/default-order
of the posts given the new post. That is why we see in the side effects that several posts
are returned, because we need to have their order updated in the db.
Running this pattern with the pattern context above returns:
{&? {:posts {:new-post {:post/id #uuid "64cda032-3dae-4845-b7b2-e4a6f9009cbd"
+ :post/page :home
+ :post/creation-date #inst "2023-01-07T00:00:00.000-00:00"
+ :post/default-order 2}}}
+ :context/effects [{:db {:payload [{:post/id #uuid "64cda032-3dae-4845-b7b2-e4a6f9009cbd"
+ :post/page :home
+ :post/md-content "#Some content 3"
+ :post/creation-date #inst "2023-01-07T00:00:00.000-00:00"
+ :post/author {:user/id "bob-id"}
+ :post/default-order 2}]}}]
+ :context/sessions {}}
+
:context/effects
Then, in the ring response, we can just return the value of &?
Also, the effects can be executed in dedicated executor functions all at once.
This allows us to deal with pure data until the very last moment, when we run all the side effects (db transactions and session updates) in one place only, which we call the executor
.
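To make that idea concrete, here is a minimal sketch of what such an executor could look like (names are illustrative, not the actual flybot implementation):

(defn execute-effects!
  "Illustrative executor: runs every accumulated db effect description in one place.
  transact-fn is whatever function performs the transaction (e.g. datalevin's transact!)."
  [transact-fn effects]
  (doseq [{:keys [db]} effects]
    (when-let [payload (:payload db)]
      (transact-fn payload))))

;; usage sketch with the :context/effects shown above:
;; (execute-effects! #(d/transact! conn %) (:context/effects resp))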
In our system, we have a component called the saturn-handler
. The component ring-handler
depends on it.
In order to isolate the side effects as much as we can, our endpoints from our pullable-data
, highlighted previously, do not perform side effects but return descriptions in pure data of the side effects to be done. These side effects are the ones we gather in :context/effects
and :context/sessions
using the pull-pattern's query context.
The saturn-handler returns a map with the response
(data pulled and requested in the client pattern) to be sent to the client, the effect-desc
to be perform (in our case, just db transactions) and the session
update to be done:
(defn saturn-handler
+ "A saturn handler takes a ring request enhanced with additional keys form the injectors.
+ The saturn handler is purely functional.
+ The description of the side effects to be performed are returned and they will be executed later on in the executors."
+ [{:keys [params body-params session db]}]
+ (let [pattern (if (seq params) params body-params)
+ data (op/pullable-data db session)
+ {:context/keys [effects sessions] :as resp}
+ (pull/with-data-schema v/api-schema ((mk-query pattern) data))]
+ {:response ('&? resp)
+ :effects-desc effects
+ :session (merge session sessions)}))
+
You can also notice that the data is being validated via pull/with-data-schema
. In case of a validation error, since no side effects are performed during the pulling, an error will be thrown and no mutations will be done.
Having no side effects at all makes it way easier to test and debug, and it is more predictable.
Finally, the ring-handler
will be the component responsible for executing all the side effects at once.
So the saturn-handler
purpose is to make sure the data is pulled properly and validated with malli, and that the side-effect descriptions are gathered in one place to be executed later on.
Our app skydread1/flybot.sg is a full-stack Clojure web and mobile app.
We opted for a mono-repo to host:
server
: Clojure appweb
client: Reagent (React) app using Re-Framemobile
client: Reagent Native (React Native) app using Re-FrameNote that the web app does not use NPM at all. However, the React Native mobile app does use NPM and the node_modules
need to be generated.
By using only one deps.edn
, we can easily start the different parts of the app.
The goal of this document is to highlight the mono-repo structure and how to run the different parts (dev, test, build etc).
├── client
+│ ├── common
+│ │ ├── src
+│ │ │ └── flybot.client.common
+│ │ └── test
+│ │ └── flybot.client.common
+│ ├── mobile
+│ │ ├── src
+│ │ │ └── flybot.client.mobile
+│ │ └── test
+│ │ └── flybot.client.mobile
+│ └── web
+│ ├── src
+│ │ └── flybot.client.web
+│ └── test
+│ └── flybot.client.web
+├── common
+│ ├── src
+│ │ └── flybot.common
+│ └── test
+│ └── flybot.common
+├── server
+│ ├── src
+│ │ └── flybot.server
+│ └── test
+│ └── flybot.server
+
The server dir contains the .clj files,
the common dir the .cljc files,
and the clients dir the .cljs files.
You can have a look at the deps.edn.
We can use namespaced aliases in deps.edn
to make the process clearer.
I will go through the different aliases, explain their purposes and how I used them to develop the app.
First, the root deps of the deps.edn, inherited by all aliases:
The deps above are used in both server/src
and common/src
(clj and cljc files).
So every time you start a deps
REPL or a deps+figwheel
REPL, these deps will be loaded.
In the common/test/flybot/common/testsampledata.cljc namespace, we have sample data that can be loaded in both the backend and frontend dev systems.
This is made possible by reader conditionals clj/cljs.
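As a tiny illustration (the namespace and var below are hypothetical, not the actual sample-data file), reader conditionals let the same cljc code pick the right uuid constructor on each platform:

(ns flybot.common.sample-data-example) ;; hypothetical ns for illustration

(def post-1-id
  ;; the :clj branch runs on the JVM, the :cljs branch in the browser / React Native
  #?(:clj  (java.util.UUID/randomUUID)
     :cljs (random-uuid)))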
I use the calva
extension in VSCode to jack-in deps and figwheel REPLs but you can use Emacs if you prefer for instance.
What is important to remember is that, when you work on the backend only, you just need a deps
REPL. There is no need for figwheel since we do not modify the cljs content. So in this scenario, the frontend is fixed (the main.js is generated and not being reloaded) but the backend changes (the clj
files and cljc
files).
However, when you work on the frontend, you need to load the backend deps to have your server running, but you also need to recompile the js when a cljs file is saved. Therefore you need a deps+figwheel
REPL. So in this scenario, the backend is fixed and running but the frontend changes (the cljs
files and cljc
files)
You can see that the common cljc
files are being watched in both scenarios which makes sense since they "become" clj or cljs code depending on what REPL type you are currently working in.
Following are the aliases used for the server:
:jvm-base
: JVM options to make datalevin work with java version > java8:server/dev
: clj paths for the backend systems and tests:server/test
: Run clj testsFollowing is the alias used for both web and mobile clients:
:client
: deps for frontend libraries common to web and react native.The extra-paths contains the cljs
files.
We can note the client/common/src
path that contains most of the re-frame
logic because most subscriptions and events work on both web and react native right away!
The main differences between the re-frame logic for Reagent and Reagent Native have to do with how to deal with Navigation and oauth2 redirection. That is the reason we have most of the logic in a common dir in client
.
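To make this concrete, here is a rough sketch of what such namespaced aliases could look like in deps.edn (paths, versions and alias bodies are illustrative, not the actual flybot configuration):

{:aliases
 {:server/dev  {:extra-paths ["server/src" "server/test" "common/test"]}
  :server/test {:extra-paths ["server/test"]
                :main-opts   ["-m" "kaocha.runner"]} ;; test runner choice is an assumption
  :client      {:extra-paths ["client/common/src"]
                :extra-deps  {reagent/reagent   {:mvn/version "1.2.0"}
                              re-frame/re-frame {:mvn/version "1.3.0"}}}
  :web/dev     {:extra-paths ["client/web/src"]
                :main-opts   ["-m" "figwheel.main" "--build" "dev" "--repl"]}}}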
Following are the aliases used for the mobile client:
:mobile/rn
: contains the cljs deps only used for react native. They are added on top of the client deps.:mobile/ios
: starts the figwheel REPL to work on iOS.Following are the aliases used for the web client:
:web/dev
: starts the dev REPL:web/prod
: generates the optimized js bundle main.js:web/test
: runs the cljs tests:web/test-headless
: runs the headless cljs tests (for GitHub CI)Following is the alias used to build the js bundle or an uberjar:
:build
: clojure/tools.build is used to build the main.js and also an uberjar for local testing.The build.clj contains the different build functions:
clj -T:build js-bundle
clj -T:build uber
clj -T:build uber+js
Following is the alias used to build an image and push it to local docker or AWS ECR:
:jib
: build image and push to image repoFollowing is the alias used to point out outdated dependencies
:outdated
: prints the outdated deps and their latest available versionWe have not released the mobile app yet, which is why there are no aliases related to CD for react native yet.
This is one solution to handle server and clients in the same repo.
Feel free to consult the complete deps.edn content.
It is important to have a clear directory structure to only load required namespaces and avoid errors.
Using :extra-paths
and :extra-deps
in deps.edn is important because it prevents deploying unnecessary namespaces and libraries on the server and client.
Adding namespaces to the aliases makes the distinction between backend, common and client (web and mobile) clearer.
Using deps
jack-in for server-only work and
for frontend work is made easy using calva
in VSCode (it works in other editors as well).
flybot-sg/lasagna-pull by @robertluo aims at precisely selecting from deep data structures in Clojure.
In this document, I will show you the benefit of pull-pattern
in pulling nested data.
In Clojure, it is very common to have to precisely select data in nested maps. The Clojure core select-keys
and get-in
functions do not make it easy to select in deeper levels of the maps with custom filters or parameters.
One of the libraries of the lasagna-stack
is flybot-sg/lasagna-pull. It takes inspiration from the datomic pull API and the library redplanetlabs/specter.
lasagna-pull
aims at providing a clearer pattern than the datomic pull API.
It also allows the user to add options on the selected keys (filtering, providing params to values which are functions, etc). It supports fewer features than the specter
library but the syntax is more intuitive and covers all major use cases you might need to select the data you want.
Finally, a metosin/malli schema can be provided to perform data validation directly using the provided pattern. This allows the client to prevent unnecessary pulling if the pattern does not match the expected shape (such as not providing the right params to a function, querying the wrong type etc).
Selecting data in nested structure is made intuitive via a pattern that describes the data to be pulled following the shape of the data.
Here are some simple cases to showcase the syntax:
(require '[sg.flybot.pullable :as pull])
+
+((pull/query '{:a ? :b {:b1 ?}})
+ {:a 1 :b {:b1 2 :b2 3}})
+;=> {&? {:a 1, :b {:b1 2}}}
+
((pull/query '[{:a ? :b {:b1 ?}}])
+ [{:a 1 :b {:b1 2 :b2 3}}
+ {:a 2 :b {:b1 2 :b2 4}}])
+;=> {&? [{:a 1, :b {:b1 2}} {:a 2, :b {:b1 2}}]}
+
((pull/query '[{:a ?
+ :b [{:c ?}]}])
+ [{:a 1 :b [{:c 2}]}
+ {:a 11 :b [{:c 22}]}])
+;=> {&? [{:a 1, :b [{:c 2}]} {:a 11, :b [{:c 22}]}]}
+
Let’s compare datomic pull and lasagna pull query with a simple example:
(def sample-data
+ [{:a 1 :b {:b1 2 :b2 3}}
+ {:a 2 :b {:b1 2 :b2 4}}])
+
+(pull ?db
+ [:a {:b [:b1]}]
+ sample-data)
+
((pull/query '[{:a ? :b {:b1 ?}}])
+ sample-data)
+;=> {&? [{:a 1, :b {:b1 2}} {:a 2, :b {:b1 2}}]}
+
A few things to note
?
is just a placeholder on where the value will be after the pull.&?
.You might not want to fetch the whole path down to a leaf key, you might want to query that key and store it in a dedicated var. It is possible to do this by providing a var name after the placeholder ?
such as ?a
for instance. The key ?a
will then be added to the result map along side the &?
that contains the whole data structure.
Let’s have a look at an example.
Let’s say we want to fetch specific keys in addition to the whole data structure:
((pull/query '{:a ?a
+ :b {:b1 ?b1 :b2 ?}})
+ {:a 1 :b {:b1 2 :b2 3}})
+; => {&? {:a 1 :b {:b1 2 :b2 3}} ;; all nested data structure
+; ?a 1 ;; var a
+; ?b1 2 ;; var b1
+ }
+
The results now contain the logical variable we selected via ?a
and ?b1
. Note that the :b2
key has just a ?
placeholder so it does not appear in the results map keys.
It works also for sequences:
;; logical variable for a sequence
+((pull/query '{:a [{:b1 ?} ?b1]})
+ {:a [{:b1 1 :b2 2} {:b1 2} {}]})
+;=> {?b1 [{:b1 1} {:b1 2} {}]
+; &? {:a [{:b1 1} {:b1 2} {}]}}
+
Note that '{:a [{:b1 ?b1}]}
does not work because the logical value cannot be the same for all the b1
keys:
((pull/query '{:a [{:b1 ?b1}]})
+ {:a [{:b1 1 :b2 2} {:b1 2} {}]})
+;=> {&? {:a [{:b1 1} nil nil]}} ;; not your expected result
+
Most of the time, just selecting nested keys is not enough. We might want to select the key if certain conditions are met, or even pass a parameter if the value of the key is a function so we can run the function and get the value.
With a library like redplanetlabs/specter, you have different possible transformations using diverse macros, which is an efficient way to select/transform data. The downside is that it introduces yet another syntax to get familiar with.
lasagna-pull
supports most of the features at a key level.
Instead of just providing the key you want to pull in the pattern, you can provide a list with the key as first argument and the options as the rest of the list.
The transformation is done at the same time as the selection, the pattern can be enhanced with options:
((pull/query '{(:a :not-found ::not-found) ?}) {:b 5})
+;=> {&? {:a :user/not-found}}
+
((pull/query {(:a :when even?) '?}) {:a 5})
+;=> {&? {}} ;; empty because the value of :a is not even
+
If the value of a query is a function, using :with
option can invoke it and returns the result instead:
((pull/query '{(:a :with [5]) ?}) {:a #(* % 2)})
+;=> {&? {:a 10}} ;; the arg 5 was given to #(* % 2) and the result returned
+
Batched version of :with option:
((pull/query '{(:a :batch [[5] [7]]) ?}) {:a #(* % 2)})
+;=> {&? {:a (10 14)}}
+
Apply to sequence value of a query, useful for pagination:
((pull/query '[{:a ? :b ?} ? :seq [2 3]]) [{:a 0} {:a 1} {:a 2} {:a 3} {:a 4}])
+;=> {&? ({:a 2} {:a 3} {:a 4})}
+
As you can see with the different options above, the transformations are specified within the selected keys. Unlike specter however, we do not have a way to apply transformation to all the keys for instance.
We can optionally provide a metosin/malli schema to specify the shape of the data to be pulled.
The client malli schema provided is actually internally "merged" with an internal schema that checks the pattern shape, so both the pattern syntax and the pattern shape are validated.
You can provide a context to the query. You can provide a modifier
and a finalizer
.
This context can help you gather information from the query and apply a function to the results.
To see Lasagna Pull in action, refer to the doc Lasagna Pull applied to flybot.sg.
]]>This project is stored alongside the backend and the web frontend in the mono-repo: skydread1/flybot.sg
The codebase is a full-stack Clojure(Script) app. The backend is written in Clojure and the web and mobile clients are written in ClojureScript.
For the web app, we use reagent, a ClojureScript interface for React
.
For the mobile app, we use reagent-react-native, a ClojureScript interface for React Native
.
The mono-repo structure is as follows:
├── client
+│ ├── common
+│ │ ├── src
+│ │ │ └── flybot.client.common
+│ │ └── test
+│ │ └── flybot.client.common
+│ ├── mobile
+│ │ ├── src
+│ │ │ └── flybot.client.mobile
+│ │ └── test
+│ │ └── flybot.client.mobile
+│ └── web
+│ ├── src
+│ │ └── flybot.client.web
+│ └── test
+│ └── flybot.client.web
+├── common
+│ ├── src
+│ │ └── flybot.common
+│ └── test
+│ └── flybot.common
+├── server
+│ ├── src
+│ │ └── flybot.server
+│ └── test
+│ └── flybot.server
+
So far, the RN app has only been tested on iOS locally.
The goal was to have a mobile app targeting both iOS and Android, written in ClojureScript
, which can reuse most of our web frontend logic.
To do so, I used React Native
for the following reasons:
To get React Native working, you need to follow a few steps.
The setup steps are well described in the Figwheel doc.
The Figwheel doc has a dedicated section to install and setup NPM in a project. The best way to install npm is to use nvm.
To do mobile dev, some tools need to be installed and the react native doc has the instructions on how to prepare the environment.
The default Ruby version installed on MacOS is not enough to work with React Native. Actually, React Native needs a specific version of Ruby hence the use of a ruby version manager. I used rbenv.
~:brew install rbenv ruby-build
+
+~:rbenv -v
+rbenv 1.2.0
+
React Native uses this version of ruby so we need to download it.
# install proper ruby version
+~:rbenv install 2.7.6
+
+# set ruby version as default
+~:rbenv global 2.7.6
+
We also need to add these 2 lines to the .zshrc
export PATH="$HOME/.rbenv/bin:$PATH"
+eval "$(rbenv init -)"
+
Finally we make sure we have the correct version:
~:ruby -v
+ruby 2.7.6p219 (2022-04-12 revision c9c2245c0a) [arm64-darwin22]
+
From the doc:
Ruby's Bundler is a Ruby gem that helps manage the Ruby dependencies of your project. We need Ruby to install CocoaPods, and using Bundler will make sure that all the dependencies are aligned and that the project works properly.
# install the bundler
+~:gem install bundler
+Fetching bundler-2.4.5.gem
+Successfully installed bundler-2.4.5
+...
+
+# Check the location where gems are being installed
+~:gem env home
+/Users/loicblanchard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0
+
From the doc:
The easiest way to install
Xcode
is via the Mac App Store. Installing Xcode will also install the iOS Simulator and all the necessary tools to build your iOS app.
I downloaded it from the Mac App Store.
Xcode command line tools also need to be installed. They can be selected in Xcode→Settings→Locations.
~:xcode-select -p
+/Library/Developer/CommandLineTools
+
It should be already installed.
We can use npx
directly because it was shipped with npm
.
CocoaPods is required to manage the iOS native dependencies and we can install it using RubyGems:
sudo gem install cocoapods
+
+# check version
+~:gem which cocoapods
+/Users/loicblanchard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/cocoapods-1.11.3/lib/cocoapods.rb
+
In case of the error Multiple Profiles, we need to switch to the Xcode cli manually like so:
sudo xcode-select --switch /Applications/Xcode.app
+
We now should have all the tools installed to start a React Native project on Mac targeting iOS.
# setup project
+npx react-native init MyAwesomeProject
+
npx react-native run-ios
+
This should open a simulator with the welcome React Native display.
Add an alias to the deps.edn:
:cljs/ios {:main-opts ["--main" "figwheel.main"
+ "--build" "ios"
+ "--repl"]}
+
Note: We need to use cljs version 1.10.773
because the latest version causes this error which is hard to debug.
Also, we need to add the figwheel config for ios
in ios.cljs.edn
:
^{:react-native :cli
+ :watch-dirs ["client/mobile/src" "client/common/src"]}
+{:main flybot.client.mobile.core
+ :closure-defines {flybot.client.common.db.event/BASE-URI "http://localhost:9500"}}
+
And then we add the source files in the src folder like explained in the figwheel doc.
To run the project, we start the REPLs (clj and cljs) with the proper aliases and, in another terminal, we run npm ios to start the Xcode simulator.
For more details regarding the aliases: have a look at the README
If we want to add an npm package, we need 2 steps:
npm i my-npm-package
+cd ios
+pod install
+cd ..
+
In case of the error RNSScreenStackHeaderConfig, we need to:
npm i react-native-gesture-handler
+cd ios
+pod install
+cd ..
+
+# We restart the simulator and the error should be gone
+
Regarding the http requests made by the re-frame fx http-xhrio, they should work right away, same as for the web; we just need to manually pass the cookie in the headers as RN does not manage cookies for us like the browser does.
Passing the cookie in the request was quite straightforward, I just added :headers {:cookie my-cookie}
to the :http-xhrio
fx for all the requests that require a session for the mobile app.
I use react-native-markdown-package
npm i react-native-markdown-package --save
+
On iOS, I had to add the fonts in the info.plist
like so:
<key>UIAppFonts</key>
+ <array>
+ <string>AntDesign.ttf</string>
+ <string>Entypo.ttf</string>
+ <string>EvilIcons.ttf</string>
+ <string>Feather.ttf</string>
+ <string>FontAwesome.ttf</string>
+ <string>FontAwesome5_Brands.ttf</string>
+ <string>FontAwesome5_Regular.ttf</string>
+ <string>FontAwesome5_Solid.ttf</string>
+ <string>Foundation.ttf</string>
+ <string>Ionicons.ttf</string>
+ <string>MaterialIcons.ttf</string>
+ <string>MaterialCommunityIcons.ttf</string>
+ <string>SimpleLineIcons.ttf</string>
+ <string>Octicons.ttf</string>
+ <string>Zocial.ttf</string>
+ </array>
+
As for now we have 2 Navigators: a Tab Navigator and a Stack Navigator.
- Tab Navigator:
  - login screen
  - blog screen: Stack Navigator
    - post-lists screen
    - post-read screen
    - post-edit screen
    - preview screen
So the Stack Navigator is inside the Tab Navigator blog screen.
For the navigation, we can use re-frame
dispatch to change the navigation object ref to the new route.
Since we are using re-frame, we might not be able to access props.navigation.navigate
.
However, we could store a reference to the navigation object in our re-frame DB so we can Navigate without the navigation prop.
Therefore, just using re-frame/dispatch
to store the navigation ref to the re-frame/db
and use re-frame/subscribe
to get the ref (and so the nav params) is enough to handle navigation in our case. Thus, we do not use the props at all.
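As a minimal sketch of this approach (the event and fx names are hypothetical, not the exact ones from the repo), storing the navigation ref once and navigating through a re-frame effect could look like this:
(ns flybot.client.mobile.nav-sketch
  (:require [re-frame.core :as rf]))

;; Store the navigation object ref once, when the root navigator is ready.
(rf/reg-event-db
 :evt.nav/set-ref
 (fn [db [_ nav-ref]]
   (assoc db :nav/ref nav-ref)))

;; Impure fx: call the navigate method of the stored navigation object.
(rf/reg-fx
 :fx.nav/navigate
 (fn [[nav-ref route]]
   (when nav-ref
     (.navigate nav-ref (name route)))))

;; Pure event: navigate to a route using the ref stored in the re-frame db.
(rf/reg-event-fx
 :evt.nav/navigate
 (fn [{:keys [db]} [_ route]]
   {:fx [[:fx.nav/navigate [(:nav/ref db) route]]]}))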
Regarding the hot reloading, the only way I found is to store the js state and navigation objects in atoms via defonce
so we can remain on the same screen with same params as before the reload.
Note: Maybe I could use the AsyncStorage instead of the atoms even though it is only for dev purposes.
One of the env variables we need to define is for the uri
. For the web app, we can use relative path such as /posts/all
but on mobile, there is no such thing as a relative path and we would need to pass an absolute path such as http://localhost:9500/posts/all
for instance in our case.
Therefore, we need to have some config to pass to the cljs build. It is possible to do so via the compiler option :closure-defines.
:closure-defines
is a ClojureScript compiler option that allows you to specify key-value pairs to be passed as JavaScript defines to the Google Closure Compiler. These defines can be used to configure the build: in the ClojureScript code we declare a var with goog-define and a default value, and the value supplied in :closure-defines overrides that default at compile time.
Luckily, figwheel allows us to set up the closure-defines in the config files.
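For illustration, here is a minimal sketch of how such a define can be consumed on the ClojureScript side, assuming the BASE-URI var referenced in the ios.cljs.edn config above is declared with goog-define in the flybot.client.common.db.event namespace (the base-uri helper is an assumption of how it could be used):
(ns flybot.client.common.db.event)

;; Default value, overridden at compile time by the :closure-defines entry.
(goog-define BASE-URI "http://localhost:3000")

(defn base-uri
  "Builds an absolute uri from a relative path, e.g. (base-uri \"/posts/all\")."
  [path]
  (str BASE-URI path))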
I redirect the request back to an intermediate endpoint that will directly fetch the user info and create a ring-session that contains the google tokens, the user-name and user-permissions. Then ring encrypts that for us and puts that ring-session
in a cookie that is sent to the client.
Thus, my clients only receive this ring-session id that will be passed to every request made (automatic for browser, manually added to request for mobile).
When the user logs out, ring still passes a ring-session
but it will be nil once decrypted by the server.
To go back to the app after OAuth2.0 success, I had to add the following scheme to the Info.plist
for iOS:
<key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>flybot-app</string>
+ </array>
+ </dict>
+ </array>
+
Also, in ios/AppDelegate.mm
, I added:
#import <React/RCTLinkingManager.h>
+
+/// listen to incoming app links during your app's execution
+- (BOOL)application:(UIApplication *)application
+ openURL:(NSURL *)url
+ options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options
+{
+ return [RCTLinkingManager application:application openURL:url options:options];
+}
+
I store the cookie in async-storage for this because it is enough for our simple use case.
npm install @react-native-async-storage/async-storage
+
Once the ring-session
cookie is received from the server, a re-frame dispatch is triggered to set a cookie named ring-session
in the device AsyncStorage. This event also updates the re-frame db value of :user/cookie
.
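A minimal sketch of such an event (the names are hypothetical; async-storage refers to the same wrapper used in the snippet further below, and set-item is assumed to exist alongside its get-item counterpart):
;; Impure fx: persist the cookie in the device AsyncStorage.
(rf/reg-fx
 :fx.app/set-cookie-async-store
 (fn [[k v]]
   (async-storage/set-item k v)))

;; Pure event: triggered when the server response contains the ring-session cookie.
(rf/reg-event-fx
 :evt.cookie/set
 (fn [{:keys [db]} [_ cookie-value]]
   {:db (assoc db :user/cookie cookie-value)
    :fx [[:fx.app/set-cookie-async-store ["ring-session" cookie-value]]]}))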
One of the issues with AsyncStorage is that it returns a Promise
. Therefore, we cannot access the value directly but only do something in the .then
method. So, once the Promise is resolved, in the .then, we re-frame/dispatch
an event that will update the re-frame/db.
The Promises that get or set a cookie in storage, being side effects, are performed in a re-frame reg-fx. These reg-fx are triggered from reg-event-fx events. We want to respect the principle: reg-fx for performing side effects and reg-event-fx for pure event handling.
We want to be sure the cookie is pulled from AsyncStorage before the db is initialised and all the posts and the user pulled. However, we cannot just dispatch the event to pull the cookie from AsyncStorage (returns a Promise that will then dispatch another event to update re-frame/db), and then dispatch the event to get all the posts from the server because there is no guarantee the cookie will be set before the request is made.
The solution is to dispatch the initialisation event inside the event from the Promise like so:
;; setup all db param and do get request to get posts, pages and user using cookie
+(rf/reg-event-fx
+ :evt.app/initialize
+ (fn [{:keys [db]} _]
+ {:db (assoc db ...)
+ :http-xhrio {:method :post
+ :uri (base-uri "/pages/all")
+ :headers {:cookie (:user/cookie db)}
+ :params ...
+ :format (edn-request-format {:keywords? true})
+ :response-format (edn-response-format {:keywords? true})
+ :on-success [:fx.http/all-success]
+ :on-failure [:fx.http/failure]}}))
+
+;; Impure fx to fetch cookie from storage and dispatch new event to update db
+(rf/reg-fx ;; 2)
+ :fx.app/get-cookie-async-store
+ (fn [k]
+ (-> (async-storage/get-item k) ;; Promise
+ (.then #(rf/dispatch [:evt.cookie/get %])))))
+
+;; Pure event triggered at the start of the app
+(rf/reg-event-fx ;; 1)
+ :evt.app/initialize-with-cookie
+ (fn [_ [_ cookie-name]]
+ {:fx [[:fx.app/get-cookie-async-store cookie-name]]}))
+
+;; Pure event triggered by :fx.app/get-cookie-async-store
+(rf/reg-event-fx ;; 3)
+ :evt.cookie/get
+ (fn [{:keys [db]} [_ cookie-value]]
+ {:db (assoc db :user/cookie cookie-value)
+ :fx [[:dispatch [:evt.app/initialize]]]}))
+
As for now, the styling is directly done in the :style keys of the RN components' hiccups. Some more complex components take their styling through functions or through props other than the :style keyword.
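For illustration, a minimal sketch of such inline styling in a reagent hiccup (the component and namespace names are only illustrative):
(ns flybot.client.mobile.style-sketch
  (:require ["react-native" :as rn]))

(defn post-title
  "Simple RN view styled via the :style key."
  [title]
  [:> rn/View {:style {:padding 10 :background-color "#fff"}}
   [:> rn/Text {:style {:font-size 18 :font-weight "bold"}} title]])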
I hope that this unusual mobile app stack made you want to consider ClojureScript
as a good alternative to build mobile apps.
It is important to note that the state management logic (re-frame) is about 90% the same for both the web app and the mobile app, which is very convenient.
Finally, the web app is deployed but not the mobile app. All the codebase is open-source so feel free to take inspiration.
]]>Note: the steps for packaging the code into a NuGet package, pushing it to a remote GitHub repo and fetching it in Unity are highlighted in another article.
Magic is a bootstrapped compiler written in Clojure that takes Clojure code as input and produces .NET assemblies (.dll) as output.
Compiler Bootstrapping is the technique for producing a self-compiling compiler that is written in the same language it intends to compile. In our case, MAGIC is a Clojure compiler that compiles Clojure code to .NET assemblies (.dll and .exe files).
It means we need the old dlls of MAGIC to generate the new dlls of the MAGIC compiler. We repeat this process until the compiler is good enough.
The very first magic dlls were generated with the clojure/clojure-clr project which is also a Clojure compiler to CLR but written in C# with limitations over the dlls generated (the problem MAGIC is intended to solve).
There was already an existing Clojure→CLR compiler: clojure/clojure-clr. However, clojure-clr uses a technology called the DLR (dynamic language runtime) to optimize dynamic call sites, but it emits self-modifying code which makes the assemblies unusable on mobile devices (IL2CPP in Unity). So we needed a compiler that emits assemblies that can target both desktop and mobile (IL2CPP), hence the Magic compiler.
We don’t want separate branches for JVM and CLR so we use reader conditionals.
You can find how to use the reader conditionals in this guide.
You will mainly need them for the require
and import
as well as the function parameters.
Don’t forget to change the extension of your file from .clj
to .cljc
.
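For example, a minimal sketch of a .cljc namespace using reader conditionals for the import and the interop (the classes are only illustrative):
(ns my-proj.time-utils
  #?(:clj  (:import [java.time Instant])
     :cljr (:import [System DateTime])))

(defn now
  "Returns a platform-specific timestamp object."
  []
  #?(:clj  (Instant/now)
     :cljr DateTime/Now))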
In Emacs
(with spacemacs
distribution), you might encounter some lint issues if you are using reader conditionals and some configuration might be needed.
The Clojure linter library clj-kondo/clj-kondo supports the reader conditionals.
All the instruction on how to integrate it to the editor you prefer here.
To use clj-kondo with syl20bnr/spacemacs, you need the layer borkdude/flycheck-clj-kondo.
However, there is no way to add configuration in the .spacemacs
config file.
The problem is that we need to set :clj
as the default language to be checked.
In VScode
I did not need any config to make it work.
Setting a default reader for the linter has nothing to do with the :default
reader conditional key such as:
#?(:clj (Clojure expression)
+ :cljs (ClojureScript expression)
+ :cljr (Clojure CLR expression)
+ :default (fallthrough expression))
+
In the code above, the :default reader is used if none of the other readers matches the platform the code is run on. There is no need to add the :default tag everywhere as the code will only ever run in 2 environments: :clj and :cljr.
For our linter, in your Clojure environment (in the case of Emacs with the syl20bnr/spacemacs distribution), you can highlight the code for the :clj
reader only.
The :cljr
code will be displayed as comments.
To add the default :clj
reader, we need to add it in the config file : ~/.config/clj-kondo/config.edn
(to affect all our repos). It is possible to add config at project level as well as stated here.
Here is the config to setup :clj
as default reader:
{:cljc {:features #{:clj}}}
+
If you don't specify a default reader, clj-kondo will trigger lots of errors when the :default reader is not provided, because it assumes that you might run the code on a platform that doesn't match any of the provided readers.
Magic supports the same shorthands as in Clojure: Magic types shorthands.
We want to add Magic type hints in our Clojure code to prevent slow argument boxing at run time.
The main place we want to add the type hints are the function arguments such as in:
(defn straights-n
+ "Returns all possible straights with given length of cards."
+ [n cards wheel?]
+ #?(:clj [n cards wheel?]
+ :cljr [^int n cards ^Boolean wheel?])
+ (...))
+
Note the reader conditionals here so we do not affect our Clojure code and the tests that run on the JVM.
I did not remove the reader conditionals (even though the shorthands are the same in both Clojure and Magic, so it would run), because we don't want our Clojure tests to be affected and we want to keep the dynamic idiom of Clojure. Also, wheel? could very likely have the value nil, passed by one of the tests, which is in fact not a boolean.
So we want to keep our type hints in the :cljr
reader to prevent Magic from doing slow reflection but we don’t want to affect our :clj
reader that must remain dynamic and so type free to not alter our tests.
One of the best benefits of type hinting for Magic is to type hint records and their fields.
Here is an example of a record fields type hinting:
(defrecord GameState #?(:clj [players next-pos game-over?]
+ :cljr [players ^long next-pos ^boolean game-over?])
+(...))
+
As you can see, not all fields are type hinted because for some, we don’t have a way to do so.
There is no way to type hint a collection parameter in Magic.
players
is a vector of Player
records. We don't have a way to type hint such a type. Actually, we don't have a way to type hint a collection in Magic. In Clojure (Java), we can type hint a collection of a known type such as:
;; Clojure file
+user> (defn f
+ "`poker-cards` is a vector of `PokerCard`."
+ [^"[Lmyproj.PokerCard;" poker-cards]
+ (map :num poker-cards))
+;=> #'myproj.combination/f
+
+;; Clojure REPL
+user> (f [(->PokerCard :d :3) (->PokerCard :c :4)])
+;=> (:3 :4)
+
However, in Magic, such thing is not possible.
Parameters which are maps do not benefit much from type hinting because a map could be a PersistentArrayMap, a PersistentHashMap or even a PersistentTreeMap, so we would have to use ^clojure.lang.APersistentMap just to be generic, which is not really helpful.
To type hint a record as parameter, it is advised to import
it first to avoid having to write the fully qualified namespace:
;; Import the Combination class so we can use type hint format ^Combination
+#?(:cljr (:import [myproj.combination Combination]))
+
Then we can type hint a parameter which is a record conveniently such as:
(defn pass?
+ "Returns true it the combi is a pass."
+ #?(:clj [combi]
+ :cljr [^Combination combi])
+ (combi/empty-combi? combi))
+
A record field can also be a known record type such as:
(defrecord Player #?(:clj [combi penalty?]
+ :cljr [^Combination combi
+ ^boolean penalty?]))
+
Since in Clojure we tend to use simplified parameters in our functions to isolate the logic being tested (a map instead of a record, nil instead of false, a namespaced keyword instead of a map etc.), naturally lots of tests will fail in the CLR because of the type hints.
We don't want to change our test suite to use domain types, so we can just add reader conditionals to the tests affected by the type hints in the CLR.
For interop, you can use the reader conditionals such as in:
(defn round-perc
+ "Rounds the given `number`."
+ [number]
+ #?(:clj (-> number double Math/round)
+ :cljr (-> number double Math/Round long)))
+
For the deftype
to work in the CLR, we need to override different equals methods than the Java ones. In Java we use hashCode
and equals
but in .NET we use hasheq
and equiv
.
Here is an example on how to override such methods:
(deftype MyRecord [f-conj m rm]
+ ;; Override equals method to compare two MyRecord.
+ #?@(:clj
+ [Object
+ (hashCode [_] (.hashCode m))
+ (equals [_ other]
+ (and (instance? MyRecord other) (= m (.m other))))]
+ :cljr
+ [clojure.lang.IHashEq
+ (hasheq [_] (hash m))
+ clojure.lang.IPersistentCollection
+ (equiv [_ other]
+ (and (instance? MyRecord other) (= m (.m other))))]))
+
For the defrecord
to work in case we target IL2CPP (all our apps), you need to override the default implementation of the empty
method such as:
(defrecord PokerCard [^clojure.lang.Keyword suit ^clojure.lang.Keyword num]
+ #?@(:cljr
+ [clojure.lang.IPersistentCollection
+ (empty [_] nil)]))
+
Note the vector required with the splicing reader conditional #?@
.
Since magic was created before tools.deps
or leiningen
, it has its own deps management system and the dedicated file for it is project.edn
.
Here is an example of a project.edn:
{:name "My project"
+ :source-paths ["src" "test"]
+ :dependencies [[:github skydread1/clr.test.check "magic"
+ :sha "a23fe55e8b51f574a63d6b904e1f1299700153ed"
+ :paths ["src"]]
+ [:gitlab my-private-lib1 "master"
+ :paths ["src"]
+ :sha "791ef67978796aadb9f7aa62fe24180a23480625"
+ :token "r7TM52xnByEbL6mfXx2x"
+ :domain "my.domain.sg"
+ :project-id "777"]]}
+
Refer to the Nostrand README for more details.
So you need to add a project.edn
at the root of your directory with other libraries.
nasser/nostrand is for magic what tools.deps or leiningen are for a regular Clojure project. Magic has its own dependency manager and does not use tools.deps or lein because it was implemented before these deps managers came out!
You can find all the information you need to build and test your libraries in dotnet in the README.
In short, you need to clone nostrand and create a dedicated Clojure namespace at the root of your project to run function with Nostrand.
In my case I named my nostrand namespace dotnet.clj
.
You can have a look at the clr.test.check/dotnet.clj, it is a port of clojure/test.check that compiles in both JVM and CLR.
We have the following require:
(:require [clojure.test :refer [run-all-tests]]
+ [magic.flags :as mflags])
+
Don’t forget to set the 2 magic flags to true:
(defn build
+ "Compiles the project to dlls.
+ This function is used by `nostrand` and is called from the terminal in the root folder as:
+ nos dotnet/build"
+ []
+ (binding [*compile-path* "build"
+ *unchecked-math* *warn-on-reflection*
+ mflags/*strongly-typed-invokes* true
+ mflags/*direct-linking* true
+ mflags/*elide-meta* false]
+ (println "Compile into DLL To : " *compile-path*)
+ (doseq [ns prod-namespaces]
+ (println (str "Compiling " ns))
+ (compile ns))))
+
To build to the *compile-path*
folder, just run the nos
command at the root of your project:
nos dotnet/build
+
Same remark as for the build section:
(defn run-tests
+ "Run all the tests on the CLR.
+ This function is used by `nostrand` and is called from the terminal in the root folder as:
+ nos dotnet/run-tests"
+ []
+ (binding [*unchecked-math* *warn-on-reflection*
+ mflags/*strongly-typed-invokes* true
+ mflags/*direct-linking* true
+ mflags/*elide-meta* false]
+ (doseq [ns (concat prod-namespaces test-namespaces)]
+ (require ns))
+ (run-all-tests)))
+
To run the tests, just run the nos
command at the root of your project:
nos dotnet/run-tests
+
An example of a Clojure library that has been ported to Magic is skydread1/clr.test.check, a fork of clojure/clr.test.check. My fork uses reader conditionals so it can be run and tested in both JVM and CLR.
Now that your library is compiled to dotnet, you can learn how to package it to nuget, push it in to your host repo and import in Unity in this article:
]]>This article introduces effective testing libraries and methods for those new to Clojure.
We'll explore using the kaocha test runner in both REPL and terminal, along with configurations to enhance feedback. Then we will explain how tests as documentation can be done using rich-comment-tests.
We will touch on how to do data validation, generation and instrumentation using malli.
Finally, I will talk about how I manage integrations tests with eventual external services involved.
First of all, always remember that it is important to have as many pure functions as possible. It means, the same input passed to a function always returns the same output. This will simplify the testing and make your code more robust.
Here is an example of unpredictable impure logic:
(defn fib
+ "Read the Fibonacci list length to be returned from a file,
+ Return the Fibonacci sequence."
+ [variable]
+ (when-let [n (-> (slurp "config/env.edn") edn/read-string (get variable) :length)]
+ (->> (iterate (fn [[a b]] [b (+' a b)])
+ [0 1])
+ (map first)
+ (take n))))
+
+(comment
+ ;; env.edn has the content {:FIB {:length 10}}
+ (fib :FIB) ;=> (0 1 1 2 3 5 8 13 21 34)
+ ;; env.edn is empty
+ (fib :FIB) ;=> nil
+ )
+
For instance, reading the length
value from a file before computing the Fibonacci sequence is unpredictable for several reasons: the file might be missing or empty, its content can change between runs, and the value read can simply be nil.
We would need to test too many cases unrelated to the Fibonacci logic itself, which is bad practice.
The solution is to isolate the impure code:
(defn fib
+ "Return the Fibonacci sequence with a lenght of `n`."
+ [n]
+ (->> (iterate (fn [[a b]] [b (+' a b)])
+ [0 1])
+ (map first)
+ (take n)))
+
+^:rct/test
+(comment
+ (fib 10) ;=> [0 1 1 2 3 5 8 13 21 34]
+ (fib 0) ;=> []
+ )
+
+(defn config<-file
+ "Reads the `config/env.edn` file, gets the value of the given key `variable`
+ and returns it as clojure data."
+ [variable]
+ (-> (slurp "config/env.edn") edn/read-string (get variable)))
+
+(comment
+ ;; env.edn contains :FIB key with value {:length 10}
+ (config<-file :FIB) ;=> {:length 10}
+ ;; env.edn is empty
+ (config<-file :FIB) ;=> {:length nil}
+ )
+
The fib
function is now pure and the same input will always yield the same output. I can therefore write my unit tests and be confident of the result. You might have noticed I added ^:rct/test
above the comment block which is actually a unit test that can be run with RCT (more on this later).
The impure code is isolated in the config<-file
function, which handles reading the environment variable from a file.
This may seem basic, but it's the essential first step in testing: ensuring the code is as pure as possible for easier testing is one of the strengths of data-oriented programming!
For all my personal and professional projects, I have used kaocha as a test-runner.
There are 2 main ways to run the tests that developers commonly use: in the REPL during development, and in the terminal (locally or in the CI).
Here is the deps.edn
I will use in this example:
{:deps {org.clojure/clojure {:mvn/version "1.11.3"}
+ org.slf4j/slf4j-nop {:mvn/version "2.0.15"}
+ metosin/malli {:mvn/version "0.16.1"}}
+ :paths ["src"]
+ :aliases
+ {:dev {:extra-paths ["config" "test" "dev"]
+ :extra-deps {io.github.robertluo/rich-comment-tests {:git/tag "v1.1.1", :git/sha "3f65ecb"}}}
+ :test {:extra-paths ["test"]
+ :extra-deps {lambdaisland/kaocha {:mvn/version "1.91.1392"}
+ lambdaisland/kaocha-cloverage {:mvn/version "1.1.89"}}
+ :main-opts ["-m" "kaocha.runner"]}
+ :jib {:paths ["jibbit" "src"]
+ :deps {io.github.atomisthq/jibbit {:git/url "https://github.com/skydread1/jibbit.git"
+ :git/sha "bd873e028c031dbbcb95fe3f64ff51a305f75b54"}}
+ :ns-default jibbit.core
+ :ns-aliases {jib jibbit.core}}
+ :outdated {:deps {com.github.liquidz/antq {:mvn/version "RELEASE"}}
+ :main-opts ["-m" "antq.core"]}
+ :cljfmt {:deps {io.github.weavejester/cljfmt {:git/tag "0.12.0", :git/sha "434408f"}}
+ :ns-default cljfmt.tool}}}
+
Regarding the bindings to run the tests from the REPL, refer to your IDE documentation. I have experience using both Emacs (spacemacs distribution) and VSCode and running my tests was always straightforward. If you are starting to learn Clojure, I recommend using VSCode, as the Clojure extension calva is of very good quality and well documented. I'll use VSCode in the following example.
Let’s say we have the following test namespace:
(ns my-app.core.fib-test
+ (:require [clojure.test :refer [deftest is testing]]
+ [my-app.core :as sut]))
+
+(deftest fib-test
+ (testing "The Fib sequence is returned."
+ (is (= [0 1 1 2 3 5 8 13 21 34]
+ (sut/fib 10)))))
+
After I jack-in using my dev alias from the deps.edn file, I can load the my-app.core.fib-test namespace and run the tests. Using Calva, the flow will be like this:
- jack-in (selecting the dev alias in my case)
- load the file (in the fib-test namespace): load the ns in the REPL
- run the namespace tests (in the fib-test namespace): run the tests
In the REPL, we see:
clj꞉user꞉>
+; Evaluating file: fib_test.clj
+#'my-app.core.fib-test/system-test
+clj꞉my-app.core.fib-test꞉>
+; Running tests for the following namespaces:
+; my-app.core.fib-test
+; my-app.core.fib
+
+; 1 tests finished, all passing 👍, ns: 1, vars: 1
+
Before committing code, it's crucial to run all project tests to ensure new changes haven't broken existing functionalities.
I added a few other namespaces and some tests.
Let’s run all the tests in the terminal:
clj -M:dev:test
+Loading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)
+Test namespaces: (:system :unit)
+Instrumented my-app.core.cfg
+Instrumented my-app.core.env
+Instrumented my-app.core.fib
+Instrumented my-app.core
+Instrumented 4 namespaces in 0.4 seconds.
+malli: instrumented 1 function vars
+malli: dev-mode started
+[(.)][(()(..)(..)(..))(.)(.)]
+4 tests, 9 assertions, 0 failures.
+
Note the Test namespaces: (:system :unit)
. By default, Kaocha runs all tests. When no metadata is specified on the deftest
, it is considered in the Kaocha :unit
group. However, as the project grows, we might have slower tests that are system tests, load tests, stress tests etc. We can add metadata to their deftest
in order to group them together. For instance:
(ns my-app.core-test
+ (:require [clojure.test :refer [deftest is testing]]
+ [malli.dev :as dev]
+ [malli.dev.pretty :as pretty]
+ [my-app.core :as sut]))
+
+(dev/start! {:report (pretty/reporter)})
+
+(deftest ^:system system-test ;; metadata to add this test in the `system` kaocha test group
+ (testing "The Fib sequence is returned."
+ (is (= [0 1 1 2 3 5 8 13 21 34]
+ (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}
+ :fib #:fib{:length 10}})))))
+
We need to tell Kaocha when and how to run the system test. Kaocha configurations are provided in a tests.edn
file:
#kaocha/v1
+ {:tests [{:id :system :focus-meta [:system]} ;; only system tests
+ {:id :unit}]} ;; all tests
+
Then in the terminal:
clj -M:dev:test --focus :system
+malli: instrumented 1 function vars
+malli: dev-mode started
+[(.)]
+1 tests, 1 assertions, 0 failures.
+
We can add a bunch of metrics on top of the tests results. These metrics can be added via the :plugins
keys:
#kaocha/v1
+ {:tests [{:id :system :focus-meta [:system]}
+ {:id :unit}]
+ :plugins [:kaocha.plugin/profiling
+ :kaocha.plugin/cloverage]}
+
If I run the tests again:
clj -M:dev:test --focus :system
+Loading namespaces: (my-app.core.cfg my-app.core.env my-app.core.fib my-app.core)
+Test namespaces: (:system :unit)
+Instrumented my-app.core.cfg
+Instrumented my-app.core.env
+Instrumented my-app.core.fib
+Instrumented my-app.core
+Instrumented 4 namespaces in 0.4 seconds.
+malli: instrumented 1 function vars
+malli: dev-mode started
+[(.)]
+1 tests, 1 assertions, 0 failures.
+
+Top 1 slowest kaocha.type/clojure.test (0.02208 seconds, 97.0% of total time)
+ system
+ 0.02208 seconds average (0.02208 seconds / 1 tests)
+
+Top 1 slowest kaocha.type/ns (0.01914 seconds, 84.1% of total time)
+ my-app.core-test
+ 0.01914 seconds average (0.01914 seconds / 1 tests)
+
+Top 1 slowest kaocha.type/var (0.01619 seconds, 71.1% of total time)
+ my-app.core-test/system-test
+ 0.01619 seconds my_app/core_test.clj:9
+Ran tests.
+Writing HTML report to: /Users/loicblanchard/workspaces/clojure-proj-template/target/coverage/index.html
+
+|-----------------+---------+---------|
+| Namespace | % Forms | % Lines |
+|-----------------+---------+---------|
+| my-app.core | 44.44 | 62.50 |
+| my-app.core.cfg | 69.57 | 74.07 |
+| my-app.core.env | 11.11 | 44.44 |
+| my-app.core.fib | 100.00 | 100.00 |
+|-----------------+---------+---------|
+| ALL FILES | 55.26 | 70.59 |
+|-----------------+---------+---------|
+
There are a bunch of options to enhance the development experience such as:
clj -M:dev:test --watch --fail-fast
+
- the watch mode makes Kaocha rerun the tests on file save
- the fail-fast option makes Kaocha stop running the tests when it encounters a failing test
These 2 options are very convenient for unit testing.
However, when a code base contains slower tests, if the slower tests are run first, the watch mode is not so convenient because it won’t provide instant feedback.
We saw that we can focus
on tests with a specific metadata tag, we can also skip
tests. Let’s pretend our system
test is slow and we want to skip it to only run unit tests:
clj -M:dev:test --watch --fail-fast --skip-meta :system
+
Finally, I don’t want to use the plugins
(profiling and code coverage) in watch mode as they clutter the terminal output, so I want to exclude them from the report.
We can actually create another kaocha config file for our watch mode.
tests-watch.edn
:
#kaocha/v1
+ {:tests [{:id :unit-watch :skip-meta [:system]}] ;; ignore system tests
+ :watch? true ;; watch mode on
+ :fail-fast? true} ;; stop running on first failure
+
Notice that there is no plugins anymore, and watch mode and fail fast options are enabled. Also, the system
tests are skipped.
clj -M:dev:test --config-file tests_watch.edn
+SLF4J(I): Connected with provider of type [org.slf4j.nop.NOPServiceProvider]
+malli: instrumented 1 function vars
+malli: dev-mode started
+[(.)(()(..)(..)(..))]
+2 tests, 7 assertions, 0 failures.
+
We can now leave the terminal always on, change a file and save it and the tests will be rerun using all the options mentioned above.
Another approach to unit testing is to enhance the comment
blocks to contain tests. This means that we don’t need a test file, we can just write our tests right below our functions and it serves as both documentation and unit tests.
Going back to our first example:
(ns my-app.core.fib)
+
+(defn fib
+ "Return the Fibonacci sequence with a lenght of `n`."
+ [n]
+ (->> (iterate (fn [[a b]] [b (+' a b)])
+ [0 1])
+ (map first)
+ (take n)))
+
+^:rct/test
+(comment
+ (fib 10) ;=> [0 1 1 2 3 5 8 13 21 34]
+ (fib 0) ;=> []
+ )
+
The comment
block showcases examples of what the fib
could return given some inputs and the values after ;=>
are actually verified when the tests are run.
We just need to evaluate (com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)
in the namespace we want to test:
clj꞉my-app.core-test꞉>
+; Evaluating file: fib.clj
+nil
+clj꞉my-app.core.fib꞉>
+(com.mjdowney.rich-comment-tests/run-ns-tests! *ns*)
+;
+; Testing my-app.core.fib
+;
+; Ran 1 tests containing 2 assertions.
+; 0 failures, 0 errors.
+{:test 1, :pass 2, :fail 0, :error 0}
+
You might wonder how to run all the RC Tests of the project. Actually, we already did that, when we ran Kaocha unit tests in the terminal.
This is possible by wrapping the RC Tests in a deftest like so:
(ns my-app.rc-test
+ "Rich Comment tests"
+ (:require [clojure.test :refer [deftest testing]]
+ [com.mjdowney.rich-comment-tests.test-runner :as rctr]))
+
+(deftest ^rct rich-comment-tests
+ (testing "all white box small tests"
+ (rctr/run-tests-in-file-tree! :dirs #{"src"})))
+
And if we want to run just the rct
tests, we can focus on the metadata (see the metadata in the deftest above).
clj -M:dev:test --focus-meta :rct
+
It is possible to run the RC Tests without using Kaocha of course, refer to their doc for that.
I personally use a mix of both. When the function is not too complex and internal (not supposed to be called by the client), I would use RCT.
For system tests, which often inevitably involve side effects, I have a dedicated test namespace. Using fixtures is often handy there, and these tests are way more verbose, which would have polluted the src namespaces if written in a comment block.
In the short example I used in this article, the project tree is as follow:
├── README.md
+├── config
+│ └── env.edn
+├── deps.edn
+├── dev
+│ └── user.clj
+├── jib.edn
+├── project.edn
+├── src
+│ └── my_app
+│ ├── core
+│ │ ├── cfg.clj
+│ │ ├── env.clj
+│ │ └── fib.clj
+│ └── core.clj
+├── test
+│ └── my_app
+│ ├── core_test.clj
+│ └── rc_test.clj
+├── tests.edn
+└── tests_watch.edn
+
cfg.clj
, env.clj
and fib.clj
have RCT and core_test.clj
has regular deftest.
A rule of thumb could be: use regular deftest if the tests require at least one of the following: fixtures, redefinitions of side-effectful functions, or verbose setup that would pollute the source namespace.
When the implementation is easy to test, using RCT is good for a combo doc+test.
There are 2 main libraries I personally used for data validation and generative testing: clojure/spec.alpha and malli. I will not explain in detail how both work because that could be a whole article on its own. However, you can guess which one I used in my example project as you might have noticed the instrumentation
logs when I ran the Kaocha tests: Malli.
Here is the config namespace that is responsible to validate the env variables passed to our hypothetical app:
(ns my-app.core.cfg
+ (:require [malli.core :as m]
+ [malli.registry :as mr]
+ [malli.util :as mu]))
+
+;; ---------- Schema Registry ----------
+
+(def domain-registry
+ "Registry for malli schemas."
+ {::app
+ [:map {:closed true}
+ [:app/name :string]
+ [:app/version :string]]
+ ::fib
+ [:map {:closed true}
+ [:fib/length :int]]})
+
+;; ---------- Validation ----------
+
+(mr/set-default-registry!
+ (mr/composite-registry
+ (m/default-schemas)
+ (mu/schemas)
+ domain-registry))
+
+(def cfg-sch
+ [:map {:closed true}
+ [:cfg/app ::app]
+ [:cfg/fib ::fib]])
+
+(defn validate
+ "Validates the given `data` against the given `schema`.
+ If the validation passes, returns the data.
+ Else, returns the error data."
+ [data schema]
+ (let [validator (m/validator schema)]
+ (if (validator data)
+ data
+ (throw
+ (ex-info "Invalid Configs Provided"
+ (m/explain schema data))))))
+
+(defn validate-cfg
+ [cfg]
+ (validate cfg cfg-sch))
+
+^:rct/test
+(comment
+ (def cfg #:cfg{:app #:app{:name "my-app"
+ :version "1.0.0-RC1"}
+ :fib #:fib{:length 10}})
+
+ (validate-cfg cfg) ;=>> cfg
+ (validate-cfg (assoc cfg :cfg/wrong 2)) ;throws=>> some?
+ )
+
Not going into too much detail here, but you can see that we define a schema
that follows our data structure. In this case, the data structure I want to spec is my config map.
Let’s have a look at a simple example of a test of our system which randomly generates a length and verifies that the result is indeed a sequence of numbers with length
elements:
(ns my-app.core-test
+ (:require [clojure.test :refer [deftest is testing]]
+ [malli.dev :as dev]
+ [malli.dev.pretty :as pretty]
+ [malli.generator :as mg]
+ [my-app.core :as sut]
+ [my-app.core.cfg :as cfg]))
+
+(dev/start! {:report (pretty/reporter)})
+
+(deftest ^:system system-test
+ (testing "The Fib sequence is returned."
+ (is (= [0 1 1 2 3 5 8 13 21 34]
+ (sut/system #:cfg{:app #:app{:name "app" :version "1.0.0"}
+ :fib #:fib{:length 10}}))))
+ (testing "No matter the length of the sequence provided, the system returns the Fib sequence."
+ (let [length (mg/generate pos-int? {:size 10})
+ cfg #:cfg{:app #:app{:name "app" :version "1.0.0"}
+ :fib #:fib{:length length}}
+ rslt (sut/system cfg)]
+ (is (cfg/validate
+ rslt
+ [:sequential {:min length :max length} :int])))))
+
The second testing
highlights both data generation (the length
) and data validation (result must be a sequence of int
with length
elements).
The dev/start!
starts malli instrumentation. It automatically detects functions which have malli specs and validate it. Let’s see what it does exactly in the next section.
Earlier, we saw tests for the core/system
functions. Here is the core namespace:
(ns my-app.core
+ (:require [my-app.core.cfg :as cfg]
+ [my-app.core.env :as env]
+ [my-app.core.fib :as fib]))
+
+(defn system
+ {:malli/schema
+ [:=> [:cat cfg/cfg-sch] [:sequential :int]]}
+ [cfg]
+ (let [length (-> cfg :cfg/fib :fib/length)]
+ (fib/fib length)))
+
+(defn -main [& _]
+ (let [cfg (cfg/validate-cfg #:cfg{:app (env/config<-env :APP)
+ :fib (env/config<-env :FIB)})]
+ (system cfg)))
+
The system
function is straightforward. It takes a config map and returns the Fib sequence.
Note the metadata of that function:
{:malli/schema
+ [:=> [:cat cfg/cfg-sch] [:sequential :int]]}
+
The arrow :=>
means it is a function schema. So in this case, we expect a config as unique argument and we expect a sequence of int as returned value.
When we instrument
our namespace, we tell malli to check the given argument and returned value and to throw an error if they do not respect the schema in the metadata. It is very convenient.
To enable the instrumentation, we call malli.dev/start!
as you can see in the core-test
namespace code snippet.
Clojure is a dynamically typed language, allowing us to write functions without being constrained by rigid type definitions. This flexibility encourages rapid development, experimentation, and iteration. Thus, it makes testing a bliss because we can easily mock function inputs or provide partial inputs.
However, if we start adding type check to all functions in all namespaces (in our case with malli metadata for instance), we introduce strict typing to our entire code base and therefore all the constraints that come with it.
Personally, I recommend adding validation for the entry point of the app only. For instance, if we develop a library, we will most likely have a top level namespace called my-app.core
or my-app.main
with the different functions our client can call. These functions are the ones we want to validate. All the internal logic, not supposed to be called by the clients, even though they can, do not need to be spec’ed as we want to maintain the flexibility I mentioned earlier.
A second example could be that we develop an app that has a -main
function that will be called to start our system. A system can be whatever our app needs to perform. It can start servers, connect to databases, perform batch jobs etc. Note that in that case the entry point of our program is the -main
function. What we want to validate is that the proper params are passed to the system that our -main
function will start. Going back to our Fib app example, our system is very simple, it just returns the Fib sequence given the length. The length is what needs to be validated in our case as it is provided externally via an env variable. That is why we saw that the system function had malli metadata. However, our internal functions have tests but no specs, to keep the dynamic-language flexibility that Clojure offers.
Finally, note the distinction between instrumentation
, that is used for development (the metadata with the function schemas) and data validation for production (call to cfg/validate-cfg
). For overhead reasons, we don't want to instrument our functions in production, it is a development tool. However, we do want our system to throw an error when wrong params are provided, hence the call to cfg/validate-cfg
.
In functional programming, and especially in Clojure, it is important to avoid side effects (mutations, external factors, etc) as much as we can. Of course, we cannot avoid mutations as they are inevitable: start a server, connect to a database, IOs, update frontend web state and much more. What we can do is isolate these side effects so the rest of the code base remains pure and can enjoy the flexibility and thus predictable behavior.
Some might argue that we should never mock data. From my humble personal experience, this is impossible for complex apps. An app I worked on consumes messages from different kafka topics, does write/read from a datomic database, makes http calls to multiple remote servers and produces messages to several kafka topics. So if I don’t mock anything, I need to have several remote http servers in a test cluster just for testing. I need to have a real datomic database with production-like data. I need all the other apps that will produce kafka messages that my consumers will process. In other words, it is not possible.
We can mock functions using with-redefs which is very convenient for testing. Using the clojure.test use-fixtures is also great to start and tear down services after the tests are done.
I mentioned above, an app using datomic and kafka for instance. In my integration tests, I want to be able to produce kafka messages and I want to interact with an actual datomic db to ensure proper behavior of my app. The common approach for this is to use embedded
versions of these services. Our test fixtures can start/delete an embedded datomic database and start/stop kafka consumers/producers as well.
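A minimal sketch of a fixture wrapping the tests with the start/stop of such a service (the start/stop logic here is just a placeholder for whatever embedded datomic/kafka helpers are used):
(ns my-app.integration-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

(def embedded-db (atom nil))

(defn with-embedded-db
  "Starts a fake embedded db before the tests and tears it down afterwards."
  [f]
  (reset! embedded-db {:status :up})
  (try
    (f)
    (finally
      (reset! embedded-db nil))))

(use-fixtures :once with-embedded-db)

(deftest db-is-up-test
  (is (= :up (:status @embedded-db))))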
What about the http calls? We can with-redefs
those to return some valid but randomly generated values. Integration tests aim at ensuring that all components of our app work together as expected and embedded versions of external services and redefinitions of vars can make the tests predictable and suitable for CI.
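As a minimal, self-contained sketch of such a redefinition (the functions are hypothetical stand-ins for a remote http call):
(ns my-app.mock-test
  (:require [clojure.test :refer [deftest is]]))

(defn fetch-exchange-rate
  "Pretend this performs an http call to a remote server."
  [currency]
  (throw (ex-info "should not be called in tests" {:currency currency})))

(defn price-in-sgd
  [usd-price]
  (* usd-price (fetch-exchange-rate :SGD)))

(deftest price-in-sgd-test
  ;; Redefine the impure function for the duration of the test.
  (with-redefs [fetch-exchange-rate (constantly 1.35)]
    (is (= 13.5 (price-in-sgd 10)))))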
I have not touched on running tests in the CI, but integration tests should be run in the CI and if all services are embedded, there should be no difficulty in setting up a pipeline.
To be sure an app performs well under heavy load, embedded services won’t work as they are limited in terms of performance, parallel processing etc. In our example above, If I want to start lots of kafka consumers and to use a big datomic transactor to cater lots of transactions, embedded datomic and embedded kafka won’t suffice. So I have to run a datomic transactor on my machine (maybe I want the DB to be pre-populated with millions or entities as well) and I will need to run kafka on my machine as well (maybe using confluent cp-all-in-one container setup). Let’s get fancy, and also run prometheus/grafana to monitor the performance of the stress tests.
Your intuition is correct, it would be a nightmare for each developer of the project to set up all these services by hand. One solution is to containerize them all: a datomic transactor can be run in docker, confluent provides a docker-compose setup to run the kafka zookeeper, broker, control center etc, and the prometheus scraper can run in a container as well as grafana. So providing docker-compose files in our repo so each developer can just run docker-compose up -d
to start all necessary services is the solution I recommend.
Note that I do not containerize my clojure app so I do not have to change anything in my workflow. I deal with load/stress tests the same way I deal with my unit tests. I just start the services in the containers and my Clojure REPL as per usual.
This setup is not the only solution to load/stress tests but it is the one I successfully implemented in my project and it really helps us being efficient.
I highlighted some common testing tools and methods that the Clojure community use and I explained how I personally incorporated these tools and methods to my projects. Tools are common to everybody, but how we use them is considered opinionated and will differ depending on the projects and team decision.
If you are starting your journey as a Clojure developer, I hope you can appreciate the quality of open-source testing libraries we have access to. Also, please remember that keeping things pure is the key to easy testing and debugging; a luxury not so common in the programming world. Inevitably, you will need to deal with side effects but isolate them as much as you can to make your code robust and your tests straight forward.
Finally, there are some tools I didn’t mention to keep things short so feel free to explore what the Clojure community has to offer. The last advice I would give is to not try to use too many tools or only the shiny new ones you might find. Keep things simple and evaluate if a library is worth being added to your deps.
]]>At Flybot Pte Ltd, we wanted to have a robot-player that can play several rounds of some of our card games (such as big-two
) at a decent level.
The main goal of this robot-player was to take over an AFK player for instance.
We are considering using it for an offline mode with different level of difficulty.
Vocabulary:
- big-two: popular Chinese card game (锄大地)
- AI or robot: refers to a robot-player in the card game
2 approaches were used:
- Monte Carlo Tree Search (MCTS)
- a domain-knowledge approach (game-plan)
The repositories are closed-source because private to Flybot Pte. Ltd. The approaches used are generic enough so they can be applied to any kind of games.
In this article, I will explain the general principle of MCTS applied to our specific case of big-two
.
Monte Carlo Tree Search (MCTS) is an important algorithm behind many major successes of recent AI applications such as AlphaGo’s striking showdown in 2016.
Essentially, MCTS uses Monte Carlo simulation to accumulate value estimates to guide towards highly rewarding trajectories in the search tree. In other words, MCTS pays more attention to nodes that are more promising, so it avoids having to brute force all possibilities which is impractical to do.
At its core, MCTS consists of repeated iterations (ideally infinite, in practice constrained by computing time and resources) of 4 steps: selection
, expansion
, simulation
and update
.
For more information, this MCTS article explains the concept very well.
MCTS algorithm works very well on deterministic games with perfect information. In other words, games in which each player perfectly knows the current state of the game and there are no chance events (e.g. draw a card from a deck, dice rolling) during the game.
However, there are a lot of games in which there is not one or both of the two components: these types of games are called stochastic (chance events) and games with imperfect information (partial observability of states).
Thus, in big-two, we don’t know the cards of the other players, so it is a game with imperfect information (more info in this paper).
So we can apply the MCTS to big-two but we will need to do 1 of the 2 at least:
Our tree representation looks like this:
{:S0 {::sut/visits 11 ::sut/score [7 3] ::sut/chldn [:S1 :S2]}
+ :S1 {::sut/visits 5 ::sut/score [7 3] ::sut/chldn [:S3 :S4]}
+ :S3 {::sut/visits 1 ::sut/score [7 3]}}
+
In the big-two case, S0
is the init-state, S1
and S2
are the children states of S0
.
S1
is the new state after a possible play is played
S2
is the new state if another possible play is played etc.
S1
is a key of the tree map so it means it has been explored before to run simulations.
S1
has been selected 5 times.
S2
has never been explored before so it does not appear as a key.
In games when only the win matters (not the score), you could just use something like ::sut/wins
.
To select the child we want to run simulations from, we compute a UCT value for each child and pick the one with the highest value.
UCT
is the UCB
(Upper Confidence Bound 1) applied to trees. It provides a way to balance exploration/exploitation. You can read more about it in this article.
In the algorithm behind AlphaGo, a UCB based policy is used. More specifically, each node has an associated UCB value and during selection we always chose the child node with the highest UCB value.
The UCB1 formula is the following:
UCB1(i) = xi + c * sqrt(log(N) / ni)
with xi the mean node value, ni the number of visits of node i, and N the number of visits of the parent node.
The equation includes the 2 following components:
The first part of the equation is the exploitation
based on the optimism in the fact of uncertainty.
The second part of the equation is the exploration
that allows the search to go through a very rarely visited branch from time to time to see if some good plays might be hidden there.
In the big-two case, the exploitation
is the total number of points divided by the number of visits of the node. For every simulation of the games, we add up the number of points the AI has made. We want the average points per game simulation so we divide by the number of times we have visited the node.
In the big-two case, the exploration
considers the number of visits of the parent node (previous state of the game) and the number of visits of the current node (current state of the game). The more we visit the parent without visiting the specific child the bigger the exploration term becomes. Thus, if we have not visited a child for a long time, since we take the log10
of N
, this term becomes dominant and the child will be visited once more.
The coefficient c
, called confidence value, allows us to change the proportion of exploration we want.
To recap, the UCB
will often return the state that led to the most points in the past simulation. However, from time to time, it will explore and return a child that did not lead to good reward in the past but that might lead to a stronger play.
The formula applied to big-two is the following:
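The original article showed the formula as an image, which is not reproduced here. As a rough sketch only, here is how such a UCB computation could look in Clojure for a node of the tree shown earlier, using a scalar total score for simplicity, the log10 mentioned above and a confidence value c (this is not the actual implementation):
(defn ucb1
  "UCB value of a child node given the number of visits of its parent.
   `score` is the total reward accumulated by the simulations of this node."
  [{:keys [score visits]} parent-visits c]
  (if (zero? visits)
    ##Inf ;; always explore unvisited children first
    (+ (/ score visits)                              ;; exploitation
       (* c (Math/sqrt (/ (Math/log10 parent-visits) ;; exploration
                          visits))))))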
This step just consists in adding the new selected child to the tree.
In the big-two case, the newly selected state is added to the tree.
For a given node (state), we run several games with everybody playing random moves and we evaluate the total score of the AI. The total amount of points taken from all the simulations is taken into account in the UCT formula explained above.
We do not consider the win because what matters in big-two, more than winning the game, is to score a lot of points (cards remaining in opponents hands) to make more money. Sometimes, it is even better to lose the game as long as the other losers have a lot of cards left in their hands. The win matters for your position in the next round however.
After all the simulations are done, we back-propagate all the rewards (sum up the scores of each simulation) to the branch nodes.
We call MCTS iteration
the 4 steps described above: expand->select->simulate->update
We run those 4 steps several times to have a tree that shows the path that has the most chance to lead to the best reward (highest score).
So, for each AI move, we run several MCTS iterations to build a good tree.
The more iterations we run, the more accurate the tree is but also the bigger the computing time.
We have 2 properties that can be changed:
- nb-rollouts: number of simulations per mcts iteration
- budget: number of mcts iterations (tree growth)
Having more than 2 players (4 in big-two for instance) makes the process more complex as we need to consider the scores of all the players. The default way of handling this case is to back-propagate all the players' scores after the different simulations. Then, each robot (position) plays to maximize its own score. The UCB value will be computed for the score of the concerned robot.
By caching the function that returns the possible children states, we don’t have to rerun that logic when we are visiting a similar node. The node could have been visited during the simulation of another player before so it saves time.
By caching the sample function, we do not simulate the same state again. Some states might have been simulated by players before during their mcts iterations. This allows us to go directly a level down the tree without simulating the state again and reusing the rewards back-propagated by a previous move.
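A minimal sketch of this caching using clojure.core/memoize (the domain function is just a placeholder):
(defn possible-children-states
  "Placeholder for the expensive domain logic returning the possible next states."
  [state]
  ;; ... generate all the legal plays from `state` ...
  [])

;; Cached version: a state already expanded during another player's search is not recomputed.
(def possible-children-states-cached
  (memoize possible-children-states))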
In Clojure, even with caching, I was not able to run a full game because it was too slow, especially at the beginning of the game which can contain hundreds of different possible moves.
For {:nb-rollouts 10 :budget 30}
(10 simulations per state and 30 iterations of mcts), the first move can take more than a minute to compute.
As a workaround, I had the idea of using MCTS only if a few cards are remaining in the player's hands so at least the branches are not that big in the tree. I had decent results in Clojure for big-two.
For {:nb-rollouts 10 :budget 30 :max-cards 16}
(16 total cards remaining), in Clojure, it takes less than 3 seconds.
Because of this problem, I worked on a big-two AI that only uses the domain knowledge to play.
The problem with MCTS is that even if we don’t brute force all the possibilities, the computing time is still too big if we want to build the tree using random moves.
Most of the possible plays are dumb. Most of the time, we won’t break a fiver just to cover a single card for instance. In case there are no cards on table, we won’t care about having a branch for all the singles if we can play fivers. There are many situations like this. There are lots of branches we don’t need to explore at all.
As a human player, we always have a game-plan
, meaning we arrange our cards in our hands with some combinations we want to play if possible and the combination we don’t want to “break".
We can use this game-plan
as an alternative to MCTS, at least for the first moves of the games.
The details of this game-plan
are confidential for obvious reasons.
Having a hybrid approach, meaning using a game-plan
for the first moves of the game when the possible plays are too numerous, and then using MCTS at the end of the game, allowed us to have a decent AI we can use.
As of the time I write this article, the implementation is being tested (as part of a bigger system) and not yet in production.
]]>I will use the flybot.sg website as an example of an app to deploy. It uses datalevin as an embedded database, which resides alongside the Clojure code inside the container.
Instead of using datomic pro and having the burden of separate containers for the app and the transactor, we decided to use juji-io/datalevin and its embedded storage on disk. Thus, we only need to deploy one container with the app.
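For illustration, a minimal sketch of opening the embedded datalevin storage from the app (the namespace is hypothetical; the path mirrors the :db-uri value passed through the SYSTEM env variable below, and datalevin.core/get-conn is the standard way to open a connection):
(ns clj.flybot.db-sketch
  (:require [datalevin.core :as d]))

;; Opens (or creates) the embedded storage on disk, inside the container.
(def conn (d/get-conn "/datalevin/prod/flybotdb"))

(comment
  (d/transact! conn [{:post/id 1 :post/title "Hello"}])
  (d/q '[:find ?title :where [_ :post/title ?title]] (d/db conn)))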
To build this container image, we can use the library atomisthq/jibbit based on GoogleContainerTools/jib (Build container images for Java applications).
It does not use docker to generate the image, so there is no need to have docker installed to generate images.
jibbit can be added as an alias
in deps.edn:
:jib
+ {:deps {io.github.atomisthq/jibbit {:git/tag "v0.1.14" :git/sha "ca4f7d3"}}
+ :ns-default jibbit.core
+ :ns-aliases {jib jibbit.core}}
+
The jib.edn
can be added in the project root with the configs to generate and push the image.
Example of jibbit config to just create a local docker image:
;; example to create a docker image to be run with docker locally
+{:main clj.flybot.core
+ :aliases [:jvm-base]
+ :user "root"
+ :group "root"
+ :base-image {:image-name "openjdk:11-slim-buster"
+ :type :registry}
+ :target-image {:image-name "flybot/image:test"
+ :type :docker}}
+
Then we can run the container:
docker run \
+--rm \
+-it \
+-p 8123:8123 \
+-v db-v2:/datalevin/dev/flybotdb \
+-e OAUTH2="secret" \
+-e ADMIN_USER="secret" \
+-e SYSTEM="{:http-port 8123, :db-uri \"datalevin/dev/flybotdb\", :oauth2-callback \"http://localhost:8123/oauth/google/callback\"}" \
+flybot/image:test
+
jibbit can also read your local AWS credentials to directly push the generated image to your ECR (Elastic Container Registry).
You need to have aws cli installed (v2 or v1) and you need an env variable $ECR_REPO
setup with the ECR repo string.
You have several possibilities to provide credentials to login to your AWS ECR.
Here is the jib.edn
for the CI:
{:main clj.flybot.core
+ :target-image {:image-name "$ECR_REPO"
+ :type :registry
+ :authorizer {:fn jibbit.aws-ecr/ecr-auth
+ :args {:type :profile
+ :profile-name "flybot"
+ :region "region"}}}}
+
I used repository secrets to handle AWS credentials on the GitHub repo:
AWS_ACCESS_KEY_ID
(must be named like that)AWS_SECRET_ACCESS_KEY
(must be named like that)ECR_REPO
This article explained quite well how to set up docker on EC2 and pull an image from ECR.
The UserData to install docker at first launch of the EC2 instance is the following:
#! /bin/sh
+# For Amazon linux 2022 (might differ in 2023 but the principle remains)
+yum update -y
+amazon-linux-extras install docker
+service docker start
+usermod -a -G docker ec2-user
+chkconfig docker on
+
To allow the EC2 instance to pull from ECR, we need to add an IAM policy and an IAM role.
Let’s first create the policy flybot-ECR-repo-access:
{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "ListImagesInRepository",
+ "Effect": "Allow",
+ "Action": [
+ "ecr:ListImages"
+ ],
+ "Resource": "arn:aws:ecr:region:acc:repository/flybot-website"
+ },
+ {
+ "Sid": "GetAuthorizationToken",
+ "Effect": "Allow",
+ "Action": [
+ "ecr:GetAuthorizationToken"
+ ],
+ "Resource": "*"
+ },
+ {
+ "Sid": "ManageRepositoryContents",
+ "Effect": "Allow",
+ "Action": [
+ "ecr:BatchCheckLayerAvailability",
+ "ecr:GetDownloadUrlForLayer",
+ "ecr:GetRepositoryPolicy",
+ "ecr:DescribeRepositories",
+ "ecr:ListImages",
+ "ecr:DescribeImages",
+ "ecr:BatchGetImage",
+ "ecr:InitiateLayerUpload",
+ "ecr:UploadLayerPart",
+ "ecr:CompleteLayerUpload",
+ "ecr:PutImage"
+ ],
+ "Resource": "arn:aws:ecr:region:acc:repository/flybot-website"
+ }
+ ]
+}
+
We then attach the policy flybot-ECR-repo-access to a role flybot-ECR-repo-access-role.
Finally, we attach the role flybot-ECR-repo-access-role to our EC2 instance.
We also need a security group to allow http(s) requests and to open port 8123 for our aleph server.
We attach this SG to the EC2 instance as well.
Then inside the EC2 instance, we can pull the image from ECR and run it:
# Login to ECR, this command will return a token
+aws ecr get-login-password \
+--region region \
+| docker login \
+--username AWS \
+--password-stdin acc.dkr.ecr.region.amazonaws.com
+
+# Pull image
+docker pull acc.dkr.ecr.region.amazonaws.com/flybot-website:test
+
+# Run image
+docker run \
+--rm \
+-d \
+-p 8123:8123 \
+-v db-volume:/datalevin/prod/flybotdb \
+-e OAUTH2="secret" \
+-e ADMIN_USER="secret" \
+-e SYSTEM="{:http-port 8123, :db-uri \"/datalevin/prod/flybotdb\", :oauth2-callback \"https://www.flybot.sg/oauth/google/callback\"}" \
+acc.dkr.ecr.region.amazonaws.com/flybot-website:test
+
Even though we have a single EC2 instance running, there are several benefits we can get from AWS load balancers.
In our case, we have an Application Load Balancer (ALB) as the target of a Network Load Balancer (NLB). Being able to add an ALB as the target of an NLB is a recent AWS feature that allows us to combine the strengths of both load balancers.
The internal ALB’s purposes include TLS termination, using certificates from AWS Certificate Manager (ACM), and path redirection via its listener rules.
ACM allows us to request certificates for www.flybot.sg and flybot.sg and attach them to the ALB rules to perform path redirection in our case. This is convenient as we do not need to install any SSL certificates or handle any redirects in the instance directly, nor change the code base.
Since the ALB has dynamic IPs, we cannot use it in our GoDaddy A record for flybot.sg. One solution is to use AWS Route 53, because AWS added the possibility of registering the ALB DNS name in an A record (which is not possible with external DNS managers). However, we already use GoDaddy as DNS host and we don’t want to depend on Route 53 for that.
Another solution is to place an internet-facing NLB in front of the ALB, because the NLB provides static IPs.
The ALB works at layer 7 (HTTP) whereas the NLB works at layer 4 (TCP).
A target group is where the traffic from a load balancer is sent. We have 3 target groups.
Since the NLB is the internet-facing entry point, we use a CNAME record for www resolving to the NLB DNS name.
For the root domain flybot.sg, we use an A record for @ resolving to the static IP of the NLB (for the AZ where the EC2 instance resides).
You can have a look at the open-source repo: skydread1/flybot.sg
]]>Your Clojure library is assumed to be already compiled to dotnet.
To know how to do this, refer to the article: Port your Clojure lib to the CLR with MAGIC
In this article, I will show you how to build the dlls, pack them into a nuget package, push the package to GitHub or GitLab, and import it in Unity.
Just use the command nos dotnet/build
at the root of the Clojure project.
The dlls are by default generated in a /build
folder.
A .csproj
file (XML) must be added at the root of the Clojure project.
You can find an example here: clr.test.check.csproj
<Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFrameworks>netstandard2.0</TargetFrameworks>
+ </PropertyGroup>
+ <PropertyGroup>
+ <NuspecFile>clr.test.check.nuspec</NuspecFile>
+ <RestoreAdditionalProjectSources>
+ https://api.nuget.org/v3/index.json
+ </RestoreAdditionalProjectSources>
+ </PropertyGroup>
+</Project>
+
There is no need to add References, as the dlls were already built by Nostrand into the /build folder.
Note the NuspecFile property, which is required to use the nuspec.
A .nuspec
file (XML) must be added at the root of the Clojure project.
The referenced files are the dlls in /build.
You can find an example here: clr.test.check.nuspec
<?xml version="1.0" encoding="utf-8"?>
+<package>
+ <metadata>
+ <id>clr.test.check</id>
+ <version>1.1.1</version>
+ <title>clr.test.check</title>
+ <authors>skydread1</authors>
+ <description>Contains the core references for the Clojure lib test.check.</description>
+ <repository type="git" url="https://github.com/skydread1/clr.test.check" />
+ <dependencies>
+ <group targetFramework="netstandard2.0"></group>
+ </dependencies>
+ </metadata>
+ <files>
+ <file src="build\*.clj.dll" target="lib\netstandard2.0" />
+ </files>
+</package>
+
The dependencies tag is required to indicate the targeted framework.
The files entry (using a wildcard to avoid adding the files one by one) is required to add the dll files that will be available to the consumer, so the target must be lib\TFM.
In our case, Unity recommends netstandard2.0, so our target is lib\netstandard2.0.
To push the package to a git host, one of the most convenient ways is to have a nuget.config (XML) locally at the root of the project.
<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+ <packageSources>
+ <clear />
+ <add key="github" value="https://nuget.pkg.github.com/skydread1/index.json" />
+ </packageSources>
+ <packageSourceCredentials>
+ <github>
+ <add key="Username" value="skydread1" />
+ <add key="ClearTextPassword" value="PAT" />
+ </github>
+ </packageSourceCredentials>
+</configuration>
+
In order to push a package to the Package Registry of your GitHub repo, you will need to create a PAT (Personal Access Token) with the write:packages, read:packages and delete:packages permissions.
Replace the Username value with your GitHub username.
Replace the ClearTextPassword value with your newly created access token.
Replace the repo URL with the path to your GitHub account page (not the repo).
Note: do not push this config to GitHub as it contains sensitive info (your PAT); it is just for local use.
<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+ <packageSources>
+ <clear />
+ <add key="gitlab" value="https://sub.domain.sg/api/v4/projects/777/packages/nuget/index.json" />
+ </packageSources>
+ <packageSourceCredentials>
+ <gitlab>
+ <add key="Username" value="deploy-token-name" />
+ <add key="ClearTextPassword" value="deploy-token-value" />
+ </gitlab>
+ </packageSourceCredentials>
+</configuration>
+
In order to push a package to the Package Registry of your GitLab project repo, you will need to create a deploy token (not an access token) with the read_package_registry and write_package_registry permissions.
Replace the Username value with your token username.
Replace the ClearTextPassword value with your newly created deploy token.
Replace the domain (for a private server) and the project number in the GitLab URL (don’t forget the index.json at the end).
Note: do not push this config to GitLab as it contains sensitive info (your deploy token); it is just for local use.
At the root of the project, dotnet.clj contains the convenience functions to be used with nasser/nostrand.
You can find an example here: dotnet.clj
We added to our Clojure library a convenience function to avoid having to run the dotnet commands manually; you can just run the following at the root of the Clojure project:
nos dotnet/nuget-push
+
This will create the nuget package (.nupkg file) in the bin/Release folder. The file name is the package name and version, such as clr.test.check.1.1.1.nupkg.
It will then push it to either GitLab or GitHub, depending on the host, using the credentials in nuget.config.
It is equivalent to the 2 dotnet commands:
dotnet pack --configuration Release
+dotnet nuget push "bin/Release/clr.test.check.1.1.1.nupkg" --source "github"
+
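For illustration, a nuget-push task callable from Nostrand might look roughly like the following JVM-Clojure sketch; the linked dotnet.clj (which runs on the CLR) is the authoritative version:
(require '[clojure.java.shell :refer [sh]])
+
+(defn nuget-push
+  "Packs the project and pushes the resulting .nupkg to the configured source."
+  []
+  (sh "dotnet" "pack" "--configuration" "Release")
+  (sh "dotnet" "nuget" "push"
+      "bin/Release/clr.test.check.1.1.1.nupkg"
+      "--source" "github"))
+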
Note: for a Clojure project, you can keep the default options for packing. In theory there is no need to build, as we already have our dlls ready in the /build folder. The dotnet build will just create a single dll with the name of your library, which you can ignore.
Using package references is the new way of doing this but it does not work with Unity.
The new way of importing the nuget packages is to use the PackageReference
tag directly in the .csproj
file such as:
<PackageReference Include="Sitecore.Kernel" Version="12.0.*" />
+
But this method only works if you are using a .csproj file, which we don’t do in Unity since it uses manifest.json.
Unity uses a json file, Packages/manifest.json, to download deps. However, it does not work for nuget packages.
There is no .csproj at the root, so we cannot use the method above, and all the underlying csproj files are generated by Unity, so we cannot change them.
The only choice we have is the old way of importing nuget packages, which is to use a packages.config and then run nuget restore to fetch the latest versions of the packages.
So we need to add 2 config files at the root of our Unity project: nuget.config (GitHub/GitLab credentials) and packages.config (package names and their version/target).
In order to fetch all the packages at once using nuget restore, we need to add the nuget.config locally with the different sources and credentials.
So, to restore our GitHub and GitLab packages from our example, we use the following nuget.config:
<?xml version="1.0" encoding="utf-8"?>
+<configuration>
+ <config>
+ <add key="repositoryPath" value="Assets/ClojureLibs" />
+ </config>
+ <packageSources>
+ <clear />
+ <add key="gitlab" value="https://sub.domain.sg/api/v4/projects/777/packages/nuget/index.json" />
+ <add key="github" value="https://nuget.pkg.github.com/skydread1/index.json" />
+ </packageSources>
+ <packageSourceCredentials>
+ <gitlab>
+ <add key="Username" value="deploy-token-name" />
+ <add key="ClearTextPassword" value="deploy-token-value" />
+ </gitlab>
+ <github>
+ <add key="Username" value="skydread1" />
+ <add key="ClearTextPassword" value="PAT" />
+ </github>
+ </packageSourceCredentials>
+</configuration>
+
The repositoryPath setting allows us to put our packages in a specific directory. In our case, we put them in Assets/ClojureLibs (it needs to be somewhere in the Assets dir).
To tell Unity which packages to import when running nuget restore, we need to provide the packages.config. Here is the config in our example:
<?xml version="1.0" encoding="utf-8"?>
+<packages>
+ <package id="Magic.Unity" version="1.0.0" targetFramework="netstandard2.0" />
+ <package id="my-private-proj" version="1.0.0" targetFramework="netstandard2.0" />
+ <package id="clr.test.check" version="1.1.1" targetFramework="netstandard2.0" />
+</packages>
+
To run Clojure in Unity, you need Magic.Unity. It is the runtime for Clojure code compiled with Magic in Unity.
Note the Magic.Unity entry in the packages.config above. Magic.Unity has its own nuget package, deployed the same way you would deploy a Clojure library, so you import it alongside the nuget packages of your compiled Clojure libs.
Once you have the GitHub/GitLab credentials ready in nuget.config and the packages with their version/target listed in packages.config, you can run the command nuget restore at the root of the Unity project.
If running nuget restore does not fetch the latest version, it is because it is using the local cache. In this case, you need to force the restore.
Most of the time, ignoring the cache fixes the issue:
nuget restore -NoCache
+
Here is the packages tree of our project for instance:
~/workspaces/unity-projects/my-proj:
+.
+├── clr.test.check-legacy.1.1.1
+│ ├── clr.test.check-legacy.1.1.1.nupkg
+│ └── lib
+│ └── netstandard2.0
+│ ├── clojure.test.check.clj.dll
+│ ├── clojure.test.check.clojure_test.assertions.clj.dll
+│ ├── clojure.test.check.clojure_test.clj.dll
+│ ├── clojure.test.check.generators.clj.dll
+│ ├── clojure.test.check.impl.clj.dll
+│ ├── clojure.test.check.random.clj.dll
+│ ├── clojure.test.check.results.clj.dll
+│ └── clojure.test.check.rose_tree.clj.dll
+├── my-private-lib.1.0.0
+│ ├── my-private-lib.1.0.0.nupkg
+│ └── lib
+│ └── netstandard2.0
+│ ├── domain.my_prate_lib.core.clj.dll
+│ └── domain.my_prate_lib.core.utils.clj.dll
+
Finally, you can add Magic.Unity (the runtime for Magic inside Unity) in the manifest.json like so:
{
+ "dependencies": {
+ ...,
+ "sr.nas.magic.unity": "https://github.com/nasser/Magic.Unity.git"
+ }
+}
+
Once you have the required config files ready, you can use Nostrand to build your dlls:
nos dotnet/build
+
Pack your dlls in a nuget package and push to a remote host:
nos dotnet/nuget-push
+
Import your packages in Unity:
nuget restore
+
Magic.Unity is the Magic runtime for Unity and is already available as a nuget package on its public repo.
The Lasagna stack library fun-map by @robertluo blurs the line between identity, state and function. As a result, it is a very convenient tool to define a system in your applications, providing an elegant way to perform associative dependency injections.
In this document, I will show you the benefits of fun-map, and especially the life-cycle-map, as a dependency injection system.
In any kind of program, we need to manage state. In Clojure, we want to keep the mutating parts of our code as isolated and minimal as possible. The different components of our application, such as db connections, queues or servers, mutate the world and sometimes need each other to do so. The talk Components Just Enough Structure by Stuart Sierra explains this dependency injection problem very well and provides a Clojure solution with the library component.
fun-map is another way of dealing with inter-dependent components. In order to understand why fun-map
is very convenient, it is interesting to look at other existing solutions first.
Let’s first have a look at an existing solution for lifecycle management of components in Clojure: the Component library, which provides a very good way to define systems.
In the Clojure world, we have stateful components (atoms, channels, etc.) and we don’t want them scattered in our code without a clear way to link them or to know in which order to start these external resources.
A component in the component library is just a record that implements a Lifecycle protocol to properly start and stop the component. As a developer, you just implement the start and stop methods of the protocol for each of your components (DB, server or even domain model).
A DB component could look like this, for instance:
(defrecord Database [host port connection]
+ component/Lifecycle
+ (start [component]
+ (let [conn (connect-to-database host port)]
+ (assoc component :connection conn)))
+ (stop [component]
+ (.close connection)
+ (assoc component :connection nil)))
+
All these components are then combined in a system map that simply binds a keyword to each component. A system is itself a component with its own start/stop implementation, responsible for starting all components in dependency order and shutting them down in reverse order.
If a component has dependencies on other components, they are associated to the system and started first. Since each component’s start returns an updated component, after all components are started their return values are assoc’ed back into the system.
Here is an example of a system with 3 components. The app component depends on the db and scheduler components, so they will be started first:
(defn system [config-options]
+ (let [{:keys [host port]} config-options]
+ (component/system-map
+ :db (new-database host port)
+ :scheduler (new-scheduler)
+ :app (component/using
+ (example-component config-options)
+ {:database :db
+ :scheduler :scheduler}))))
+
So, in the above example, db and scheduler have been injected into app. Stuart Sierra mentioned that, in contrast to the constructor and setter injections often used in OOP, we could refer to this component injection (via an immutable map) as associative injection.
This is a very convenient way to adapt a system to different situations, such as testing. You could just assoc an in-memory DB and a simplistic scheduler into a test system to run some tests:
(defn test-system
+ [...]
+ (assoc (system some-config)
+ :db (test-db)
+ :scheduler (test-scheduler)))
+
+;; then we can call (start test-system) to start all components in deps order.
+
Thus, you can isolate what you want to test and even run tests in parallel. It is more powerful than with-redefs and binding because it is not limited in time: your tests can replace a big portion of the logic quite easily instead of individual vars, which decouples the tests from the rest of the code.
Finally, we do not want to pass the whole system to every function in all namespaces. Instead, the component library allows you to pass just the component a function needs, as shown in the sketch below.
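For instance, a minimal sketch (with hypothetical names) of a function that receives only the db component rather than the whole system:
(defn find-user
+  "Takes only the Database component defined above, not the whole system."
+  [database user-id]
+  ;; run-query is a hypothetical helper for this example
+  (run-query (:connection database)
+             ["select * from users where id = ?" user-id]))
+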
However, there are some limitations to this design, the main one being that stuartsierra/component is a whole-app buy-in: your entire app needs to follow this design to get all the benefits from it.
Other libraries were created as replacements for component, such as mount and integrant.
fun-map is yet another replacement for component, but it does more than just provide state management.
The very first goal of fun-map is to blur the line between identity, state and function, but in a good way. fun-map combines the ideas of lazy-map and plumbing to allow lazy access to map values regardless of their types or of when these values are accessed.
In order to make the map’s values accessible on demand regardless of their type (delay, future, atom, etc.), the values are wrapped to encapsulate the way the underlying values are accessed and to return them as if they were just plain data in the first place.
For instance:
(def m (fun-map {:numbers (delay [3 4])}))
+
+m
+;=> {:numbers [3 4]}
+
+(apply * (:numbers m))
+;=> 12
+
+;; the delay will be evaluated just once
+
You can see that the user of the map is not impacted by the delay and only sees the dereferenced value, as if it were just a vector in the first place.
Similar to how the component library assocs dependencies in order, fun-map has a wrapper macro fw that lets a value’s function use other :keys of the map as its arguments.
Let’s have a look at an example of fun-map:
(def m (fun-map {:numbers [3 4]
+ :cnt (fw {:keys [numbers]}
+ (count numbers))
+ :average (fw {:keys [numbers cnt]}
+ (/ (reduce + 0 numbers) cnt))}))
+
In the fun-map above, you can see that the key :cnt takes as argument the value of the key :numbers. The key :average takes as arguments the values of the keys :numbers and :cnt.
Calling the :average key will first call the keys it depends on, meaning :cnt and :numbers, then compute :average and return the result:
(:average m)
+;=> 7/2
+
We recognize the same dependency injection process highlighted in the Component section.
Furthermore, fun-map provides a convenient fnk wrapper macro to directly destructure the keys we want to focus on:
(def m (fun-map {:numbers [3 4]
+ :cnt (fnk [numbers]
+ (count numbers))
+ :average (fnk [numbers cnt]
+ (/ (reduce + 0 numbers) cnt))}))
+
As explained above, we could add more diverse value types; the user of the map would not notice:
(def m (fun-map {:numbers (delay [3 4])
+ :cnt (fnk [numbers]
+ (count numbers))
+ :multiply (fnk [numbers]
+ (atom (apply * numbers)))
+ :average (fnk [numbers cnt]
+ (/ (reduce + 0 numbers) cnt))}))
+
+(:multiply m)
+;=> 12
+
+m
+;=> {:numbers [3 4] :cnt 2 :multiply 12 :average 7/2}
+
+
Wrappers take care of getting the other keys’ values (with optional settings we have not talked about so far). However, to get the life cycle described in the Component library section, we still need a way to start and stop the components in dependency order.
fun-map provides a life-cycle-map that allows us to specify, via closeable, the action to perform when a component is started/closed.
touch starts the system, meaning it injects all the dependencies in order; the first argument of closeable (dereferenced in case it is a delay, atom, etc.) is returned as the value of the key.
halt! closes the system, meaning it executes the second argument of closeable, which is a function taking no parameters. It does so in reverse dependency order.
Here is an example:
(def system
+ (life-cycle-map ;; to support the closeable feature
+ {:a (fnk []
+ (closeable
+ 100 ;; 1) returned at touch
+ #(println "a closed") ;; 4) evaluated at halt!
+ ))
+ :b (fnk [a]
+ (closeable
+ (inc a) ;; 2) returned at touch
+ #(println "b closed") ;; 3) evaluated at halt!
+ ))}))
+
+(touch system)
+;=> {:a 100, :b 101}
+
+(halt! system)
+;=> b closed
+; a closed
+; nil
+
closeable takes 2 params: the value of the key and a close function taking no arguments.
Same as for Component, you can easily dissoc/assoc/merge keys in your system for testing purposes. You just need to make sure all modifications to the system are done before calling touch.
(def test-system
+ (assoc system :a (fnk []
+ (closeable
+ 200
+ #(println "a closed v2")))))
+
+(touch test-system)
+;=> {:a 200, :b 201}
+
+(halt! test-system)
+;=> b closed
+; a closed v2
+; nil
+
fun-map also supports other features such as function call tracing, value caching and lookup. More info in the readme.
To see Fun Map in action, refer to the doc Fun-Map applied to flybot.sg.
]]>