The purpose of this project is to compare and demonstrate different technologies for client-server communication for control system components.
The easiest way to get the demo working is to check out the project from this repository and use the included gradle wrapper to start the client and server.
From within the top directory, start the server by:
./gradlew :demo-server:run
Optionally, you can adjust the server (http) port and the grpc port by calling
./gradlew :demo-server:run -D"server.port"=9090 -D"grpc.port"=5353
This will set the port to 9090 (default would be 8080), and grpc port to 5353 (default would be 5252).
To start the client (javafx gui), type:
./gradlew :demo-client:run
Optionally, you can adjust the server (http) port, server host and grpc port by calling
./gradlew :demo-client:run -D"server.port"=9090 -D"server.host"=somehost -D"grpc.port"=5353
This will set the port to 9090 (default would be 8080).
On a linux system, you might first have to make the gradle wrapper script executable:
chmod a+x ./gradlew
Important note:
The included pom files are NOT automatically generated from the gradle dependencies. Instead, they have to be kept in sync manually. To bootstrap big changes in dependencies, the 'updatePom' task can be used; it creates 'pom-generated.xml' files, whose content can then be copied by hand into the corresponding pom.xml:
./gradlew updatePom
(Originally, the plan was to generate the full poms without any manual editing. However, this turned out to be too much effort to set up properly, hence this intermediate solution.)
To run the server using maven from the command line, change into the server directory and call the mvn exec command. So, from the top directory:
cd demo-server
mvn exec:java
Running the javafx client works analogously:
cd demo-client
mvn exec:java
For both client and server, it is possible to specify the http port through a system property. This is necessary in particular if the default port 8080 is already occupied on your development machine. In that case, e.g. port 9090 could be specified like this:
mvn exec:java -D"server.port"="9090"
(The quotes (") are only required on windows machines.) The grpc port can be adjusted as well, as can the server host for the client to use (see the gradle examples above).
- No notification possible
Example Code for the RestController:
// for gets
@GetMapping("/standardDev")
public double getTuneStandardDev() {
    // whatsoever
}

// for posts
@PostMapping("/standardDev/{stdDev}")
public void setTuneStandardDev(@PathVariable("stdDev") double stdDev) {
    // whatsoever
}
Simplistic java client code (using spring 5 web client):
private final WebClient client = WebClient.create("http://" + BASE_URI);

// getting from the server
public double getStandardDev() {
    return client.get()
            .uri("/standardDev")
            .retrieve()
            .bodyToMono(Double.class)
            .block();
}

// setting to the server (through POST)
@Override
public void setStandardDev(double standardDev) {
    client.post()
            .uri("/standardDev/" + standardDev)
            .exchange()
            .block();
}
Example Client code in javascript (using jquery):
$.get("http://" + location.host + "/standardDev", msg => {
    console.log(msg);
});
Server-Sent events (specification here) seem to be very useful for many applications:
- Works nicely out of the box. The browser shows some nice updates immediately.
- Easy for variable number of endpoints
- Seems to reconnect automatically in javascript (not in java!?)
A RestController method in spring would look something like this:
@GetMapping(value = "/measuredTunes", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Tune> measuredTunes() {
    // whatever here
}
Simplistic (neglecting error handling, stream sharing etc) client code in Java:
private final WebClient client = WebClient.create("http://" + BASE_URI);

public Flux<Tune> measuredTunes() {
    return client.get()
            .uri("/measuredTunes")
            .retrieve()
            .bodyToFlux(Tune.class);
}
The client code in javascript:
var source = new EventSource("http://" + location.host + "/measuredTunes");
source.onmessage = e => {
    console.log(e.data);
};
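For reference, the wire format behind both clients above is plain text (`text/event-stream`): each event is one or more `data:` lines, and events are terminated by a blank line. A minimal sketch of a parser for this format (standard-library Python; the function name is our own) could look like:

```python
def parse_sse(raw: str):
    """Parse a text/event-stream payload into a list of event data strings.

    Each event consists of one or more 'data:' lines; events are separated
    by a blank line. Other fields (event:, id:, retry:) and comment lines
    are ignored in this simplified sketch.
    """
    events = []
    for block in raw.split("\n\n"):
        data_lines = [line[5:].lstrip() for line in block.split("\n")
                      if line.startswith("data:")]
        if data_lines:
            events.append("\n".join(data_lines))
    return events


# Example: two events as they might arrive from /measuredTunes
stream = 'data: {"tune": 0.31}\n\ndata: {"tune": 0.32}\n\n'
print(parse_sse(stream))  # ['{"tune": 0.31}', '{"tune": 0.32}']
```

Real clients (the browser's EventSource, or sseclient in python) handle reconnects, event ids and chunked delivery on top of this; the sketch only illustrates the framing.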
Open Questions:
- High data rates?
- Good for high data rates
- Endpoints to be known on startup
- Do not reconnect automatically.
- Some more information on when to use them can be found in the spring docs.
Spring server configuration: WebSocketConfiguration.java
Java Client Code
@Override
public Flux<Tune> wsMeasuredTunes() {
    StringFluxWsHandler handler = new StringFluxWsHandler();
    wsClient.doHandshake(handler, "ws://" + BASE_URI + "/ws/measuredTunes");
    // set message size limits to 1 MB ?
    return handler.flux()
            .map(v -> defaultDeserialization(v, Tune.class));
}

private class StringFluxWsHandler extends TextWebSocketHandler {

    private final ReplayProcessor<String> sink = ReplayProcessor.cacheLast();
    private final Flux<String> stream = sink.publishOn(Schedulers.elastic());

    public Flux<String> flux() {
        return this.stream;
    }

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        sink.onNext(message.getPayload());
    }
}
Client Code in JavaScript:
var wsTune = new WebSocket("ws://" + location.host + "/ws/measuredTunes");
wsTune.onmessage = (msg) => {
    console.log(msg.data);
};
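The `ReplayProcessor.cacheLast()` used in the Java handler above caches the most recent message and replays it to every new subscriber, so a client that connects late immediately sees the latest tune. A toy Python sketch of that replay-last semantics (our own simplification: synchronous, single-threaded, no backpressure) might look like:

```python
class CacheLastSink:
    """Toy analogue of Reactor's ReplayProcessor.cacheLast():
    remembers the latest value and replays it to late subscribers."""

    _EMPTY = object()  # sentinel: nothing published yet

    def __init__(self):
        self._last = self._EMPTY
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)
        if self._last is not self._EMPTY:
            callback(self._last)  # replay the cached value immediately

    def on_next(self, value):
        self._last = value
        for callback in self._subscribers:
            callback(value)


sink = CacheLastSink()
sink.on_next("tune=0.31")        # no subscribers yet; value is cached
received = []
sink.subscribe(received.append)  # late subscriber gets the cached value
sink.on_next("tune=0.32")
print(received)                  # ['tune=0.31', 'tune=0.32']
```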
Work in progress
NOTE:
The tests on this technology were paused for the moment, as getting started with gRPC web was taking too long.
Notes to get started with gRPC development: grpc-develop.md
As a first try to check the effort of interacting with other languages, python was tested amongst others. The server code plus some description can be found here. This implementation was written using flask. The relevant code looks something like this:
# Conversion stuff
def json_response(obj):
    return Response(json.dumps(obj), mimetype='application/json')

def sse_response(iterator):
    return Response(('data: {0}\n\n'.format(json.dumps(o)) for o in iterator), mimetype='text/event-stream')

def empty_response():
    return Response("{}", mimetype='application/json')
# Endpoints
@app.route("/api/measuredTune")
def measured_tune():
    return json_response(tuneDto())

@app.route("/api/measuredTunes")
def measured_tunes():
    return sse_response(tunes())

@app.route("/api/standardDev")
def standard_dev():
    return json_response(simulator.get_std_def())

@app.route("/api/standardDev/<stddev>", methods=["POST"])
def set_standard_dev(stddev):
    simulator.set_std_dev(float(stddev))
    return empty_response()

@app.route("/")
def root():
    return app.send_static_file("index.html")
Corresponding python client code (which also works with the webflux server) can be found here. The relevant part looks something like:
def api_url(path):
    return "http://localhost:8080/api" + path

def sse_stream(path):
    resp = requests.get(api_url(path), stream=True)
    return map(lambda e: json.loads(e.data), sseclient.SSEClient(resp).events())

def get(path):
    result = requests.get(api_url(path))
    return result.json()

def post(path):
    requests.post(api_url(path))
# API access
def get_tune():
    tune = get("/measuredTune")
    print("measured tune from get: ", tune)

def set_stddev(stddev):
    print("setting stddev to {}.".format(stddev))
    post("/standardDev/" + str(stddev))

def get_tunes():
    return sse_stream("/measuredTunes")
Ongoing work!
Potential library candidates:
The demo project contains some simple utilities to probe the transport capabilities of the different technologies. There are dedicated endpoints (and a panel in the client) to tune some parameters of the publications:

delayInMillis
: the delay in milliseconds between two publications of a tune item.

payloadLength
: in addition to the tune value and error, the transported tune object contains a list of double values, which can be of variable length.

Using these input parameters, the resulting publication frequency on the client is calculated and displayed.
Here is a screenshot of the javafx-gui showing this testing view:
In the following table, we assume a requested update rate. Then, as a quick check, we increase the payload size and observe when the requested rate can no longer be maintained on the client side (breakdown). The streams on the server of the demo application are configured to drop items in case of backpressure, so updates will be lost in this case.
The following table shows the approximate payload lengths at which the demanded publication frequency is slowed down by 25%.
Tech | 25 Hz demand | 10 Hz demand | 4 Hz demand |
---|---|---|---|
gRPC | ~ 400k | ~ 400k | ~ 1M |
Websockets | ~ 65k | ~ 160k | ~ 400k |
Webflux | ~ 55k | ~ 140k | ~ 350k |
This translates into (estimated) data rates as follows:
Tech | 25 Hz demand | 10 Hz demand | 4 Hz demand |
---|---|---|---|
gRPC | ~ 80 MB/s | ~ 32 MB/s | ~ 32 MB/s |
Websockets | ~ 13 MB/s | ~ 12.8 MB/s | ~ 12.8 MB/s |
Webflux | ~ 11 MB/s | ~ 11.2 MB/s | ~ 11.2 MB/s |
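These rates follow directly from the payload lengths: the extra payload is a list of doubles (8 bytes each), so the raw data rate is roughly payloadLength * 8 bytes * frequency. A quick sanity check in Python (the function name is ours):

```python
def data_rate_mb_per_s(payload_length: int, freq_hz: float) -> float:
    """Estimated raw data rate: payload_length doubles of 8 bytes each,
    published freq_hz times per second, in MB/s (1 MB = 10**6 bytes)."""
    return payload_length * 8 * freq_hz / 1e6


print(data_rate_mb_per_s(400_000, 25))  # gRPC at 25 Hz demand -> 80.0 MB/s
print(data_rate_mb_per_s(65_000, 25))   # Websockets at 25 Hz demand -> 13.0 MB/s
```

Note that this counts the raw double values only; serialization overhead (JSON text, protobuf framing) comes on top, so the numbers in the table are lower bounds on what actually crosses the wire.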
So in summary, we can expect:
Tech | data rate |
---|---|
gRPC | ~ 32 MB/s |
Websockets | ~ 13 MB/s |
Webflux | ~ 11 MB/s |
DISCLAIMER
These tests were done on a setup with both client and server running on the same machine. No tuning or optimization of any of the technologies was done; the tests represent a more or less naive implementation. Due to these restrictions, the absolute numbers might not be fully meaningful. However, as a comparison between these technologies, we consider them still valuable.
Test Setup:
- Acer Predator G9-791
- Intel Core i7-6700HQ CPU @ 2.60 GHz (4 Cores, 8 Threads)
- 64.0 GB RAM
In summary, gRPC clearly performs better for big data loads. Webflux and Websockets are very similar within the accuracy of this 'measurement'.
As a complementary measurement, we looked at small packages only (payloadLength=0) and scanned the publication frequency space.
The resulting calculations can be found here. In summary: all three technologies (in the given setup) seem to perform very similarly. The reduction in update frequency seems to be dominated mainly by a constant overhead per publication (~0.2 ms), which of course is more visible at higher update rates.
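That observation suggests a simple model: if every publication adds a fixed overhead t on top of the requested period 1/f, the achieved frequency is 1/(1/f + t). A small Python sketch (our own model, using the ~0.2 ms overhead quoted above):

```python
def achieved_freq_hz(demanded_hz: float, overhead_s: float = 0.0002) -> float:
    """Achieved update frequency if each publication adds a fixed
    per-publication overhead on top of the requested period."""
    return 1.0 / (1.0 / demanded_hz + overhead_s)


# At low rates the overhead is negligible; at high rates it dominates.
for demand in (10, 100, 1000):
    print(demand, "Hz demanded ->", round(achieved_freq_hz(demand), 1), "Hz achieved")
```

With this model, a 10 Hz demand loses well under 1%, while a 1000 Hz demand is already capped at around 833 Hz, which is consistent with the observation that the slowdown mainly shows up at higher update rates.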
Pure Rest
(+) get/set
(-) no notification
(+) standard technology
Webflux
(+) get/set
(+) notification through SSE
(+) SSE reconnects nicely in web clients (java to be sorted out)
(+) standard technologies (http+SSE), available and easy to implement in many languages
(+) fits well the device/property model
Websockets
(+) get/set/notification
(-) endpoints to be known at implementation time
gRPC
(+) in principle all features
(+?) generated code/stubs
(-) setting up code generation is brittle
(-) complex system in web (additional proxy needed). Tricky to set up.
(-) Every developer needs special setup (code generation, proxy)
(-) Not straightforward to debug (e.g. no standard protocol in the browser)
(*) Is code generation worth it? Usually we convert to internal domain objects right afterwards anyway.
- combines REST + websockets.
- particularly useful for values that are both settable and gettable
- Potentially an interesting combination could be REST + SSE.
- Integrability (how well does it fit into the landscape)
- Language agnostic?
- Ease of handling?
- Speed?
Also to be discussed with Hanno