This is a data streaming demo project built with Go, Apache Flink, Kafka, MongoDB, and JMeter.
- go-api: A RESTful API application which records every trading deal result.
- The following data structure defines the deal status of each forex transaction.
- For example, an AUDCAD trade dealt at price 0.9 with a volume of 50 units would be recorded as:

```
{BaseCurrency: AUD, QuotedCurrency: CAD, Price: 0.9, Volume: 50}
```
```go
type Deal struct {
	BaseCurrency   string  `json:"base_currency"`
	QuotedCurrency string  `json:"quoted_currency"`
	Price          float64 `json:"price"`
	Volume         float64 `json:"volume"`
}
```
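For illustration, here is a minimal Go client sketch that submits a deal to the go-api service. The `POST /deal` route is a hypothetical endpoint invented for this example; only the port 8081 comes from the setup steps below.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Deal struct {
	BaseCurrency   string  `json:"base_currency"`
	QuotedCurrency string  `json:"quoted_currency"`
	Price          float64 `json:"price"`
	Volume         float64 `json:"volume"`
}

func main() {
	// The AUDCAD example from above: dealt at price 0.9 for 50 units.
	deal := Deal{BaseCurrency: "AUD", QuotedCurrency: "CAD", Price: 0.9, Volume: 50}

	body, err := json.Marshal(deal)
	if err != nil {
		panic(err)
	}

	// "/deal" is a hypothetical route; adjust it to the actual go-api endpoint.
	resp, err := http.Post("http://localhost:8081/deal", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```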
- flink: A data streaming application which profiles the deal summary, such as the current turnover and volume of each BaseCurrency, in real time.
- The following data structure defines the total trading summary for each BaseCurrency.
```java
public class SummaryAccumulator {
    public String key;
    public double turnover;
    public double volume;
    public double count;
    public ConcurrentHashMap<String, HashMap<String, Double>> currencyMap = new ConcurrentHashMap<>();
}
```
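For readers more familiar with Go than Flink, the per-deal update can be pictured with the sketch below, reusing the Deal struct shown earlier. It assumes turnover accumulates as price × volume, which is an illustrative guess rather than the Flink job's confirmed formula.

```go
// SummaryAccumulator mirrors the Flink class above (illustrative sketch only).
type SummaryAccumulator struct {
	Key      string // BaseCurrency, e.g. "AUD"
	Turnover float64
	Volume   float64
	Count    float64
}

// Add folds one deal into the running summary of its BaseCurrency.
// Assumption: turnover is accumulated as price * volume.
func (a *SummaryAccumulator) Add(d Deal) {
	a.Turnover += d.Price * d.Volume
	a.Volume += d.Volume
	a.Count++
}
```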
- Install and start Zookeeper and the Kafka server on ports 2181 and 9092, respectively.
- Install and start MongoDB on port 27017.
- Start the RESTful API application in the project "go-api" on port 8081.
- Start the Flink data-streaming application in the project "flink".
- Run the JMeter auto-testing script under the directory "jmeter-scripts".
- Alternatively, you can execute

```
docker-compose up
```

based on the docker-compose.yml under the root path of this project to launch all of the applications mentioned above.
- There are two Dockerfiles, one in each of the projects "flink" and "go-api". These files are used to package the two applications.
- Hence, you can execute

```
docker build -t <imageName> .
docker run <imageName>
```

under the root path of the "flink" and "go-api" projects to launch these two applications individually, while keeping your MongoDB and Kafka servers running on the host network rather than in containers.
- Steps to individually launch the project "flink" with Docker. Before executing the Docker commands, packaging the jar file by executing the mvn command (e.g., mvn package) is necessary.

```
docker build -t flink .
docker run flink
```
- Steps to individually launch the project "go-api" with docker.
docker build -t go-api . docker run --rm -p 8081:8081 --add-host host.docker.internal:host-gateway -it go-api
- Steps to individually launch the project "flink" with docker. Before executing docker related commands, packaging jar file by executing
- Data persistence
  - zookeeper: `./kafka/zookeeper:/bitnami/zookeeper`
  - kafka: `./kafka/kafka:/bitnami/kafka`
  - mongo: `./mongo/mongo-volume:/data/db`
- Initialization
  - kafka: the `init-kafka` service inside docker-compose.
  - mongo: `./mongo/mongo-init.js`, used in the `mongo` service inside docker-compose.
- Docker network: Sometimes you may want to connect to services (e.g., MongoDB and Kafka) launched on the host device. You can use `--add-host host.docker.internal:host-gateway` in the docker command, or

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```

in the docker-compose file, to define the special DNS name `host.docker.internal`.
- Directory permissions: If you encounter any permission issue when launching a service that involves file or directory manipulation, you need to define `user: root` in the docker-compose file.
- If you encounter a java.lang.reflect.InaccessibleObjectException when launching the flink project, add the following VM options before starting the application again.

```
ENV JAVA_OPTS="--add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens java.base/jdk.internal.module=ALL-UNNAMED"
```
- Full and more realistic Forex data integration.
- Initialization
- Download Kafka
- Start Zookeeper

```
.\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
```

- Start Kafka

```
.\bin\windows\kafka-server-start.bat .\config\server.properties
```
- Cunsumer & Producer
- Start Consumer
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic rawData --from-beginning
- Start Producer
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic cookedData --from-beginning
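If you prefer producing messages programmatically rather than from the console, here is a minimal Go sketch using the segmentio/kafka-go client; the library choice and the JSON payload are assumptions for illustration, not necessarily what this project's producer does.

```go
package main

import (
	"context"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Writer pointed at the local broker started above.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"),
		Topic:    "rawData",
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	// Publish one deal as JSON (payload shape is illustrative).
	err := w.WriteMessages(context.Background(), kafka.Message{
		Value: []byte(`{"base_currency":"AUD","quoted_currency":"CAD","price":0.9,"volume":50}`),
	})
	if err != nil {
		panic(err)
	}
}
```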
- Topic Manipulation
- Create topic

```
.\bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic tradingResult
```

- List topics

```
.\bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092
```

- Delete topic

```
.\bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --delete --topic tradingResult
```
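Topic creation can also be done programmatically. Below is a minimal sketch with the segmentio/kafka-go client (an assumed library choice) mirroring the create command above; note that topic creation has to be issued against the cluster controller.

```go
package main

import (
	"fmt"
	"net"
	"strconv"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Dial any broker, then locate and dial the controller,
	// since topic creation must go through it.
	conn, err := kafka.Dial("tcp", "localhost:9092")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	controller, err := conn.Controller()
	if err != nil {
		panic(err)
	}
	ctrl, err := kafka.Dial("tcp", net.JoinHostPort(controller.Host, strconv.Itoa(controller.Port)))
	if err != nil {
		panic(err)
	}
	defer ctrl.Close()

	// Same settings as the console command: 1 partition, replication factor 1.
	err = ctrl.CreateTopics(kafka.TopicConfig{
		Topic:             "tradingResult",
		NumPartitions:     1,
		ReplicationFactor: 1,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("topic tradingResult created")
}
```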