Structured Streaming using Apache Spark on Binance Blockchain Stream

dhiraa/blockchain-streaming


Spark Blockchain Streaming Analytics

Choice of Tools

  • Akka
    • ReactiveKafka is used to feed the data pulled from websockets into Kafka
    • For the websocket connector, the initial plan was to use a Spark Streaming custom receiver, but Akka took its place since it integrates easily with Kafka and can be controlled outside the Spark realm
    • Though Akka is designed for distributed environments, how to make this work in cluster mode for thousands of streams is a Google search topic of its own!
  • Kafka
    • Kafka was chosen because it is the buzzword in the streaming market and has a lot of community support. Nothing else!
  • Spark Streaming
  • Spark Structured Streaming
    • Spark Structured Streaming was initially chosen for aggregations and stateful operations, but I had to fall back to Spark Streaming because of some of its limitations

Future Work

Replay Function
Q) Is it possible to provide an API that uses past data to get some insight into the streaming data?
A) Since we use Kafka as our message-handling system, one possible solution, given the choice of tools and design, is to use the offsets property while loading the source, register it as a SQL table, and run ad-hoc queries over a REST API
- https://stackoverflow.com/questions/46153105/how-to-get-kafka-offsets-for-structured-query-for-manual-and-reliable-offset-man
- https://blog.cloudera.com/blog/2017/06/offset-management-for-apache-kafka-with-apache-spark-streaming/
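The replay idea above can be sketched as follows: load a bounded slice of the Kafka topic as a batch DataFrame using explicit offsets, register it as a SQL table, and run an ad-hoc query over it. The topic name `binance` and broker address follow this README; the offset values and the `ReplayQuery` object name are illustrative assumptions, not code from the repository.

```scala
import org.apache.spark.sql.SparkSession

object ReplayQuery {
  // JSON offset specs for the Kafka source; -2 means "earliest", -1 "latest".
  // These values are placeholders: a real replay API would compute them.
  val startingOffsets = """{"binance":{"0":-2}}"""
  val endingOffsets   = """{"binance":{"0":-1}}"""

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("replay-query")
      .master("local[*]")
      .getOrCreate()

    // Batch read (spark.read, not readStream) over a fixed offset range.
    val past = spark.read
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "binance")
      .option("startingOffsets", startingOffsets)
      .option("endingOffsets", endingOffsets)
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Register the past data so a REST endpoint can answer ad-hoc SQL queries.
    past.createOrReplaceTempView("binance_history")
    spark.sql("SELECT count(*) AS events FROM binance_history").show()
  }
}
```

A REST layer would then accept a query string, run it against `binance_history`, and serialize the resulting DataFrame back to the caller.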

How to serve thousands of consumer apps downstream?

How to run?

Each of the following commands needs to be executed in a separate terminal.

Kafka

Download Kafka from the Apache Kafka downloads page

  • Start the Zookeeper server:
cd /path/to/kafka_2.11-1.1.0
  
bin/zookeeper-server-start.sh config/zookeeper.properties
  • Start the Kafka Server:
bin/kafka-server-start.sh config/server.properties
  • Start the Kafka Consumer on port 9092 with any topic of choice:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic binance --new-consumer

Akka

cd /path/to/blockchain-streaming/
sbt
runMain com.binance.kafka.BinanceProducer

Spark

  • Start Spark Streaming Analytics
    • It listens to the XVGBTC and BTCUSDT streams and calculates the XVGUSDT VWAP (Volume Weighted Average Price) and its standard deviation over a given time period
  • Results are written to /tmp/blockchain-streaming/binance/
  • Statistics of the stream are printed as Stats("XVGUSDT", numEvents, vwap, mean, sum, std)
cd /path/to/blockchain-streaming/
sbt
runMain com.binance.Streaming --batch-time-in-seconds 30
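The VWAP and standard-deviation math behind the `Stats` output above can be sketched in plain Scala. This is a simplified, dependency-free illustration: the real job applies it per Spark Streaming window, and the `VwapSketch`/`Trade` names are assumptions, not identifiers from the repository.

```scala
object VwapSketch {
  // One executed trade: a price and the volume traded at that price.
  final case class Trade(price: Double, volume: Double)

  // Volume Weighted Average Price: sum(price * volume) / sum(volume).
  def vwap(trades: Seq[Trade]): Double = {
    val totalVolume = trades.map(_.volume).sum
    require(totalVolume > 0, "need non-zero total volume")
    trades.map(t => t.price * t.volume).sum / totalVolume
  }

  // Population standard deviation of the trade prices.
  def stdDev(trades: Seq[Trade]): Double = {
    val prices = trades.map(_.price)
    val mean   = prices.sum / prices.size
    math.sqrt(prices.map(p => math.pow(p - mean, 2)).sum / prices.size)
  }
}
```

For example, trades of 1 unit at 10 and 3 units at 20 give a VWAP of (10·1 + 20·3) / 4 = 17.5, while the plain mean of the two prices is 15.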

or (to save resources on a single machine)

  • Start Spark Structured Streaming Analytics
    • Due to limitations, this does not mimic the above streaming logic
    • However, there is other interesting stuff that can be done, like MSG Streams ---> Kafka <---> Spark Structured Streaming Analytics
cd /path/to/blockchain-streaming/
sbt
runMain com.binance.StructuredStreaming --batch-time-in-seconds 30

# on a separate terminal:
cd /path/to/kafka_2.11-1.1.0
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic vwap --new-consumer
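The Kafka <---> Spark Structured Streaming round trip mentioned above can be sketched like this: read messages from the `binance` topic, transform them, and write results back to the `vwap` topic that the console consumer above subscribes to. The pass-through projection and the `KafkaRoundTrip` name are illustrative assumptions; the real job would compute VWAP before writing.

```scala
import org.apache.spark.sql.SparkSession

object KafkaRoundTrip {
  // Topics as used elsewhere in this README.
  val inputTopic  = "binance"
  val outputTopic = "vwap"

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-round-trip")
      .master("local[*]")
      .getOrCreate()

    // Continuous source: the raw Binance messages landing in Kafka.
    val trades = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", inputTopic)
      .load()

    // The Kafka sink requires a string/binary "value" column (and
    // optionally "key"); here we simply pass the payload through.
    val query = trades
      .selectExpr("CAST(value AS STRING) AS value")
      .writeStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("topic", outputTopic)
      .option("checkpointLocation", "/tmp/blockchain-streaming/checkpoint")
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location is what lets the query resume from its last committed Kafka offsets after a restart.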
