BCDHub is a set of microservices written in Golang:
- `indexer`
  Loads and decodes operations related to smart contracts, keeps track of the blockchain, and handles protocol updates.
- `metrics`
  Receives new contract/operation events from the indexer and calculates various metrics that are used for ranking, linking, and labelling contracts and operations.
- `API`
  Exposes a RESTful JSON API for accessing indexed data (with on-the-fly decoding). Also provides a set of methods for authentication and managing user profiles.
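Once an instance is running you can query the API over plain HTTP. A minimal sketch, assuming the default local port `14000` (introduced below) and the `/v1/head` endpoint of the public better-call.dev API, which may differ between versions:

```
# Assumptions: API on port 14000, /v1/head endpoint as on better-call.dev.
curl -s http://localhost:14000/v1/head | jq .
```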
These microservices share access to the databases and communicate through them:
- `ElasticSearch` cluster (single node) for storing all indexed data including blocks, protocols, contracts, operations, big_map diffs, and others.
- `PostgreSQL` database for storing compilations and user data.
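A quick sanity check for both stores from the host, assuming the default local ports; `<user>` is a placeholder for your configured PostgreSQL user:

```
# ElasticSearch cluster health (built-in _cluster API):
curl -s "http://localhost:9200/_cluster/health?pretty"

# PostgreSQL connectivity check; replace <user> with your configured user:
psql -h localhost -p 5432 -U <user> -c 'SELECT 1;'
```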
BCDHub also depends on several API endpoints exposed by TzKT, although they are optional:
- List of blocks containing smart contract operations, used for boosting the indexing process (allows skipping blocks with no contract calls)
- Mempool operations
- Contract aliases and other metadata
These services obviously make sense for public networks only and are not used for sandbox or other private environments.
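For a feel of the TzKT API style (the exact endpoints BCDHub consumes are configured internally, so this public mainnet call is only an illustration):

```
# Fetch one block from the public TzKT mainnet API:
curl -s "https://api.tzkt.io/v1/blocks?limit=1" | jq .
```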
BCD uses the `X.Y.Z` version format, where:
- `X` changes every 3-5 months along with a big release that adds significant functionality
- an increased `Y` signals a possibly non-compatible update that requires reindexing (or restoring from a snapshot) or syncing with the frontend
- `Z` is bumped for every stable release candidate or hotfix
The BCD web interface developed at https://github.com/baking-bad/bcd uses the same version scheme. `X.Y.*` versions of the backend and frontend MUST BE compatible, which means that `Y` has to be increased for every change in the API responses.
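The compatibility rule can be expressed as a tiny shell check; this helper is not part of the BCDHub tooling, just a sketch of the contract:

```
# Two versions are compatible iff their X.Y prefixes match (hypothetical helper).
compatible() {
  [ "$(echo "$1" | cut -d. -f1,2)" = "$(echo "$2" | cut -d. -f1,2)" ]
}

compatible 3.5.2 3.5.7 && echo "OK: same X.Y"
compatible 3.5.2 3.6.0 || echo "Y changed: reindex or sync the frontend"
```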
Publishing a release essentially means tagging a commit:

```
make release  # forced tag update
```

For a stable release:

```
git tag X.Y.Z
git push --tags
```
Although you can install and run each part of BCDHub independently, as system services for instance, the simplest approach is to use the dockerized versions orchestrated by docker-compose.
BCDHub docker images are built on Docker Hub. Tags for stable releases have the format `X.Y`.

Docker tags are derived from Git tags using the following rule: `X.Y.*` → `X.Y`
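In shell terms the rule amounts to dropping the patch component; a sketch with an example tag:

```
# Derive the Docker tag from a Git tag by stripping the Z component:
GIT_TAG=3.5.2                                  # example stable Git tag
DOCKER_TAG=$(echo "$GIT_TAG" | cut -d. -f1,2)  # -> 3.5
echo "$DOCKER_TAG"
```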
```
make images         # latest
make stable-images  # requires STABLE_TAG variable in the .env file
```
Make sure you have installed:
- docker
- docker-compose
You will also need the following ports to be free (a quick check is sketched after this list):
- `14000`: API service
- `9200`: ElasticSearch
- `5432`: PostgreSQL
- `8000`: frontend GUI
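```
# Print any process already listening on the required ports (no output = free):
for port in 14000 9200 5432 8000; do
  lsof -nP -iTCP:"$port" -sTCP:LISTEN
done
```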
- Clone this repo

```
git clone https://github.com/baking-bad/bcdhub.git
cd bcdhub
```
- Create and fill the `.env` file (see Configuration)

```
your-text-editor .env
```
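For orientation, a minimal `.env` might look like the sketch below. Only `STABLE_TAG` is named in this guide; the commented variables are hypothetical placeholders, so consult the Configuration section for the actual list:

```
# Minimal illustrative .env; only STABLE_TAG is referenced in this guide.
STABLE_TAG=3.5              # stable image tag to deploy (X.Y)
# Hypothetical examples of further settings (see Configuration):
# POSTGRES_PASSWORD=changeme
# RPC_URL=http://localhost:8732
```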
There are several predefined configurations serving different purposes.
- Stable docker images (`X.Y`)
  - `/configs/production.yml` file is used internally
  - Requires the `STABLE_TAG` environment variable to be set
  - Deployed via `make stable`
- Latest docker images (`latest`)
  - `/configs/you.yml` file is used internally
  - Deployed via `make latest`
- Development
  - `/configs/development.yml` file is used
  - You can spawn local instances of the databases or ssh to the staging host with port forwarding
  - Run services via `make {service}`, where `{service}` is one of `api`, `indexer`, `metrics`
- Sandbox
  - `/configs/sandbox.yml` file is used
  - Start via `COMPOSE_PROJECT_NAME=bcd-box docker-compose -f docker-compose.sandbox.yml up -d --build`
  - Stop via `COMPOSE_PROJECT_NAME=bcd-box docker-compose -f docker-compose.sandbox.yml down`
It takes around 20-30 seconds to initialize all services; API endpoints might return errors until then.

NOTE: if you specified a local RPC node that is not running, BCDHub will wait for it indefinitely.
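If you script the startup, you can poll the API until it is ready. The endpoint path below is an assumption borrowed from the public better-call.dev API:

```
# Poll until the API responds (assumes port 14000 and a /v1/head endpoint):
until curl -sf http://localhost:14000/v1/head > /dev/null; do
  echo "waiting for API..."
  sleep 5
done
echo "API is up"
```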
The full indexing process takes about 2 hours; however, there are cases when you cannot afford that.

ElasticSearch has a built-in incremental snapshotting mechanism, which we use together with the AWS S3 plugin.
NOTE: currently we don't provide public snapshots.
You can set up your own S3 repo: https://medium.com/@federicopanini/elasticsearch-backup-snapshot-and-restore-on-aws-s3-f1fc32fbca7f
Alternatively, contact us to be granted access.
- Make sure you have snapshot settings in your `.env` file
- The Elastic service should be up and initialized
```
make s3-creds
```

No further actions required.
```
make s3-repo
```

Follow the instructions; you can choose an arbitrary name for your repo.
```
make s3-snapshot
```

Select an existing repository to store your snapshot.
```
make s3-policy
```

Select an existing repository and configure time intervals using cron expressions: https://www.elastic.co/guide/en/elasticsearch/reference/master/trigger-schedule.html#schedule-cron
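Under the hood such a policy maps onto ElasticSearch's snapshot lifecycle management API. A rough sketch of what gets configured; the repository name, policy name, and schedule are examples, and the actual request issued by `make s3-policy` may differ:

```
# Example SLM policy: daily snapshot at 01:30 into an existing S3 repo.
curl -X PUT "http://localhost:9200/_slm/policy/daily-snapshots" \
  -H 'Content-Type: application/json' \
  -d '{
    "schedule": "0 30 1 * * ?",
    "name": "<daily-snap-{now/d}>",
    "repository": "my_s3_repo"
  }'
```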
In some cases it's not possible to apply a snapshot on top of existing indices; you'd then need to clear the data.
WARNING: This will delete all data from your Elastic instance.

```
make es-reset
```

Wait for Elastic to be initialized.
Follow steps 1 and 2 from the snapshot instructions above.
```
make s3-restore
```

Select the latest (by date) snapshot from the list. It takes a while; don't worry about the seeming freeze.
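You can also inspect the available snapshots directly via the ElasticSearch snapshot API (the repository name is an example):

```
# List all snapshots stored in the given repository:
curl -s "http://localhost:9200/_snapshot/my_s3_repo/_all?pretty"
```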
This is mostly for the production environment; for all others a simple "start from scratch" would work.
E.g. applying hotfixes, with no breaking changes in the database schema.
Make sure you are on the master branch, then:

```
git pull
make stable-images
make stable
```

Alternatively, pull the prebuilt images instead of building them locally:

```
make stable-pull
make stable
```
E.g. a new field was added to one of the elastic models; you'd need to write a migration script to update existing data.

```
git pull
make migration
```

Select your script.
In case you need to reindex from scratch, you can set up a secondary BCDHub instance, fill the index, make a snapshot, and then apply it to the production instance. Typically you'd use staging for that.

```
make upgrade
```

Wait for Elastic to be initialized after the restart.

```
make s3-restore
```

Select the snapshot you made.

```
make stable
```