This project demonstrates the Microservice Architecture pattern using Spring Boot, Spring Cloud, Kubernetes, Istio and gRPC. It is derived from the MIT-licensed PiggyMetrics code by Alexander Lukyanchikov.
PiggyMetrics was decomposed into three core microservices, all of which are independently deployable applications organized around particular business domains.
Contains general user input logic and validation: incomes/expenses items, savings and account settings.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /accounts/{account} | Get specified account data | | |
GET | /accounts/current | Get current account data | × | × |
GET | /accounts/demo | Get demo account data (pre-filled incomes/expenses items, etc.) | × | |
PUT | /accounts/current | Save current account data | × | × |
POST | /accounts/ | Register new account | × | |
Performs calculations on major statistics parameters and captures a time series for each account. A datapoint contains values normalized to a base currency and time period. This data is used to track cash flow dynamics over the account lifetime.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /statistics/{account} | Get statistics for the specified account | | |
GET | /statistics/current | Get statistics for the current account | × | × |
GET | /statistics/demo | Get demo account statistics | × | |
PUT | /statistics/{account} | Create or update time series datapoint for the specified account | | |
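The normalization idea behind datapoints can be sketched in isolation. This is a minimal illustration only: the exchange rates and period lengths below are made-up assumptions, not values from the real Statistics service, which resolves rates externally.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Simplified sketch of datapoint normalization: every item amount is
// converted to the base currency (USD here) and to a per-day value,
// so time series from different accounts are comparable.
public class DatapointNormalizer {

    // illustrative exchange rates to the base currency (assumptions)
    static BigDecimal rateToBase(String currency) {
        switch (currency) {
            case "USD": return BigDecimal.ONE;
            case "EUR": return new BigDecimal("1.10");
            default: throw new IllegalArgumentException(currency);
        }
    }

    // days per period, to bring e.g. a monthly salary to a daily value
    static BigDecimal daysIn(String period) {
        switch (period) {
            case "DAY": return BigDecimal.ONE;
            case "MONTH": return new BigDecimal("30");
            case "YEAR": return new BigDecimal("365");
            default: throw new IllegalArgumentException(period);
        }
    }

    /** Normalizes an amount to base currency per day. */
    public static BigDecimal normalize(BigDecimal amount, String currency, String period) {
        return amount.multiply(rateToBase(currency))
                .divide(daysIn(period), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // a 3000 EUR monthly salary as a daily USD value
        System.out.println(normalize(new BigDecimal("3000"), "EUR", "MONTH")); // 110.00
    }
}
```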
Stores a user's contact information and notification settings (such as reminder and backup frequency). A scheduled worker collects the required information from other services and sends e-mail messages to subscribed customers.
Method | Path | Description | User authenticated | Available from UI |
---|---|---|---|---|
GET | /notifications/settings/current | Get current account notification settings | × | × |
PUT | /notifications/settings/current | Save current account notification settings | × | × |
- Each microservice has its own database, so there is no way to bypass the API and access persistence data directly.
- In this project, I use MongoDB as the primary database for each service. It might also make sense to have a polyglot persistence architecture (choose the type of database best suited to the service's requirements).
- Service-to-service communication is quite simplified: microservices talk using only a synchronous REST API. Common practice in real-world systems is to use a combination of interaction styles. For example, perform a synchronous GET request to retrieve data and use an asynchronous approach via a message broker for create/update operations, in order to decouple services and buffer messages. However, this brings us into the eventual consistency world.
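The trade-off between the two interaction styles can be shown with a toy in-process sketch, using a queue in place of a real message broker. All class and method names here are hypothetical illustrations, not part of this project.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy illustration of the two interaction styles: reads are synchronous
// request/response, while updates are enqueued and applied by a consumer
// later, so the producer never blocks on the downstream service.
public class InteractionStyles {

    static final BlockingQueue<String> updates = new LinkedBlockingQueue<>();
    static final List<String> store = new ArrayList<>();

    // synchronous style: the caller waits for the answer
    static int syncGetCount() {
        return store.size();
    }

    // asynchronous style: the caller only enqueues and returns immediately
    static void asyncUpdate(String item) {
        updates.offer(item);
    }

    // consumer side: drains the queue; until it runs, readers see stale
    // data -- this is the eventual consistency trade-off
    static void drain() {
        String item;
        while ((item = updates.poll()) != null) {
            store.add(item);
        }
    }

    public static void main(String[] args) {
        asyncUpdate("expense:coffee");
        System.out.println(syncGetCount()); // 0 -- update not yet applied
        drain();
        System.out.println(syncGetCount()); // 1 -- eventually consistent
    }
}
```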
There are a number of common patterns in distributed systems that can help make the core services described above work. Spring Cloud provides powerful tools that enhance the behaviour of Spring Boot applications to implement those patterns. I'll cover them briefly.
Spring Cloud Config is a horizontally scalable centralized configuration service for distributed systems. It uses a pluggable repository layer that currently supports local storage, Git, and Subversion.
In this project, I use the `native` profile, which simply loads config files from the local classpath. You can see the `shared` directory in the Config service resources. Now, when the Notification service requests its configuration, the Config service responds with `shared/notification-service.yml` and `shared/application.yml` (which is shared between all client applications).
Just build a Spring Boot application with the `spring-cloud-starter-config` dependency; autoconfiguration will do the rest.
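With Maven, for example, the dependency declaration might look like this (the version is usually managed by the Spring Cloud BOM, so it is omitted here):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
```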
Now you don't need any embedded properties in your application. Just provide `bootstrap.yml` with the application name and the Config service url:
```yaml
spring:
  application:
    name: notification-service
  cloud:
    config:
      uri: http://config:8888
      fail-fast: true
```
For example, the EmailService bean is annotated with `@RefreshScope`. That means you can change the e-mail text and subject without rebuilding and restarting the Notification service application.
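A refreshable bean might look like the following. This is a non-runnable sketch (it requires a Spring Cloud application context), and the property keys are illustrative assumptions, not the actual Notification-service ones:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Service;

// Sketch only: after a refresh request, beans in refresh scope are
// re-created, picking up the updated values from the Config server.
// The property keys below are illustrative assumptions.
@Service
@RefreshScope
public class EmailService {

    @Value("${email.subject}")
    private String subject;

    @Value("${email.text}")
    private String text;

    public String getSubject() {
        return subject;
    }
}
```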
First, change the required properties in the Config server. Then perform a refresh request to the Notification service:
```
curl -H "Authorization: Bearer #token#" -XPOST http://127.0.0.1:8000/notifications/refresh
```
Also, you could use Repository webhooks to automate this process.

- There are some limitations for dynamic refresh though: `@RefreshScope` doesn't work with `@Configuration` classes and doesn't affect `@Scheduled` methods
- The `fail-fast` property means that a Spring Boot application will fail to start immediately if it cannot connect to the Config Service
- There are significant security notes below
Authorization responsibilities are completely extracted to a separate server, which grants OAuth2 tokens for the backend resource services. The Auth server is used for user authorization as well as for secure machine-to-machine communication inside the perimeter.
In this project, I use the `Password credentials` grant type for user authorization (since it's used only by the native PiggyMetrics UI) and the `Client credentials` grant for microservice authorization.
Spring Cloud Security provides convenient annotations and autoconfiguration to make this really easy to implement on both the server and client side. You can learn more about it in the documentation and check configuration details in the Auth server code.
On the client side, everything works exactly the same as with traditional session-based authorization. You can retrieve the `Principal` object from the request and check user roles and other things with expression-based access control and the `@PreAuthorize` annotation.
Each client in PiggyMetrics (account-service, statistics-service, notification-service and browser) has a scope: `server` for backend services and `ui` for the browser. So we can also protect controllers from external access, for example:
```java
@PreAuthorize("#oauth2.hasScope('server')")
@RequestMapping(value = "accounts/{name}", method = RequestMethod.GET)
public List<DataPoint> getStatisticsByAccountName(@PathVariable String name) {
    return statisticsService.findByAccountName(name);
}
```
As you can see, there are three core services which expose an external API to the client. In real-world systems, this number can grow very quickly, as can the overall system complexity. Hundreds of services might actually be involved in rendering a single complex webpage.
In theory, a client could make requests to each of the microservices directly. But obviously there are challenges and limitations with this option, like the need to know all endpoint addresses, perform an HTTP request for each piece of information separately, and merge the results on the client side. Another problem is non-web-friendly protocols that might be used on the backend.
Usually a much better approach is to use an API Gateway. It is a single entry point into the system, used to handle requests by routing them to the appropriate backend service or by invoking multiple backend services and aggregating the results. It can also be used for authentication, insights, stress and canary testing, service migration, static response handling and active traffic management.
Netflix open-sourced such an edge service, and with Spring Cloud we can enable it with a single `@EnableZuulProxy` annotation. In this project, I use Zuul to serve static content (the ui application) and to route requests to the appropriate microservices. Here's a simple prefix-based routing configuration for the Notification service:
```yaml
zuul:
  routes:
    notification-service:
      path: /notifications/**
      serviceId: notification-service
      stripPrefix: false
```
That means all requests starting with `/notifications` will be routed to the Notification service. There is no hardcoded address, as you can see: Zuul uses the Service discovery mechanism to locate Notification service instances, as well as the Circuit Breaker and Load Balancer described below.
In this project configuration, each microservice with Hystrix on board pushes metrics to Turbine via Spring Cloud Bus (with an AMQP broker). The Monitoring project is just a small Spring Boot application with Turbine and Hystrix Dashboard.
See below how to get it up and running.
Let's see our system behavior under load: the Account service calls the Statistics service, which responds with a varying imitation delay. The response timeout threshold is set to 1 second.
Centralized logging can be very useful when attempting to identify problems in a distributed environment. The Elasticsearch, Logstash and Kibana stack lets you search and analyze your logs, utilization and network activity data with ease. A ready-to-go Docker configuration is described in my other project.
An advanced security configuration is beyond the scope of this proof-of-concept project. For a more realistic simulation of a real system, consider using https and a JCE keystore to encrypt microservice passwords and Config server properties content (see the documentation for details).
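For reference, Spring Cloud Config supports storing encrypted values with a `{cipher}` prefix, which the Config server decrypts using a configured key or keystore. A hypothetical property might look like this (both the key name and the ciphertext placeholder are illustrative, not real values):

```yaml
notifications:
  mail:
    password: '{cipher}<encrypted-value>'
```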
Deploying microservices, with their interdependence, is a much more complex process than deploying a monolithic application. It is important to have a fully automated infrastructure. With a Continuous Delivery approach we gain the following benefits:
- The ability to release software anytime
- Any build could end up being a release
- Build artifacts once - deploy as needed
Here is a simple Continuous Delivery workflow, implemented in this project:
In this configuration, Travis CI builds tagged images for each successful git push. So there is always a `latest` image for each microservice on Docker Hub, as well as older images tagged with the git commit hash. It's easy to deploy any of them and quickly roll back, if needed.
Keep in mind that you are going to start 8 Spring Boot applications, 4 MongoDB instances and RabbitMQ. Make sure you have `4 GB` of RAM available on your machine. You can always run just the vital services though: Gateway, Registry, Config, Auth Service and Account Service.
- Install Minikube, Helm and Istio.
- To enable automatic Istio sidecar injection, label the `default` namespace with `istio-injection=enabled`:

  ```
  kubectl label namespace default istio-injection=enabled
  ```

- Edit the `istio-sidecar-injector` configMap:

  ```
  kubectl edit configmap/istio-sidecar-injector -n istio-system --validate=true
  ```

  and set its `policy` value to `disabled`. This disables automatic sidecar injection unless the pod template spec contains the `sidecar.istio.io/inject` annotation with value `true`.

- Add the `bitnami` and `codecentric` Helm repositories:

  ```
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm repo add codecentric https://codecentric.github.io/helm-charts
  ```
- Clone the repository.
- Deploy PostgreSQL, Apache Kafka and Keycloak to Kubernetes:

  ```
  helm install charts/pgm-dependencies
  ```
In this mode, Kubernetes pods will be created using the latest images from Docker Hub. Just run

```
helm install charts/piggymetrics
```
Make sure all microservices are running. The command `kubectl get pods` should return something like:
and then go to `http://localhost`.
If you'd like to build the images yourself (with some changes in the code, for example), first install Skaffold.
To build and deploy all microservices to Kubernetes, run `skaffold dev`.