This README is just a quick-start document.
Developed by the people of A/V Software Solutions 360° for all human beings of the world.
Short answer: A service used to manage user data
It has been developed as an internal data storage service to manage all kinds of personal data and the container items that hold it. It is part of the Maverick stack of A/V Software Solutions 360°, a department of Bechtle AG & Co. KG in Bonn.
We decided to push most of the code to GitHub to let others see what we are doing.
"Most" means that some parts are still closed-source due to customer concerns.
For detailed documentation of the UserProfileService, click here.
- Manages user data including their relations
- Used entities are hierarchically structured and are therefore stored in a graph, for example:
  - Users are stored in groups
  - Organizations can contain other organizations
  - Users or groups can be assigned to functions or roles
  - Functions limit access of their containing role by an additional organization
  - and many more ...
- Uses state-of-the-art design patterns to enable the best performance in a cloud system without losing data consistency.
- Can store the incoming requests in various ways thanks to its "data projections"
- Contains entity models for roles and functions that can be used as the source of an RBAC system or OPA
The UserProfileService consists of three main components:
- UserProfileService.API - It represents the interface that can be accessed from public networks. It is a REST API and offers many endpoints to manage profile entities.
- UserProfileService.SagaWorker - It works in the background and executes commands that are triggered from the API or Sync. It validates and changes data. It listens to queues and therefore is not intended to be public. For security reasons its endpoints should be kept behind a firewall / inside an internal network only.
- UserProfileService.Sync - It enables the synchronization of data from external sources with the existing stored data. At the moment, an LDAP connector is implemented as an external source.
- Install the latest .NET 8.0 SDK
- Install Git
- Clone this repo
- Run the following commands in the solution folder:

```shell
dotnet restore
dotnet build
```
Please use the issue tracker for bug reports and feature requests.
Before running the UserProfileService, you should install the third-party components.
Please follow the appropriate documentation for each of them, which can be found online.
Be aware: Both workers (SagaWorker and Sync) should be deployed as single instances. Scalability is not supported at the moment.
The following third-party systems are required:
- ArangoDb - open-source graph and document database where user data will be stored
- RabbitMQ - multi-protocol messaging and streaming broker, used to send messages between applications
- PostgreSQL - fast relational database used as a volatile store and event store
- Redis - open-source, in-memory data structure store, used as a database, cache, and message broker
The UserProfileService uses a graph database called ArangoDb.
Example configuration section (as part of the complete appsettings file):
```json
{
  "ProfileStorage": {
    "ClusterConfiguration": {
      "DocumentCollections": {
        "*": {
          "NumberOfShards": 3,
          "ReplicationFactor": 2,
          "WriteConcern": 1
        }
      },
      "EdgeCollections": {
        "*": {
          "NumberOfShards": 3,
          "ReplicationFactor": 2,
          "WriteConcern": 1
        }
      }
    },
    "ConnectionString": "Endpoints=http://localhost:8529;UserName=myUser;Password=myPassword;database=UserProfileService",
    "MinutesBetweenChecks": 60
  }
}
```
The `ConnectionString` contains the endpoint of the ArangoDb graph database, the credentials, and the database to use.
Side note: The specified user must have manage permissions for this database. The service creates collections and therefore needs these additional rights.
The `ClusterConfiguration` contains information about sharding in a cluster environment when collections are created. It is ignored on single-node installations.
`MinutesBetweenChecks` defines how long the database initializer unit waits between checks that ensure all collections have been created. It also performs this check when the service starts.
This minimizes the number of requests sent to ArangoDb while the application is running.
The UserProfileService uses PostgreSQL as a relational database. It can be configured as follows:
The `ConnectionString` defines all parameters required to establish a database connection (see the Npgsql docs on connection string parameters).
`DatabaseSchema` defines the name of the schema to be used.
Example:
```json
{
  "Marten": {
    "ConnectionString": "Host=localhost;Port=5432;Username=myUser;Password=myPassword;Database=UserProfileService",
    "DatabaseSchema": "UserProfile"
  }
}
```
The UserProfileService can be configured to use one of two queue communication systems for communication between its workers:
- RabbitMQ
- Azure Service Bus
The important key is `Type`. This key selects the communication system. The JSON to configure the messaging looks similar to these examples:
```json
{
  "Messaging": {
    "MessageType": {
      ...
    },
    "Type": "MessageType"
  }
}
```
For the `Type` key, you can use the value `RabbitMQ` or `ServiceBus`. The keywords are case-insensitive, so it doesn't matter how you write them.
If you want to use RabbitMQ for communication, set `RabbitMQ` as the value of the `Type` key. This ensures that internal messaging uses RabbitMQ.
An example configuration section could look like this:
```json
{
  "Messaging": {
    "RabbitMQ": {
      "Host": "localhost",
      "Password": "myPassword",
      "Port": 5672,
      "User": "myUser",
      "VirtualHost": "/"
    },
    "Type": "RabbitMQ"
  }
}
```
- `Host` - The RabbitMQ server's hostname or IP address.
- `Port` - The port number on which RabbitMQ is listening.
- `VirtualHost` - The virtual host in RabbitMQ to which you want to connect.
- `User` - The username used to authenticate with RabbitMQ.
- `Password` - The password associated with the RabbitMQ user.
If you want to use Azure Service Bus for communication, set `ServiceBus` as the value of the `Type` key. This ensures that internal messaging uses Azure Service Bus.
An example configuration section could look like this:
```json
{
  "Messaging": {
    "ServiceBus": {
      "ConnectionString": "Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>"
    },
    "Type": "ServiceBus"
  }
}
```
- `Endpoint` - The URL of your Service Bus namespace. It always starts with `sb://`, followed by your Service Bus namespace name, and ends with `.servicebus.windows.net/`.
- `SharedAccessKeyName` - The name of the shared access policy you are using. This is usually a policy that grants access to the Service Bus namespace or a specific queue/topic.
- `SharedAccessKey` - The key associated with the shared access policy. This is a secret value that acts like a password for the connection.
The UserProfileService-Sync uses Redis as a temporary storage for the synchronized data.
An example configuration section could look like this:
```json
"Redis": {
  "ServiceName": "redis",
  "AbortOnConnectFail": "False",
  "AllowAdmin": "True",
  "ConnectRetry": 5,
  "ConnectTimeout": 5000,
  "EndpointUrls": [
    "localhost:6379"
  ],
  "ExpirationTime": 7200,
  "Password": "",
  "User": ""
}
```
The `EndpointUrls` define the endpoints for Redis. Please note that `EndpointUrls` is an array that can hold more than one Redis endpoint. In this example, Redis is only bound to localhost; port 6379 is the standard Redis port.
`User` and `Password` are used to authenticate with the Redis system.
The other configuration options for Redis:
- `ServiceName` - The service name used to resolve a service via Sentinel.
- `AbortOnConnectFail` - If true, no connection will be established while no servers are available.
- `AllowAdmin` - Enables a range of commands that are considered risky.
- `ConnectRetry` - The number of times to repeat connection attempts during the initial connect.
- `ConnectTimeout` - Timeout (in milliseconds) for connect operations.
- `ExpirationTime` - Time (in seconds) after which values stored in Redis expire and are deleted.
The configuration uses the default .NET Core "Logging" configuration of the Microsoft.Extensions.Logging (MEL) stack and extends it in a simple way.
We use NLog internally to write logs.
Note: If no configuration is provided, the logging framework will only log `Information` or higher (higher meaning a greater log-level value).
| LogLevel | Value | Method | Description |
|---|---|---|---|
| Trace | 0 | LogTrace | Contains the most detailed messages. These messages may contain sensitive app data. They are disabled by default and should not be enabled in production. |
| Debug | 1 | LogDebug | For debugging and development. Use with caution in production due to the high volume. |
| Information | 2 | LogInformation | Tracks the general flow of the app. May have long-term value. |
| Warning | 3 | LogWarning | For abnormal or unexpected events. Typically includes errors or conditions that don't cause the app to fail. |
| Error | 4 | LogError | For errors and exceptions that cannot be handled. These messages indicate a failure in the current operation or request, not an app-wide failure. |
| Critical | 5 | LogCritical | For failures that require immediate attention. Examples: data loss scenarios, out of disk space. |
| None | 6 | | Specifies that a logging category should not write any messages. |
Note: In the `LogLevel` sub-section you can configure category-specific log levels as well as the global log level. The most specific category configuration wins: if `SomeCategory` is set to `Error` but `SomeCategory.SubSomeCategory` is set to `Trace`, then everything in `SomeCategory` only logs `Error` or higher, while everything starting from `SomeCategory.SubSomeCategory` logs `Trace` or higher.
To specify the global log level, use the `Default` category name:
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Trace",
      "<CategoryName>": "<LogLevel as string>"
    }
  }
}
```
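The category precedence described above can be illustrated with a concrete sketch (the category names are the placeholders from the note, not real categories of this service):

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "SomeCategory": "Error",
      "SomeCategory.SubSomeCategory": "Trace"
    }
  }
}
```

With this configuration, loggers under `SomeCategory` emit only `Error` and above, while loggers under `SomeCategory.SubSomeCategory` emit everything from `Trace` upward; all other categories fall back to `Default`.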
The log levels can be changed at runtime and apply almost immediately. The application does not need to take any action to enable this other than calling `loggingBuilder.UseMaverickLogging(config)` and providing an `IConfiguration` instance.
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```
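Wiring this up in a host might look like the following minimal sketch (assuming an ASP.NET Core minimal host; `UseMaverickLogging` is provided by the UserProfileService packages, and the exact namespace to import is an assumption):

```csharp
// Program.cs sketch: enabling the Maverick logging extension.
// NOTE: UseMaverickLogging comes from the UserProfileService packages;
// the namespace to import is not stated in this README.
var builder = WebApplication.CreateBuilder(args);

// Pass the IConfiguration instance so the "Logging" section is read
// and log-level changes can be picked up at runtime.
builder.Logging.UseMaverickLogging(builder.Configuration);

var app = builder.Build();
app.Run();
```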
To include file logging, you need to add the following config keys inside the "Logging" section:
```json
{
  "Logging": {
    "EnableLogFile": true,   // (Optional) default: false
    "LogFilePath": "logs",   // (Optional) default: "logs"
    "LogFileMaxHistory": 3   // (Optional) default: 3
  }
}
```
To change the log format from JSON (default) to plain text, set the `LogFormat` key to the value `text` or `json`:
```json
{
  "Logging": {
    "LogFormat": "text"
  }
}
```
Basic configuration requires you to set at least a service name. The service name will be displayed in the trace graph and is used to correlate the logs. If you also provide an `OtlpEndpoint` URI, the OtlpExporter will be set up to send traces via gRPC in the OTLP format to the provided endpoint.
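A configuration section for this could look similar to the following sketch. Note that the section name `Tracing` and the key names `ServiceName` and `OtlpEndpoint` are assumptions based on the description above, and port 4317 is merely the conventional OTLP/gRPC port:

```json
{
  "Tracing": {
    "ServiceName": "UserProfileService.API",
    "OtlpEndpoint": "http://localhost:4317"
  }
}
```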
Additionally, you can opt in to automatically handle routing base paths.
This is useful to make your service available behind a reverse-proxy or similar setup.
To do this, you need to use this method:
- Full path: `UserProfileService.Hosting.ApplicationBuilderExtensions.UseReverseProxyPathBases`
- As extension: `appBuilder.UseReverseProxyPathBases(Configuration)`
This extension will look for these settings, and configure your application accordingly:
```json
{
  "Routing": {
    "PathBase": "",
    "DiscardResponsePathBase": ""
  }
}
```
When setting `Routing:PathBase`, your application will accept requests to all of your usual endpoints (`/api/foo`), but also to those that have the configured prefix (`/service/api/foo`).
This ensures compatibility with or without a reverse proxy, and pushes consumers of your API to use endpoints with their respective `PathBase`.
You can check `HttpContext.Request.PathBase` to see whether the current request was handled with or without the base path.
If `Routing:PathBase` is set, all redirects will be relative to the configured `PathBase`, even for requests that were handled without it.
- `/api/foo` redirecting to the `api/bar` endpoint will instead be redirected to `/service/api/bar`
- `/service/api/foo` redirecting to the `api/bar` endpoint will also redirect to `/service/api/bar`
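The redirect behaviour above corresponds to a configuration like this (the prefix `/service` mirrors the examples and is purely illustrative):

```json
{
  "Routing": {
    "PathBase": "/service"
  }
}
```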
When setting `Routing:DiscardResponsePathBase`, your application behaves as if `Routing:PathBase` was set, but instead of adding the configured prefix to all redirects, it is removed from them.
- `/api/foo` redirecting to the `api/bar` endpoint will still be redirected to `/api/bar`
- `/service/api/foo` redirecting to the `api/bar` endpoint will instead redirect to `/api/bar`
There are three components to the base-path handling, one of which is hidden:
- `PathBase` - Endpoints will also be bound with this as prefix
- `RequestPrefix` - Ensures outgoing responses use this prefix in their path
- `DiscardResponsePathBase` - Removes this path from the start of all outgoing responses

For simplicity's sake we chose to omit `RequestPrefix` on its own and always use it together with `PathBase`, which leaves only Option-0 (no settings), Option-2 (`PathBase`), and Option-4 (`DiscardResponsePathBase`).
Using these three settings in different configurations results in these use cases:
Option-0: no option used
- Endpoints bound using `Base=/`
- Redirects always use `PathBase=/`

Without any configuration, services use Option-0: they are only bound to and respond to the endpoints defined in the code. To use this configuration, each service needs its own hostname or IP to respond to.
Option-1: `PathBase=/service`
- Endpoints bound using `Base=/` and `Base=/service`
- Redirects keep the base they originally arrived with

Basically the same as Option-2, but doesn't show or encourage use of endpoints with a defined base.
Option-2: `PathBase=/service` + `RequestPrefix=/service`
- Endpoints bound using `Base=/` and `Base=/service`
- Redirects will always use `Base=/service`

Basically the same as Option-1, but pushes consumers to use endpoints with a defined base. We chose to implicitly use Option-2 and hide Option-1, to hopefully reduce bugs related to path-mapped services.
Option-3: `RequestPrefix=/service`
- Endpoints bound using `Base=/`
- Redirects will always use `Base=/service`

While writing this extension we could find no real use case for Option-3.
Option-4: `DiscardResponsePathBase=/service` (with `RequestPrefix=""` and `PathBase=""`)
- Endpoints bound using `Base=/` and `Base=/service`
- Redirects will always use `Base=/`

While we don't expect much usage of Option-4, we can see scenarios where this configuration makes sense, so we added it to this extension.
Using `DiscardResponsePathBase` with any other setting did not produce usable results, so we chose to treat its use with other settings as an error.
We use the following great third-party libraries:
- Automapper
- MassTransit
- Marten
- JsonSubtypes
- Newtonsoft.Json
- Swashbuckle
- Hellang ProblemDetails
- prometheus-net
- StackExchange.Redis
- NLog
- FluentValidation
And for testing: