Commit
Merge branch 'main' into remove-la-obr-except-54089-8
saquino0827 authored Dec 5, 2024
2 parents e177296 + 5e22325 commit 38b82ec
Showing 10 changed files with 327 additions and 6 deletions.
2 changes: 1 addition & 1 deletion adr/001-architecture-decision-records.md
@@ -58,4 +58,4 @@ The outcomes of the decision, both positive and negative. This section should ex
- #13
- #1247

### Please see [adr-guidelines.md](adr/adr-guidelines.md) before completing.
### Please see [adr-guidelines.md](adr-guidelines.md) before completing.
53 changes: 52 additions & 1 deletion adr/012-auth.md
@@ -4,7 +4,7 @@ Date: 2023-06-19

## Decision

We will use an AuthN/AuthZ concept similar to ReportStream's "two legged" auth.
We will use an AuthN/AuthZ concept similar to ReportStream's "two legged" auth. This model uses JSON Web Tokens (JWT) for secure client authentication and authorization across API interactions.

## Status

@@ -18,13 +18,64 @@ We chose this because it is exactly what ReportStream does and is accepted pract

This form of AuthN/AuthZ goes through multiple steps.
1. Register the client in TI. The client sends us a public key and we associate it with that client. This occurs before any API communication.

2. When the client wants to send us information through an API call, they will first call our login endpoint with a payload containing their name and a JWT which is signed by their private key.

3. We validate that the JWT is valid given the public key they sent to us earlier.

4. We generate a short-lived JWT with our own private key and send it back to the client. This completes the call to the login endpoint.

5. The client then calls our authenticated endpoints with the JWT we sent previously and whatever payload is required by the specific endpoint.

6. We validate that the JWT is valid given our public (or private) key.

7. We continue with the business of that specific endpoint.
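
The sign-and-verify core of steps 2–4 and 6 can be sketched with the JDK's own crypto primitives (a hedged illustration only; the real flow exchanges signed JWTs rather than raw byte payloads, and the class and payload names here are hypothetical):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class TwoLeggedAuthSketch {

    // The client signs an assertion with its private key (step 2).
    static byte[] sign(byte[] payload, PrivateKey key) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(key);
        signer.update(payload);
        return signer.sign();
    }

    // TI verifies the assertion against the public key registered in step 1 (step 3).
    static boolean verify(byte[] payload, byte[] sig, PublicKey key) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(key);
        verifier.update(payload);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        // Step 1 (out of band): the client generates a key pair and registers
        // the public half with TI before any API communication.
        KeyPair clientKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        byte[] assertion = "client-name:login-request".getBytes();
        byte[] signature = sign(assertion, clientKeys.getPrivate());

        System.out.println("assertion verified: " + verify(assertion, signature, clientKeys.getPublic()));
    }
}
```

The same sign/verify relationship runs in the other direction for steps 4–6, with TI's key pair instead of the client's.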


## Impact

### Positive

- **Enhanced Security:** AuthN/AuthZ ensures that only authorized users can access specific resources, reducing the risk of unauthorized access.


- **Defined Access Control:** Allows for fine-grained control over which users or roles can access certain endpoints or resources, enabling a more flexible and secure application.


- **Auditing and Compliance:** Provides mechanisms for tracking user access and actions within the API, which helps meet regulatory requirements (e.g., GDPR, HIPAA).


- **Scalability:** With proper token management (e.g., OAuth2, JWT), it is easier to scale applications while maintaining secure authentication and authorization mechanisms.


### Negative

- **Complexity:** Adding AuthN/AuthZ layers increases the complexity of the system, which can lead to higher development and maintenance costs.


- **Performance Overhead:** Authentication and authorization checks can introduce latency, especially in distributed systems where tokens need to be validated at every request.


- **Misconfiguration:** Incorrect configuration of scopes, permissions, or roles could inadvertently allow unauthorized access to sensitive data.


### Risks

- **Vendor Lock-in:** Using third-party AuthN/AuthZ services (e.g., Auth0, AWS Cognito) may lead to vendor lock-in, making it harder to switch providers or implement custom solutions.


- **Security Vulnerabilities:** Weak implementation can introduce vulnerabilities like token theft, replay attacks, or misconfigured access policies that expose sensitive resources.


- **Token Expiration and Revocation:** Improper handling of token expiration and revocation could result in unauthorized access or frustrated users if tokens expire too quickly.


- **Regulatory Compliance Risk:** Failing to meet security and privacy regulations (e.g., encrypting tokens, securely storing user credentials) could lead to legal penalties.


- **System Downtime:** If the authentication server fails or is slow to respond, it could bring down the entire API, making the application unavailable.


### Related Issues

- #148
45 changes: 43 additions & 2 deletions adr/013-wrapped-domain-objects.md
@@ -12,15 +12,56 @@ Accepted.

## Context

Per [ADR 5](./005-oesa.md), one of the principles to follow is that less important code should depend upon more important code. Third party dependencies are less important than our business logic, so how do we isolate the HAPI library's resource classes from our business logic?
Per [ADR 5](./005-oea.md), one of the principles to follow is that less important code should depend upon more important code. Third party dependencies are less important than our business logic, so how do we isolate the HAPI library's resource classes from our business logic?

We typically isolate libraries by creating an Interface at the library boundary. To take advantage of Java's type safety we define the interface using Generic types to ensure that the correct classes are used. On a wrapper Interface we always include a method to retrieve the underlying object. Extra methods are defined based on business needs. The implementation of this interface is customized to use the underlying third party library.

Our business logic can then use the wrapper Interface without needing to know the actual underlying implementation, and that implementation can be changed without changing the Interface that the business logic relies on.

You can see an example of this in [Order.java](../etor/src/main/java/gov/hhs/cdc/trustedintermediary/etor/orders/Order.java) (the interface) and [HapiOrder.java](../etor/src/main/java/gov/hhs/cdc/trustedintermediary/external/hapi/HapiOrder.java) (the implementation).
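
The shape of that pattern can be sketched as follows (a hedged illustration: the interface and class names here are stand-ins, not the project's actual `Order`/`HapiOrder` code):

```java
// Generic wrapper interface at the library boundary. The type parameter
// keeps Java's type safety; the getter always exposes the wrapped object.
interface Order<T> {
    T getUnderlyingOrder();
    String getPatientId(); // extra methods are driven by business needs
}

// Stand-in for a third-party resource class we do not control (e.g. HAPI's).
class ThirdPartyOrderResource {
    final String patientId;
    ThirdPartyOrderResource(String patientId) { this.patientId = patientId; }
}

// The implementation binds the generic interface to the library class;
// business logic depends only on Order<T>, never on this class directly.
class WrappedOrder implements Order<ThirdPartyOrderResource> {
    private final ThirdPartyOrderResource inner;
    WrappedOrder(ThirdPartyOrderResource inner) { this.inner = inner; }
    public ThirdPartyOrderResource getUnderlyingOrder() { return inner; }
    public String getPatientId() { return inner.patientId; }
}
```

Swapping FHIR libraries then means writing one new implementation of the interface, leaving the business logic untouched.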


## Impact

### Positive

- **Decoupling of Business Logic and Third-Party Libraries:** Decouples our business logic from the specific implementation details of the third-party library, enhancing maintainability and readability.


- **Flexibility and Extensibility:** The use of interfaces provides flexibility to switch to a different FHIR library in the future without requiring significant changes to the business logic. New functionalities can also be added easily by extending the wrapper interface.


- **Enhanced Testability:** The wrapper interface makes it easier to mock (or stub) dependencies during testing, allowing for more effective unit tests without relying on the actual HAPI FHIR implementation.


- **Consistent API:** Wrapping the resource classes can help enforce a consistent API across different types of resources.

### Negative

- **Performance Overhead:** Wrapping resource classes could introduce slight performance overhead due to additional method calls. However, this is generally minimal and acceptable for the benefits gained.


- **Maintenance of Wrapper Classes:** As the HAPI FHIR library or business requirements evolve, the wrapper classes will need to be maintained to ensure they remain compatible with the underlying library.

### Risks

- **Increased Complexity:** Having an additional layer (the wrapper) may add complexity to the codebase. Developers must understand both the wrapper and the underlying library, which can increase the learning curve.


- **Dependency on Wrapper Design:** If the wrapper interface is not designed thoughtfully, it may limit flexibility or create tight coupling with specific implementations, countering the goal of decoupling.


- **Integration Challenges:** If we switch to a different FHIR library, there may be unforeseen integration challenges that arise from differences in implementation details or API design.


- **Increased Development Time:** Initially creating wrapper classes may require additional development time and effort, especially if the resource classes are extensive or complex.


### Related Issues

- #79
- [ADR 5 - Option-enabling Software Architecture (OeSA)](./005-oesa.md)
- [ADR 005 - Option-enabling Architecture (OeA)](./005-oea.md)

### Additional References

- [Flexion Case Study](https://flexion.us/ustc-dawson-case-study/)
29 changes: 29 additions & 0 deletions adr/014-keys.md
@@ -28,6 +28,35 @@ Accepted.

This naming convention applies to all locations where our keys are stored. Previously, we didn't have a consistent naming convention across all our locations which caused confusion on which key was to be used in which context and environment.


## Impact


### Positive

- **Clarity and Consistency:** Standardized naming ensures keys are easily identifiable and reduces misconfigurations.


- **Improved Operations:** Teams will spend less time resolving key-related issues.


- **Scalability:** As more organizations integrate, the naming convention will simplify management and avoid duplication.

### Negative

- **Migration Effort:** Renaming existing keys and updating references in systems may require a one-time effort.


### Risks

- **Human Error in Implementation:** Incorrectly applying the naming convention during key creation or migration could lead to confusion or outages.


- **Lack of Enforcement:** Without clear processes or automation, teams might unintentionally deviate from the convention.


- **Backward Compatibility:** Older systems or scripts may fail if they rely on the previous key names.

### Related Issues

- #584
31 changes: 31 additions & 0 deletions adr/015-project-structure.md
@@ -52,6 +52,37 @@ testing individual classes and methods.
This subproject must not depend on any of the other subprojects. It is kept isolated on purpose so that the tests can't
be "poisoned" by the implementation.
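
A multi-subproject layout like this is typically declared in the Gradle settings file; as a sketch (only `app` and `shared` are named in this ADR — the third entry is illustrative of a plugin subproject, and the root project name is assumed):

```groovy
// Hypothetical settings.gradle sketch of the modular layout described above.
rootProject.name = 'trusted-intermediary'
include 'app', 'shared', 'etor'
```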


## Impact

### Positive

- **Scalability:** New plugins or features can be added without modifying the core subprojects (app and shared).


- **Maintainability:** Clear separation of responsibilities reduces complexity.


- **Testability:** Isolated testing at both subproject and system levels ensures robust verification.

### Negative

- **Initial Overhead:** Setting up the modular structure requires more effort compared to a monolithic approach.


- **Learning Curve:** New developers may need time to understand the modular setup and plugin architecture.


### Risks

- **Inter-Subproject Dependencies:** Improper management could lead to accidental coupling between subprojects, reducing modularity.


- **Under-utilization of Shared Components:** If shared becomes a dumping ground for unrelated utilities, it could introduce unnecessary complexity.


- **Plugin Overhead:** Excessive plugin fragmentation may complicate management and integration.

### Related Issues

_None_.
37 changes: 37 additions & 0 deletions adr/016-database.md
@@ -15,6 +15,43 @@ Accepted
Postgres is an open-source, relational, SQL database. It provides a proven and robust feature set in the base implementation as well as add ons if specific features are required. Postgres is also well-supported by the community.
[Reference](https://www.postgresql.org/docs/)

## Impact


### Positive


- **Reliable Data Storage:** Ensures that project data is safely persisted across application restarts.


- **Rich Feature Set:** Provides advanced capabilities (e.g., ACID compliance, JSONB, and indexing options) to handle diverse requirements.


- **Community and Ecosystem:** Extensive libraries, tools, and community support minimize development effort and troubleshooting time.


### Negative


- **Backup and Restore Challenges:** Large databases can make backups and restores resource-intensive and time-consuming, impacting operational readiness during failures.


- **Cost of High Availability:** Achieving high availability (e.g., via replication, clustering) may require additional infrastructure, tools, and management, leading to increased operational costs.


- **Security Management Overhead:** Properly securing a Postgres instance, including managing user permissions, network access, and encryption, requires dedicated effort. Misconfigurations could lead to vulnerabilities.


### Risks


- **Data Integrity Risks:** Poorly managed schema migrations or incorrect configurations could lead to data loss or corruption.


- **Performance Bottlenecks:** Inefficient queries or lack of proper indexing might result in slow response times as data grows.


- **Operational Downtime:** Mismanagement of updates, backups, or scaling operations could lead to downtime or data unavailability.


### Related Issues

- #672
49 changes: 49 additions & 0 deletions adr/017-connection-pooling.md
@@ -13,6 +13,55 @@ Accepted
## Context
Connection pooling is vital to the project for many reasons, including resource management and conservation, performance improvement, latency reduction, scalability, and, most importantly here, connection management.
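
The core idea can be sketched as a toy pool (an illustration of the mechanism only — HikariCP adds connection validation, timeouts, leak detection, and metrics on top of this, and the class here is hypothetical):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy illustration of what a connection pool does: hand out a fixed set of
// reusable "connections" instead of opening a new one per request.
class ToyPool<C> {
    private final BlockingQueue<C> idle;

    ToyPool(List<C> connections) {
        idle = new ArrayBlockingQueue<>(connections.size(), true, connections);
    }

    // Borrow an idle connection; blocks when the pool is exhausted rather
    // than opening a new (expensive) connection.
    C borrow() throws InterruptedException {
        return idle.take();
    }

    // Return the connection for reuse instead of closing it.
    void release(C connection) {
        idle.offer(connection);
    }
}
```

This is why pool sizing matters: too small a pool blocks callers under load, too large a pool overloads the database server.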

## Impact

### Positive

- **High Performance:** HikariCP outperforms other connection pool libraries in terms of throughput and latency, enhancing application responsiveness.


- **Stability and Reliability:** Active maintenance ensures stability, addressing potential bugs and vulnerabilities.


- **Ease of Use:** Comprehensive documentation simplifies integration and configuration for developers.


- **Resource Efficiency:** Limits the number of open connections, reducing load on the database server.


- **Scalability:** Handles large-scale applications effectively, supporting high concurrency with low overhead.


- **Enhanced Diagnostics:** Built-in monitoring and logging features facilitate troubleshooting and performance tuning.

### Negative

- **Dependency Overhead:** Adds an external library dependency that must be maintained and updated over time.


- **Configuration Complexity:** Suboptimal configuration (e.g., pool size, connection timeout) can degrade performance or cause issues under load.


- **Learning Curve:** Developers unfamiliar with connection pooling or HikariCP may require additional time for onboarding.


- **Troubleshooting Complexity**: Adds another layer to the stack, potentially complicating debugging of database-related issues.

### Risks

- **Connection Leaks:** Improperly handled connections could lead to resource exhaustion, affecting application stability.


- **Incorrect Configuration:** Poorly tuned settings may result in underutilized or overburdened pools.


- **Driver Compatibility:** Potential issues with certain database drivers if not thoroughly tested.


- **Operational Challenges:** Monitoring and maintaining the pool in production environments may require additional effort.


- **Dependency Lock-in:** Switching to another connection pooling library in the future may require significant refactoring.

### Related Issues

44 changes: 44 additions & 0 deletions adr/018-sql-migrations.md
@@ -13,6 +13,50 @@ Accepted
## Context
As part of having CI/CD pipelines we need tooling inside of the project to allow us to deploy and rollback changes in an automated way. Using Liquibase enables us to store SQL migration scripts inside of the project and update them as needed. We can then set up the project deployment pipelines to automatically pick up the new changes and make the updates to the database. Liquibase will also allow for non-manual rollbacks should the need for them arise. As part of this implementation we will be using the Liquibase generated Github Actions in our pipelines to achieve this automation.
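
A migration script of the kind described above might look like this (a hedged, minimal changelog sketch — the changeset id, author, table, and column names are illustrative, not the project's actual schema):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">

  <changeSet id="create-metadata-table" author="ti-team">
    <createTable tableName="metadata">
      <column name="received_message_id" type="uuid">
        <constraints primaryKey="true" nullable="false"/>
      </column>
    </createTable>
    <!-- An explicit rollback block lets the pipeline undo the change
         automatically, which is the non-manual rollback described above. -->
    <rollback>
      <dropTable tableName="metadata"/>
    </rollback>
  </changeSet>
</databaseChangeLog>
```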

## Impact

### Positive

- **Rollback Functionality:** Liquibase’s ability to perform rollbacks in the open-source version ensures we can quickly recover from migration issues without additional cost.


- **Improved CI/CD Integration:** Automating migrations reduces the need for manual intervention and decreases deployment errors.


- **Version Control of Schema:** Keeping migration scripts within the project ensures that database changes are tracked alongside application changes, improving traceability.


- **Extensive Documentation:** Liquibase’s comprehensive documentation simplifies onboarding and troubleshooting.


- **Flexibility:** Liquibase supports a wide range of databases and migration formats, making it adaptable for future needs.


### Negative

- **Learning Curve:** Teams unfamiliar with Liquibase may require time to learn its configuration and scripting syntax.


- **Setup Complexity:** Initial integration with CI/CD pipelines may require extra effort to align with existing workflows.


- **Dependency Overhead:** Adds a dependency to the project, which must be maintained and updated over time.

### Risks

- **Migration Errors:** Incorrectly defined migration scripts could cause deployment failures or data integrity issues.


- **Rollback Limitations:** Not all changes (e.g., destructive data operations) can be automatically rolled back, requiring careful planning for such scenarios.


- **Version Drift:** If environments are not consistently updated with migrations, discrepancies could arise between development, staging, and production.


- **GitHub Action Reliability:** Relying on prebuilt GitHub Actions introduces a dependency on external tools, which may have compatibility or update issues over time.


- **Changelog Structure and Dependencies:** A poorly structured changelog can create challenges in managing references between tables. For example, if the `message_link` table references the `received_message_id` column in the `metadata` table, structural changes in `metadata` could require significant changes to the migrations, increasing complexity. This risk can make future schema modifications or rollbacks harder to implement without breaking dependencies ([Liquibase project design guidance](https://docs.liquibase.com/start/design-liquibase-project.html)).

### Related Issues
