Merged
2 changes: 1 addition & 1 deletion modules/ROOT/pages/connect-clients-to-proxy.adoc
@@ -179,7 +179,7 @@ The following sample client applications demonstrate how to use the Java driver

See your driver's documentation for code samples that are specific to your chosen driver, including cluster connection examples and statement execution examples.

You can use the provided sample client applications, in addition to your own client applications, to validate that your {product-proxy} deployment is orchestrating read and write requests as expected between the origin cluster, target cluster, and your client applications.
You can use the provided sample client applications, in addition to your own client applications, to validate that your {product-proxy} deployment orchestrates read and write requests as expected between the origin cluster, target cluster, and client applications.

{product-demo}::
https://github.com/alicel/zdm-demo-client/[{product-demo}] is a minimal Java web application which provides a simple, stripped-down example of an application built to work with {product-proxy}.
18 changes: 13 additions & 5 deletions modules/ROOT/pages/deploy-proxy-monitoring.adoc
@@ -27,9 +27,9 @@ ubuntu@52772568517c:~$

. List (`ls`) the contents of the Ansible Control Host Docker container, and then find the `zdm-proxy-automation` directory.

. Change (`cd`) to the `zdm-proxy-automation/ansible` directory.
. Change (`cd`) to the `zdm-proxy-automation/ansible/vars` directory.

. List the contents of the `ansible` directory, and then find the following YAML configuration files:
. List the contents of the `vars` directory, and then find the following YAML configuration files:
+
* `zdm_proxy_container_config.yml`: Internal configuration for the proxy container itself.
* `zdm_proxy_cluster_config.yml`: Configuration properties to connect {product-proxy} to the origin and target clusters.
@@ -228,7 +228,13 @@ For more information, see xref:ROOT:manage-proxy-instances.adoc[].
Typically, you don't need to change the advanced configuration variables.
Modify the variables in `zdm_proxy_advanced_config.yml` only if you have a specific use case that requires it.

If the following advanced configuration variables need to be changed, only do so _before_ deploying {product-proxy}:
[IMPORTANT]
====
The following advanced configuration variables are immutable after deployment.
{company} recommends that you set them _before_ deploying {product-proxy}.
Changing them later requires you to recreate your entire {product-proxy} deployment.
For more information, see xref:ROOT:manage-proxy-instances.adoc#change-immutable-configuration-variables[Change immutable configuration variables].
====

Multi-datacenter clusters::
For xref:ROOT:deployment-infrastructure.adoc#multiple-datacenter-clusters[multi-datacenter origin clusters], specify the name of the datacenter that {product-proxy} should consider local.
@@ -241,8 +247,8 @@ For information about downloading a region-specific {scb-short}, see xref:astra-

[#ports]
Ports::
Each {product-proxy} instance listens on port 9042 by default, like a regular {cass-short} cluster.
This can be overridden by setting `zdm_proxy_listen_port` to a different value.
Each {product-proxy} instance listens on port 9042 by default, like a default {cass-short} cluster.
You can override this by setting `zdm_proxy_listen_port` to your preferred port.
This is useful if the origin nodes listen on a port other than 9042 and you want {product-proxy} to listen on that same port so that you don't have to change the port in your client application configuration.
+
{product-proxy} exposes metrics on port 14001 by default.
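+
As a minimal illustrative sketch, you can set the listen port in the relevant Ansible vars file (the port value `9142` is an assumption for an origin cluster that doesn't use 9042; the file that holds this variable may differ in your automation version):
+
[source,yaml]
----
# Illustrative only: make the proxy listen on the same port as the origin nodes
zdm_proxy_listen_port: 9142
----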
@@ -467,6 +473,8 @@ If you want to enable TLS after the initial deployment, you must rerun the deplo
After modifying all necessary configuration variables, you are ready to deploy your {product-proxy} instances.

. From your shell connected to the Control Host, make sure you are in the `ansible` directory at `/home/ubuntu/zdm-proxy-automation/ansible`.
+
If you are in the `vars` directory, then you must go up one level to the `ansible` directory.

. Run the deployment playbook:
+
98 changes: 94 additions & 4 deletions modules/ROOT/pages/feasibility-checklists.adoc
@@ -8,13 +8,98 @@ You might need to adjust your data model or application logic to ensure compatib

If you cannot meet these requirements, particularly the cluster and schema compatibility requirements, see xref:ROOT:components.adoc[] for alternative migration tools and strategies.

[#supported-cassandra-native-protocol-versions]
== Supported {cass-short} Native Protocol versions

{product-proxy} supports protocol versions `v3`, `v4`, `DSE_V1`, and `DSE_V2`.
include::ROOT:partial$cassandra-protocol-versions.adoc[]

{product-proxy} technically doesn't support `v5`.
If `v5` is requested, the proxy handles protocol negotiation so that the client application properly downgrades the protocol version to `v4`.
This means that you can use {product-proxy} with any client application that uses a driver version supporting protocol version `v5`, as long as the application doesn't use `v5`-specific functionality.
When a specific protocol version is requested, {product-proxy} handles protocol negotiation to ensure the requested version is supported by both clusters.
For example, to use protocol `V5` with {product-proxy}, both the origin and target clusters must support `V5`, such as {hcd-short} or open source {cass-reg} 4.0 or later.
Otherwise, a lower protocol version must be used.

If the requested version isn't mutually supported, then {product-proxy} can force the client application to downgrade to a mutually supported protocol version.
If automatic forced downgrade isn't possible, then the connection fails, and you must modify your client application to request a different protocol version.

.Determine your client application's supported and negotiated protocol versions
[%collapsible]
====
Outside of a migration scenario (without {product-proxy}), the supported protocol versions depend on your origin cluster's version and client application's driver version.

Generally, when connecting to a cluster, the driver requests the highest protocol version that it supports.
If the cluster supports that version, then the connection uses that version.
If the cluster doesn't support that version, then the driver progressively requests lower versions until it finds a mutually supported version.

For example, if the cluster and driver both support `V5`, then your client application uses `V5` automatically unless you explicitly disable `V5` in your driver configuration.

If you upgrade your cluster, driver, or both to a version with a higher mutually supported protocol version, then the driver automatically starts using the higher version unless you explicitly disable it in your driver configuration.

When you introduce {product-proxy}, the target cluster is integrated into the protocol negotiation process to ensure that the negotiated protocol version is supported by the origin cluster, target cluster, and driver.
====
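The negotiation described above can be sketched as a simplified model. This is illustrative only, not {product-proxy} code: the real negotiation also covers `DSE_V1`/`DSE_V2` and driver-specific details, and the function and variable names here are assumptions for the sketch.

```python
# Illustrative model: the driver and every cluster in the path must agree
# on a protocol version, and the highest common version wins.
PROTOCOL_ORDER = ["V3", "V4", "V5"]  # ascending

def negotiate(driver_versions, *cluster_versions):
    """Return the highest protocol version supported by the driver and
    every cluster involved, or None if there is no common version."""
    candidates = set(driver_versions)
    for supported in cluster_versions:
        candidates &= set(supported)
    for version in reversed(PROTOCOL_ORDER):
        if version in candidates:
            return version
    return None

# Without the proxy: the driver negotiates with the origin cluster only.
print(negotiate(["V3", "V4", "V5"], ["V3", "V4"]))                # V4

# With the proxy: the target cluster joins the negotiation, so the
# result must be supported by the origin AND the target.
print(negotiate(["V3", "V4", "V5"], ["V3", "V4"], ["V4", "V5"]))  # V4
```

A `None` result corresponds to the failed-connection case, where you must modify your client application to request a different protocol version.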

=== Considerations and requirements for `V5`

Required {product-proxy} version::
Official support for `V5` requires {product-proxy} version 2.4.0 or later.

Use cases requiring `V5`::
You need `V5` only if your client application uses `V5`-specific functionality.

Potential performance impact between `V5` and earlier versions::
Protocol `V5` has improved integrity checks compared to earlier versions.
This can cause slight performance degradation when your client application begins to use `V5` after using an earlier version.
+
{company} performance tests showed potential throughput reductions ranging from 0 to 15 percent.
This performance impact can occur with and without {product-proxy}.
+
[TIP]
====
If your client application already uses `V5`, it is likely that you already adjusted to any potential performance impact, and the protocol version will have little or no impact on performance during your migration.
====
+
If you plan to upgrade to a `V5`-compatible driver before or during your migration, then the potential performance impact depends on which clusters support `V5`:
+
--
* **Neither cluster supports `V5`**: You won't notice any protocol-related performance impact before or during the migration because the driver and {product-proxy} cannot negotiate `V5` in this scenario.

* **Only the target cluster supports `V5`**: You won't notice any protocol-related performance impact during the migration because {product-proxy} must negotiate a protocol version that is supported by both clusters.
If the origin cluster doesn't support `V5`, then {product-proxy} cannot negotiate `V5` during the migration, and the driver cannot negotiate `V5` before the migration.
+
However, you might experience a protocol-related performance impact at the end of the migration when you connect your client application directly to the target cluster.
This phase removes {product-proxy} and the origin cluster from the protocol negotiation, allowing the driver to negotiate directly with the target cluster.
If the target cluster supports `V5`, the driver can use `V5` automatically.

* **Both clusters support `V5`**: Unless you <<disallow-or-explicitly-downgrade-the-protocol-version,block `V5`>>, you might experience performance impacts because the driver and {product-proxy} can use `V5` automatically in this scenario.
Consider upgrading the driver before or after the migration so you can isolate the impact of that change without the added complexity of the migration.
As a best practice for any significant version upgrade, run performance tests in lower environments to evaluate the potential impact before making the change in production.
--

[#disallow-or-explicitly-downgrade-the-protocol-version]
=== Disallow or explicitly downgrade the protocol version

You can restrict protocol versions in the driver and {product-proxy} configuration:

Driver configuration::
You can explicitly downgrade the protocol version in your client application's driver configuration.
Make sure the enforced protocol version is supported by both clusters.
+
Use this option if you need to enforce the protocol version outside of the migration.
For example:
+
* Both clusters and the driver support `V5` but you don't want to use `V5`: Configure the protocol version in the driver before the migration if you haven't done so already.
* The origin cluster _doesn't_ support `V5` and you want to ensure `V5` isn't used automatically after the migration: Configure the protocol version in the driver at any point before the end of the migration when you connect your client application directly to the target cluster.
* You observe unacceptable performance degradation when using `V5` before the migration (without {product-proxy}):
Either mitigate the performance issues before the migration, or configure the protocol version in the driver before the migration.
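+
As an illustrative sketch with the DataStax Java driver 4.x, you can pin the protocol version in `application.conf`; other drivers expose an equivalent setting, so check your driver's documentation:
+
[source,hocon]
----
datastax-java-driver {
  # Pin the protocol version so the driver never negotiates V5
  advanced.protocol.version = V4
}
----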

{product-proxy} configuration::
You can use the `xref:ROOT:manage-proxy-instances.adoc#blocked-protocol-versions[blocked_protocol_versions]` configuration variable to block specific protocol versions at the proxy level.
Make sure at least one mutually supported protocol version isn't blocked.
+
This option applies _only_ while {product-proxy} is in use.
It _doesn't_ persist after the migration.
+
Use this option if you observe unacceptable performance degradation when {product-proxy} is active _and_ it negotiates `V5`.
If unacceptable performance degradation occurs _without_ {product-proxy}, then configure the protocol version in the driver instead.
However, be aware that {product-proxy} itself can have a performance impact, regardless of the protocol version.
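+
For example, a minimal sketch in `zdm_proxy_advanced_config.yml`, assuming the variable accepts a quoted version name (check the variable's documentation for the exact value format):
+
[source,yaml]
----
# Illustrative only: prevent V5 from being negotiated through the proxy
blocked_protocol_versions: "V5"
----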

=== Thrift isn't supported by {product-proxy}

@@ -160,6 +245,11 @@ For more information, see xref:datastax-drivers:developing:query-idempotence.ado
[#client-compression]
== Client compression

[IMPORTANT]
====
LZ4 and Snappy compression algorithms require {product-proxy} version 2.4.0 or later.
====

The binary protocol used by {astra}, {dse-short}, {hcd-short}, and open-source {cass-short} supports optional compression of transport-level requests and responses that reduces network traffic at the cost of CPU overhead.

When establishing connections from client applications, {product-proxy} responds with a list of compression algorithms supported by both clusters.