diff --git a/daprdocs/content/en/_index.md b/daprdocs/content/en/_index.md index 642107a7b98..0e125140352 100644 --- a/daprdocs/content/en/_index.md +++ b/daprdocs/content/en/_index.md @@ -87,6 +87,13 @@ you tackle the challenges that come with building microservices and keeps your c
+ <!-- Added homepage card "Roadmap": "Learn about Dapr's roadmap and change process." -->
diff --git a/daprdocs/content/en/concepts/configuration-concept.md b/daprdocs/content/en/concepts/configuration-concept.md index f9b89ad4fa0..a4a85939592 100644 --- a/daprdocs/content/en/concepts/configuration-concept.md +++ b/daprdocs/content/en/concepts/configuration-concept.md @@ -6,9 +6,13 @@ weight: 400 description: "Change the behavior of Dapr application sidecars or globally on Dapr control plane system services" --- -Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications, or the global behavior of the Dapr control plane system services. For example, you can set an ACL policy on the application sidecar configuration which indicates which methods can be called from another application, or on the Dapr control plane configuration you can change the certificate renewal period for all certificates that are deployed to application sidecar instances. +With Dapr configurations, you use settings and policies to change: +- The behavior of individual Dapr applications +- The global behavior of the Dapr control plane system services -Configurations are defined and deployed as a YAML file. An application configuration example is shown below, which demonstrates an example of setting a tracing endpoint for where to send the metrics information, capturing all the sample traces. +For example, set a sampling rate policy on the application sidecar configuration to indicate which methods can be called from another application. If you set a policy on the Dapr control plane configuration, you can change the certificate renewal period for all certificates that are deployed to application sidecar instances. + +Configurations are defined and deployed as a YAML file. In the following application configuration example, a tracing endpoint is set for where to send the metrics information, capturing all the sample traces. ```yaml apiVersion: dapr.io/v1alpha1 @@ -23,9 +27,11 @@ spec: endpointAddress: "http://localhost:9411/api/v2/spans" ``` -This configuration configures tracing for metrics recording. It can be loaded in local self-hosted mode by editing the default configuration file called `config.yaml` file in your `.dapr` directory, or by applying it to your Kubernetes cluster with kubectl/helm. +The above YAML configures tracing for metrics recording. You can load it in local self-hosted mode by either: +- Editing the default configuration file called `config.yaml` file in your `.dapr` directory, or +- Applying it to your Kubernetes cluster with `kubectl/helm`. -Here is an example of the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace. +The following example shows the Dapr control plane configuration called `daprsystem` in the `dapr-system` namespace. ```yaml apiVersion: dapr.io/v1alpha1 @@ -40,8 +46,14 @@ spec: allowedClockSkew: "15m" ``` -Visit [overview of Dapr configuration options]({{}}) for a list of the configuration options. +By default, there is a single configuration file called `daprsystem` installed with the Dapr control plane system services. This configuration file applies global control plane settings and is set up when Dapr is deployed to Kubernetes. + +[Learn more about configuration options.]({{< ref "configuration-overview.md" >}}) -{{% alert title="Note" color="primary" %}} -Dapr application and control plane configurations should not be confused with the configuration building block API that enables applications to retrieve key/value data from configuration store components. 
Read the [Configuration building block]({{< ref configuration-api-overview >}}) for more information. +{{% alert title="Important" color="warning" %}} +Dapr application and control plane configurations should not be confused with the [configuration building block API]({{< ref configuration-api-overview >}}), which enables applications to retrieve key/value data from configuration store components. {{% /alert %}} + +## Next steps + +{{< button text="Learn more about configuration" page="configuration-overview" >}} \ No newline at end of file diff --git a/daprdocs/content/en/concepts/overview.md b/daprdocs/content/en/concepts/overview.md index 106a772e5c4..ff0f7ad3114 100644 --- a/daprdocs/content/en/concepts/overview.md +++ b/daprdocs/content/en/concepts/overview.md @@ -108,7 +108,7 @@ Deploying and running a Dapr-enabled application into your Kubernetes cluster is ### Clusters of physical or virtual machines -The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses [Hashicorp Consul service]({{< ref setup-nr-consul >}}), also running in HA mode. +The Dapr control plane services can be deployed in high availability (HA) mode to clusters of physical or virtual machines in production. In the diagram below, the Actor `Placement` and security `Sentry` services are started on three different VMs to provide HA control plane. In order to provide name resolution using DNS for the applications running in the cluster, Dapr uses multicast DNS by default, but can also optionally support [Hashicorp Consul service]({{< ref setup-nr-consul >}}). Architecture diagram of Dapr control plane and Consul deployed to VMs in high availability mode diff --git a/daprdocs/content/en/contributing/roadmap.md b/daprdocs/content/en/contributing/roadmap.md index d3a7909357f..6c1093ecbd9 100644 --- a/daprdocs/content/en/contributing/roadmap.md +++ b/daprdocs/content/en/contributing/roadmap.md @@ -2,47 +2,9 @@ type: docs title: "Dapr Roadmap" linkTitle: "Roadmap" -description: "The Dapr Roadmap is a tool to help with visibility into investments across the Dapr project" +description: "The Dapr Roadmap gives the community visibility into the different priorities of the projecs" weight: 30 no_list: true --- - -Dapr encourages the community to help with prioritization. A GitHub project board is available to view and provide feedback on proposed issues and track them across development. - -[Screenshot of the Dapr Roadmap board](https://aka.ms/dapr/roadmap) - -{{< button text="View the backlog" link="https://aka.ms/dapr/roadmap" color="primary" >}} -
- -Please vote by adding a 👍 on the GitHub issues for the feature capabilities you would most like to see Dapr support. This will help the Dapr maintainers understand which features will provide the most value. - -Contributions from the community is also welcomed. If there are features on the roadmap that you are interested in contributing to, please comment on the GitHub issue and include your solution proposal. - -{{% alert title="Note" color="primary" %}} -The Dapr roadmap includes issues only from the v1.2 release and onwards. Issues closed and released prior to v1.2 are not included. -{{% /alert %}} - -## Stages - -The Dapr Roadmap progresses through the following stages: - -{{< cardpane >}} -{{< card title="**[📄 Backlog](https://github.com/orgs/dapr/projects/52#column-14691591)**" >}} - Issues (features) that need 👍 votes from the community to prioritize. Updated by Dapr maintainers. -{{< /card >}} -{{< card title="**[⏳ Planned (Committed)](https://github.com/orgs/dapr/projects/52#column-14561691)**" >}} - Issues with a proposal and/or targeted release milestone. This is where design proposals are discussed and designed. -{{< /card >}} -{{< card title="**[👩‍💻 In Progress (Development)](https://github.com/orgs/dapr/projects/52#column-14561696)**" >}} - Implementation specifics have been agreed upon and the feature is under active development. -{{< /card >}} -{{< /cardpane >}} -{{< cardpane >}} -{{< card title="**[☑ Done](https://github.com/orgs/dapr/projects/52#column-14561700)**" >}} - The feature capability has been completed and is scheduled for an upcoming release. -{{< /card >}} -{{< card title="**[✅ Released](https://github.com/orgs/dapr/projects/52#column-14659973)**" >}} - The feature is released and available for use. -{{< /card >}} -{{< /cardpane >}} +See [this document](https://github.com/dapr/community/blob/master/roadmap.md) to view the Dapr project's roadmap. diff --git a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md index bb96b9b2326..7bb1bcf0e85 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/actors/actors-overview.md @@ -104,7 +104,7 @@ The Dapr actor runtime provides a simple turn-based access model for accessing a ### State -Transactional state stores can be used to store actor state. To specify which state store to use for actors, specify value of property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors. +Transactional state stores can be used to store actor state. Regardless of whether you intend to store any state in your actor, you must specify a value for property `actorStateStore` as `true` in the state store component's metadata section. Actors state is stored with a specific scheme in transactional state stores, allowing for consistent querying. Only a single state store component can be used as the state store for all actors. 
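As a minimal sketch, enabling this on a state store component looks like the following (the Redis store type, component name, and connection values here are illustrative assumptions, not requirements of this page):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  # Required for actors: marks this component as the actor state store
  - name: actorStateStore
    value: "true"
```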
Read the [state API reference]({{< ref state_api.md >}}) and the [actors API reference]({{< ref actors_api.md >}}) to learn more about state stores for actors. ### Actor timers and reminders diff --git a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md index 27dc31fc2b1..0a5dba3b10c 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md +++ b/daprdocs/content/en/developing-applications/building-blocks/jobs/howto-schedule-and-handle-triggered-jobs.md @@ -94,6 +94,75 @@ In this example, at trigger time, which is `@every 1s` according to the `Schedul At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job at trigger time. For example: +#### HTTP + +When you create a job using Dapr's Jobs API, Dapr will automatically assume there is an endpoint available at +`/job/`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job +events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is +triggered. For example: + +*Note: The following example is in Go but applies to any programming language.* + +```go + +func main() { + ... + http.HandleFunc("/job/", handleJob) + http.HandleFunc("/job/", specificJob) + ... +} + +func specificJob(w http.ResponseWriter, r *http.Request) { + // Handle specific triggered job +} + +func handleJob(w http.ResponseWriter, r *http.Request) { + // Handle the triggered jobs +} +``` + +#### gRPC + +When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following +callback function: + +*Note: The following example is in Go but applies to any programming language with gRPC support.* + +```go +import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1" +... +func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) { + // Handle the triggered job +} +``` + +This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that +you register the callback server, which will invoke this function when a job is triggered: + +```go +... +js := &JobService{} +rtv1.RegisterAppCallbackAlphaServer(server, js) +``` + +In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly +through this gRPC method. + +#### SDKs + +For SDK users, handling triggered jobs is simpler. When a job is triggered, Dapr will automatically route the job to the +event handler you set up during the server initialization. For example, in Go, you'd register the event handler like this: + +```go +... +if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil { + log.Fatalf("failed to register job event handler: %v", err) +} +``` + +Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with +the triggered job data. Here’s an example of handling the triggered job: + ```go // ... @@ -103,11 +172,9 @@ func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error { if err := json.Unmarshal(job.Data, &jobData); err != nil { // ... } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - // ... 
var jobPayload api.DBBackup - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { // ... } fmt.Printf("job %d received:\n type: %v \n typeurl: %v\n value: %v\n extracted payload: %v\n", jobCount, job.JobType, jobData.TypeURL, jobData.Value, jobPayload) @@ -146,4 +213,4 @@ dapr run --app-id=distributed-scheduler \ ## Next steps - [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}}) -- [Jobs API reference]({{< ref jobs_api.md >}}) \ No newline at end of file +- [Jobs API reference]({{< ref jobs_api.md >}}) diff --git a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md index 28c3cb8f1ec..680b0361152 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md +++ b/daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-non-dapr-endpoints.md @@ -38,11 +38,9 @@ The diagram below is an overview of how Dapr's service invocation works when inv Diagram showing the steps of service invocation to non-Dapr endpoints 1. Service A makes an HTTP call targeting Service B, a non-Dapr endpoint. The call goes to the local Dapr sidecar. -2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL. -3. Dapr forwards the message to Service B. -4. Service B runs its business logic code. -5. Service B sends a response to Service A's Dapr sidecar. -6. Service A receives the response. +2. Dapr discovers Service B's location using the `HTTPEndpoint` or FQDN URL then forwards the message to Service B. +3. Service B sends a response to Service A's Dapr sidecar. +4. Service A receives the response. ## Using an HTTPEndpoint resource or FQDN URL for non-Dapr endpoints There are two ways to invoke a non-Dapr endpoint when communicating either to Dapr applications or non-Dapr applications. A Dapr application can invoke a non-Dapr endpoint by providing one of the following: diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md index b4fa5a44388..ed9f747b882 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-overview.md @@ -106,8 +106,25 @@ Want to skip the quickstarts? Not a problem. You can try out the workflow buildi ## Limitations -- **State stores:** As of the 1.12.0 beta release of Dapr Workflow, using the NoSQL databases as a state store results in limitations around storing internal states. For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. -- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, if you scale out Dapr sidecars or your application pods to more than 2, then the concurrency of the workflow execution drops. It is recommended to test with 1 or 2 instances, and no more than 2. +- **State stores:** Due to underlying limitations in some database choices, more commonly NoSQL databases, you might run into limitations around storing internal states. 
For example, CosmosDB has a maximum single operation item limit of only 100 states in a single request. +- **Horizontal scaling:** As of the 1.12.0 beta release of Dapr Workflow, it is recommended to use a maximum of two instances of Dapr per workflow application. This limitation is resolved in Dapr 1.14.x when enabling the scheduler service. + +To enable the scheduler service to work for Dapr Workflows, make sure you're using Dapr 1.14.x or later and assign the following configuration to your app: + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Configuration +metadata: + name: schedulerconfig +spec: + tracing: + samplingRate: "1" + features: + - name: SchedulerReminders + enabled: true +``` + +See more info about [enabling preview features]({{}}). ## Watch the demo diff --git a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md index fe6f69b63c2..ba3ab432f1b 100644 --- a/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md +++ b/daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md @@ -749,7 +749,7 @@ def status_monitor_workflow(ctx: wf.DaprWorkflowContext, job: JobStatus): ctx.call_activity(send_alert, input=f"Job '{job.job_id}' is unhealthy!") next_sleep_interval = 5 # check more frequently when unhealthy - yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(seconds=next_sleep_interval)) + yield ctx.create_timer(fire_at=ctx.current_utc_datetime + timedelta(minutes=next_sleep_interval)) # restart from the beginning with a new JobStatus input ctx.continue_as_new(job) @@ -896,7 +896,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { } if status == "healthy" { job.IsHealthy = true - sleepInterval = time.Second * 60 + sleepInterval = time.Minutes * 60 } else { if job.IsHealthy { job.IsHealthy = false @@ -905,7 +905,7 @@ func StatusMonitorWorkflow(ctx *workflow.WorkflowContext) (any, error) { return "", err } } - sleepInterval = time.Second * 5 + sleepInterval = time.Minutes * 5 } if err := ctx.CreateTimer(sleepInterval).Await(nil); err != nil { return "", err diff --git a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md index 554ca118a23..c7504b56cc2 100644 --- a/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md +++ b/daprdocs/content/en/developing-applications/integrations/Diagrid/diagrid-conductor.md @@ -26,6 +26,4 @@ By studying past resource behavior, recommend application resource optimization The application graph facilitates collaboration between dev and ops by providing a dynamic overview of your services and infrastructure components. -Try out [Conductor Free](https://www.diagrid.io/pricing), ideal for individual developers building and testing Dapr applications on Kubernetes. 
- {{< button text="Learn more about Diagrid Conductor" link="https://www.diagrid.io/conductor" >}} diff --git a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md index 8b52eedb1d6..9435df1944f 100644 --- a/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/jobs-quickstart.md @@ -273,23 +273,20 @@ func deleteJob(ctx context.Context, in *common.InvocationEvent) (out *common.Con // Handler that handles job events func handleJob(ctx context.Context, job *common.JobEvent) error { - var jobData common.Job - if err := json.Unmarshal(job.Data, &jobData); err != nil { - return fmt.Errorf("failed to unmarshal job: %v", err) - } - decodedPayload, err := base64.StdEncoding.DecodeString(jobData.Value) - if err != nil { - return fmt.Errorf("failed to decode job payload: %v", err) - } - var jobPayload JobData - if err := json.Unmarshal(decodedPayload, &jobPayload); err != nil { - return fmt.Errorf("failed to unmarshal payload: %v", err) - } + var jobData common.Job + if err := json.Unmarshal(job.Data, &jobData); err != nil { + return fmt.Errorf("failed to unmarshal job: %v", err) + } - fmt.Println("Starting droid:", jobPayload.Droid) - fmt.Println("Executing maintenance job:", jobPayload.Task) + var jobPayload JobData + if err := json.Unmarshal(job.Data, &jobPayload); err != nil { + return fmt.Errorf("failed to unmarshal payload: %v", err) + } - return nil + fmt.Println("Starting droid:", jobPayload.Droid) + fmt.Println("Executing maintenance job:", jobPayload.Task) + + return nil } ``` diff --git a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md index 5a025aed855..da1ec1590f9 100644 --- a/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md +++ b/daprdocs/content/en/getting-started/quickstarts/workflow-quickstart.md @@ -66,12 +66,18 @@ Install the Dapr Python SDK package: pip3 install -r requirements.txt ``` +Return to the `python/sdk` directory: + +```bash +cd .. +``` + + ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `python/sdk` directory, run the following command: ```bash -cd workflows/python/sdk dapr run -f . ``` @@ -308,12 +314,11 @@ Install the dependencies: cd ./javascript/sdk npm install npm run build -cd .. ``` ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `javascript/sdk` directory, run the following command: ```bash dapr run -f . 
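# Note: `dapr run -f .` reads the Multi-App Run template file `dapr.yaml` in the
# current directory. A minimal sketch of that template is shown below; the app
# ID, path, and command are illustrative assumptions, not the quickstart's file:
#
#   version: 1
#   apps:
#     - appID: order-processor
#       appDirPath: ./order-processor/
#       command: ["npm", "run", "start"]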
@@ -515,15 +520,28 @@ Clone the [sample provided in the Quickstarts repo](https://github.com/dapr/quic git clone https://github.com/dapr/quickstarts.git ``` -In a new terminal window, navigate to the `sdk` directory: +In a new terminal window, navigate to the `order-processor` directory: + +```bash +cd workflows/csharp/sdk/order-processor +``` + +Install the dependencies: ```bash -cd workflows/csharp/sdk +dotnet restore +dotnet build +``` + +Return to the `csharp/sdk` directory: + +```bash +cd .. ``` ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `csharp/sdk` directory, run the following command: ```bash dapr run -f . @@ -628,25 +646,24 @@ OrderPayload orderInfo = new OrderPayload(itemToPurchase, 15000, ammountToPurcha // Start the workflow Console.WriteLine("Starting workflow {0} purchasing {1} {2}", orderId, ammountToPurchase, itemToPurchase); -await daprClient.StartWorkflowAsync( - workflowComponent: DaprWorkflowComponent, - workflowName: nameof(OrderProcessingWorkflow), +await daprWorkflowClient.ScheduleNewWorkflowAsync( + name: nameof(OrderProcessingWorkflow), input: orderInfo, instanceId: orderId); // Wait for the workflow to start and confirm the input -GetWorkflowResponse state = await daprClient.WaitForWorkflowStartAsync( - instanceId: orderId, - workflowComponent: DaprWorkflowComponent); +WorkflowState state = await daprWorkflowClient.WaitForWorkflowStartAsync( + instanceId: orderId); -Console.WriteLine("Your workflow has started. Here is the status of the workflow: {0}", state.RuntimeStatus); +Console.WriteLine($"{nameof(OrderProcessingWorkflow)} (ID = {orderId}) started successfully with {state.ReadInputAs()}"); // Wait for the workflow to complete +using var ctx = new CancellationTokenSource(TimeSpan.FromSeconds(5)); state = await daprClient.WaitForWorkflowCompletionAsync( instanceId: orderId, - workflowComponent: DaprWorkflowComponent); + cancellation: ctx.Token); -Console.WriteLine("Workflow Status: {0}", state.RuntimeStatus); +Console.WriteLine("Workflow Status: {0}", state.ReadCustomStatusAs()); ``` #### `order-processor/Workflows/OrderProcessingWorkflow.cs` @@ -697,7 +714,7 @@ class OrderProcessingWorkflow : Workflow nameof(UpdateInventoryActivity), new PaymentRequest(RequestId: orderId, order.Name, order.Quantity, order.TotalCost)); } - catch (TaskFailedException) + catch (WorkflowTaskFailedException) { // Let them know their payment was processed await context.CallActivityAsync( @@ -779,9 +796,15 @@ Install the dependencies: mvn clean install ``` +Return to the `java/sdk` directory: + +```bash +cd .. +``` + ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). From the `java/sdk` directory, run the following command: ```bash cd workflows/java/sdk @@ -1114,7 +1137,7 @@ cd workflows/go/sdk ### Step 3: Run the order processor app -In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}): +In the terminal, start the order processor app alongside a Dapr sidecar using [Multi-App Run]({{< ref multi-app-dapr-run >}}). 
From the `go/sdk` directory, run the following command: ```bash dapr run -f . diff --git a/daprdocs/content/en/operations/configuration/api-allowlist.md b/daprdocs/content/en/operations/configuration/api-allowlist.md index 75930dba8bf..b0a02d9bf1f 100644 --- a/daprdocs/content/en/operations/configuration/api-allowlist.md +++ b/daprdocs/content/en/operations/configuration/api-allowlist.md @@ -6,17 +6,17 @@ weight: 4500 description: "Choose which Dapr sidecar APIs are available to the app" --- -In certain scenarios, such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs that are being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application. +In scenarios such as zero trust networks or when exposing the Dapr sidecar to external traffic through a frontend, it's recommended to only enable the Dapr sidecar APIs being used by the app. Doing so reduces the attack surface and helps keep the Dapr APIs scoped to the actual needs of the application. -Dapr allows developers to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{}}). +Dapr allows you to control which APIs are accessible to the application by setting an API allowlist or denylist using a [Dapr Configuration]({{< ref "configuration-schema.md" >}}). ### Default behavior If no API allowlist or denylist is specified, the default behavior is to allow access to all Dapr APIs. -- If only a denylist is defined, all Dapr APIs are allowed except those defined in the denylist -- If only an allowlist is defined, only the Dapr APIs listed in the allowlist are allowed -- If both an allowlist and a denylist are defined, the allowed APIs are those defined in the allowlist, unless they are also included in the denylist. In other words, the denylist overrides the allowlist for APIs that are defined in both. +- If you've only defined a denylist, all Dapr APIs are allowed except those defined in the denylist +- If you've only defined an allowlist, only the Dapr APIs listed in the allowlist are allowed +- If you've defined both an allowlist and a denylist, the denylist overrides the allowlist for APIs that are defined in both. - If neither is defined, all APIs are allowed. 
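As a minimal sketch (the configuration name is an illustrative assumption), an allowlist that exposes only the state HTTP API looks like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: myappconfig
spec:
  api:
    allowed:
    - name: state    # the Dapr API to allow
      version: v1.0  # HTTP API version
      protocol: http
```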
For example, the following configuration enables all APIs for both HTTP and gRPC: @@ -119,14 +119,18 @@ See this list of values corresponding to the different Dapr APIs: | [Service Invocation]({{< ref service_invocation_api.md >}}) | `invoke` (`v1.0`) | `invoke` (`v1`) | | [State]({{< ref state_api.md>}})| `state` (`v1.0` and `v1.0-alpha1`) | `state` (`v1` and `v1alpha1`) | | [Pub/Sub]({{< ref pubsub.md >}}) | `publish` (`v1.0` and `v1.0-alpha1`) | `publish` (`v1` and `v1alpha1`) | +| [Output Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) | | Subscribe | n/a | `subscribe` (`v1alpha1`) | -| [(Output) Bindings]({{< ref bindings_api.md >}}) | `bindings` (`v1.0`) |`bindings` (`v1`) | | [Secrets]({{< ref secrets_api.md >}})| `secrets` (`v1.0`) | `secrets` (`v1`) | | [Actors]({{< ref actors_api.md >}}) | `actors` (`v1.0`) |`actors` (`v1`) | | [Metadata]({{< ref metadata_api.md >}}) | `metadata` (`v1.0`) |`metadata` (`v1`) | | [Configuration]({{< ref configuration_api.md >}}) | `configuration` (`v1.0` and `v1.0-alpha1`) | `configuration` (`v1` and `v1alpha1`) | | [Distributed Lock]({{< ref distributed_lock_api.md >}}) | `lock` (`v1.0-alpha1`)
<br>`unlock` (`v1.0-alpha1`) | `lock` (`v1alpha1`)<br>
`unlock` (`v1alpha1`) | -| Cryptography | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) | +| [Cryptography]({{< ref cryptography_api.md >}}) | `crypto` (`v1.0-alpha1`) | `crypto` (`v1alpha1`) | | [Workflow]({{< ref workflow_api.md >}}) | `workflows` (`v1.0-alpha1`) |`workflows` (`v1alpha1`) | | [Health]({{< ref health_api.md >}}) | `healthz` (`v1.0`) | n/a | | Shutdown | `shutdown` (`v1.0`) | `shutdown` (`v1`) | + +## Next steps + +{{< button text="Configure Dapr to use gRPC" page="grpc" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/configuration-overview.md b/daprdocs/content/en/operations/configuration/configuration-overview.md index 0cba2414ca4..7225fc11f2f 100644 --- a/daprdocs/content/en/operations/configuration/configuration-overview.md +++ b/daprdocs/content/en/operations/configuration/configuration-overview.md @@ -1,30 +1,44 @@ --- type: docs -title: "Overview of Dapr configuration options" +title: "Dapr configuration" linkTitle: "Overview" weight: 100 -description: "Information on Dapr configuration and how to set options for your application" +description: "Overview of Dapr configuration" --- -## Sidecar configuration +Dapr configurations are settings and policies that enable you to change both the behavior of individual Dapr applications, or the global behavior of the Dapr control plane system services. -### Setup sidecar configuration +[for more information, read the configuration concept.]({{< ref configuration-concept.md >}}) -#### Self-hosted sidecar +## Application configuration -In self hosted mode the Dapr configuration is a configuration file, for example `config.yaml`. By default the Dapr sidecar looks in the default Dapr folder for the runtime configuration eg: `$HOME/.dapr/config.yaml` in Linux/MacOS and `%USERPROFILE%\.dapr\config.yaml` in Windows. +### Set up application configuration -A Dapr sidecar can also apply a configuration by using a `--config` flag to the file path with `dapr run` CLI command. +You can set up application configuration either in self-hosted or Kubernetes mode. -#### Kubernetes sidecar +{{< tabs "Self-hosted" Kubernetes >}} -In Kubernetes mode the Dapr configuration is a Configuration resource, that is applied to the cluster. For example: + +{{% codetab %}} + +In self hosted mode, the Dapr configuration is a [configuration file]({{< ref configuration-schema.md >}}) - for example, `config.yaml`. By default, the Dapr sidecar looks in the default Dapr folder for the runtime configuration: +- Linux/MacOs: `$HOME/.dapr/config.yaml` +- Windows: `%USERPROFILE%\.dapr\config.yaml` + +An application can also apply a configuration by using a `--config` flag to the file path with `dapr run` CLI command. + +{{% /codetab %}} + + +{{% codetab %}} + +In Kubernetes mode, the Dapr configuration is a Configuration resource, that is applied to the cluster. For example: ```bash kubectl apply -f myappconfig.yaml ``` -You can use the Dapr CLI to list the Configuration resources +You can use the Dapr CLI to list the Configuration resources for applications. ```bash dapr configurations -k @@ -40,11 +54,15 @@ A Dapr sidecar can apply a specific configuration by using a `dapr.io/config` an dapr.io/config: "myappconfig" ``` -Note: There are more [Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service. 
+> **Note:** [See all Kubernetes annotations]({{< ref "arguments-annotations-overview.md" >}}) available to configure the Dapr sidecar on activation by sidecar Injector system service. + +{{% /codetab %}} -### Sidecar configuration settings +{{< /tabs >}} -The following configuration settings can be applied to Dapr application sidecars: +### Application configuration settings + +The following menu includes all of the configuration settings you can set on the sidecar. - [Tracing](#tracing) - [Metrics](#metrics) @@ -68,7 +86,7 @@ The `tracing` section under the `Configuration` spec contains the following prop tracing: samplingRate: "1" otel: - endpointAddress: "https://..." + endpointAddress: "otelcollector.observability.svc.cluster.local:4317" zipkin: endpointAddress: "http://zipkin.default.svc.cluster.local:9411/api/v2/spans" ``` @@ -79,15 +97,22 @@ The following table lists the properties for tracing: |--------------|--------|-------------| | `samplingRate` | string | Set sampling rate for tracing to be enabled or disabled. | `stdout` | bool | True write more verbose information to the traces -| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to +| `otel.endpointAddress` | string | Set the Open Telemetry (OTEL) server address to send traces to. This may or may not require the https:// or http:// depending on your OTEL provider. | `otel.isSecure` | bool | Is the connection to the endpoint address encrypted | `otel.protocol` | string | Set to `http` or `grpc` protocol -| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to +| `zipkin.endpointAddress` | string | Set the Zipkin server address to send traces to. This should include the protocol (http:// or https://) on the endpoint. + +##### `samplingRate` -`samplingRate` is used to enable or disable the tracing. To disable the sampling rate , -set `samplingRate : "0"` in the configuration. The valid range of samplingRate is between 0 and 1 inclusive. The sampling rate determines whether a trace span should be sampled or not based on value. `samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001) or 1 in 10,000 traces. +`samplingRate` is used to enable or disable the tracing. The valid range of `samplingRate` is between `0` and `1` inclusive. The sampling rate determines whether a trace span should be sampled or not based on value. -The OpenTelemetry (otel) endpoint can also be configured via an environment variables. The presence of the OTEL_EXPORTER_OTLP_ENDPOINT environment variable +`samplingRate : "1"` samples all traces. By default, the sampling rate is (0.0001), or 1 in 10,000 traces. + +To disable the sampling rate, set `samplingRate : "0"` in the configuration. + +##### `otel` + +The OpenTelemetry (`otel`) endpoint can also be configured via an environment variable. The presence of the `OTEL_EXPORTER_OTLP_ENDPOINT` environment variable turns on tracing for the sidecar. | Environment Variable | Description | @@ -100,9 +125,9 @@ See [Observability distributed tracing]({{< ref "tracing-overview.md" >}}) for m #### Metrics -The metrics section can be used to enable or disable metrics for an application. +The `metrics` section under the `Configuration` spec can be used to enable or disable metrics for an application. 
-The `metrics` section under the `Configuration` spec contains the following properties: +The `metrics` section contains the following properties: ```yml metrics: @@ -122,7 +147,7 @@ metrics: excludeVerbs: false ``` -In the examples above this path filter `/orders/{orderID}/items/{itemID}` would return a single metric count matching all the orderIDs and all the itemIDs rather than multiple metrics for each itemID. For more information see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}) +In the examples above, the path filter `/orders/{orderID}/items/{itemID}` would return _a single metric count_ matching all the `orderID`s and all the `itemID`s, rather than multiple metrics for each `itemID`. For more information, see [HTTP metrics path matching]({{< ref "metrics-overview.md#http-metrics-path-matching" >}}) The following table lists the properties for metrics: @@ -135,7 +160,7 @@ The following table lists the properties for metrics: | `http.pathMatching` | array | Array of paths for path matching, allowing users to define matching paths to manage cardinality. | | `http.excludeVerbs` | boolean | When set to true (default is false), the Dapr HTTP server ignores each request HTTP verb when building the method metric label. | -To further help managing cardinality, path matching allows specified paths matched according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption. +To further help manage cardinality, path matching allows you to match specified paths according to defined patterns, reducing the number of unique metrics paths and thus controlling metric cardinality. This feature is particularly useful for applications with dynamic URLs, ensuring that metrics remain meaningful and manageable without excessive memory consumption. Using rules, you can set regular expressions for every metric exposed by the Dapr sidecar. For example: @@ -154,9 +179,9 @@ See [metrics documentation]({{< ref "metrics-overview.md" >}}) for more informat #### Logging -The logging section can be used to configure how logging works in the Dapr Runtime. +The `logging` section under the `Configuration` spec is used to configure how logging works in the Dapr Runtime. -The `logging` section under the `Configuration` spec contains the following properties: +The `logging` section contains the following properties: ```yml logging: @@ -178,8 +203,7 @@ See [logging documentation]({{< ref "logs.md" >}}) for more information. #### Middleware -Middleware configuration set named HTTP pipeline middleware handlers -The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contains the following properties: +Middleware configuration sets named HTTP pipeline middleware handlers. The `httpPipeline` and the `appHttpPipeline` section under the `Configuration` spec contain the following properties: ```yml httpPipeline: # for incoming http calls @@ -203,13 +227,13 @@ The following table lists the properties for HTTP handlers: | `name` | string | Name of the middleware component | `type` | string | Type of middleware component -See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information +See [Middleware pipelines]({{< ref "middleware.md" >}}) for more information. 
#### Name resolution component -You can set name resolution component to use within the configuration YAML. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below. +You can set name resolution components to use within the configuration file. For example, to set the `spec.nameResolution.component` property to `"sqlite"`, pass configuration options in the `spec.nameResolution.configuration` dictionary as shown below. -This is the basic example of a configuration resource: +This is a basic example of a configuration resource: ```yaml apiVersion: dapr.io/v1alpha1 @@ -226,7 +250,7 @@ spec: For more information, see: - [The name resolution component documentation]({{< ref supported-name-resolution >}}) for more examples. -- - [The Configuration YAML documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component. +- [The Configuration file documentation]({{< ref configuration-schema.md >}}) to learn more about how to configure name resolution per component. #### Scope secret store access @@ -234,11 +258,11 @@ See the [Scoping secrets]({{< ref "secret-scope.md" >}}) guide for information a #### Access Control allow lists for building block APIs -See the [selectively enable Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) guide for information and examples on how to set ACLs on the building block APIs lists. +See the guide for [selectively enabling Dapr APIs on the Dapr sidecar]({{< ref "api-allowlist.md" >}}) for information and examples on how to set access control allow lists (ACLs) on the building block APIs lists. #### Access Control allow lists for service invocation API -See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which using service invocation API. +See the [Allow lists for service invocation]({{< ref "invoke-allowlist.md" >}}) guide for information and examples on how to set allow lists with ACLs which use the service invocation API. #### Disallow usage of certain component types @@ -258,13 +282,23 @@ spec: - secretstores.local.file ``` -You can optionally specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component. +Optionally, you can specify a version to disallow by adding it at the end of the component name. For example, `state.in-memory/v1` disables initializing components of type `state.in-memory` and version `v1`, but does not disable a (hypothetical) `v2` version of the component. + +{{% alert title="Note" color="primary" %}} + When you add the component type `secretstores.kubernetes` to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`. -> Note: One special note applies to the component type `secretstores.kubernetes`. When you add that component to the denylist, Dapr forbids the creation of _additional_ components of type `secretstores.kubernetes`. However, it does not disable the built-in Kubernetes secret store, which is created by Dapr automatically and is used to store secrets specified in Components specs. 
If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}). + However, it does not disable the built-in Kubernetes secret store, which is: + - Created by Dapr automatically + - Used to store secrets specified in Components specs + + If you want to disable the built-in Kubernetes secret store, you need to use the `dapr.io/disable-builtin-k8s-secret-store` [annotation]({{< ref arguments-annotations-overview.md >}}). +{{% /alert %}} #### Turning on preview features -See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt-in to preview features for a release. Preview feature enable new capabilities to be added that still need more time until they become generally available (GA) in the runtime. +See the [preview features]({{< ref "preview-features.md" >}}) guide for information and examples on how to opt-in to preview features for a release. + +Enabling preview features unlock new capabilities to be added for dev/test, since they still need more time before becoming generally available (GA) in the runtime. ### Example sidecar configuration @@ -316,7 +350,9 @@ spec: ## Control plane configuration -There is a single configuration file called `daprsystem` installed with the Dapr control plane system services that applies global settings. This is only set up when Dapr is deployed to Kubernetes. +A single configuration file called `daprsystem` is installed with the Dapr control plane system services that applies global settings. + +> **This is only set up when Dapr is deployed to Kubernetes.** ### Control plane configuration settings @@ -353,3 +389,7 @@ spec: allowedClockSkew: 15m workloadCertTTL: 24h ``` + +## Next steps + +{{< button text="Learn about concurrency and rate limits" page="control-concurrency" >}} diff --git a/daprdocs/content/en/operations/configuration/control-concurrency.md b/daprdocs/content/en/operations/configuration/control-concurrency.md index 85b240c19b5..976b78ab980 100644 --- a/daprdocs/content/en/operations/configuration/control-concurrency.md +++ b/daprdocs/content/en/operations/configuration/control-concurrency.md @@ -3,30 +3,57 @@ type: docs title: "How-To: Control concurrency and rate limit applications" linkTitle: "Concurrency & rate limits" weight: 2000 -description: "Control how many requests and events will invoke your application simultaneously" +description: "Learn how to control how many requests and events can invoke your application simultaneously" --- -A common scenario in distributed computing is to only allow for a given number of requests to execute concurrently. -Using Dapr, you can control how many requests and events will invoke your application simultaneously. +Typically, in distributed computing, you may only want to allow for a given number of requests to execute concurrently. Using Dapr's `app-max-concurrency`, you can control how many requests and events can invoke your application simultaneously. -*Note that this rate limiting is guaranteed for every event that's coming from Dapr, meaning Pub/Sub events, direct invocation from other services, bindings events etc. Dapr can't enforce the concurrency policy on requests that are coming to your app externally.* +Default `app-max-concurreny` is set to `-1`, meaning no concurrency. -*Note that rate limiting per second can be achieved by using the **middleware.http.ratelimit** middleware. 
However, there is an important difference between the two approaches. The rate limit middleware is time bound and limits the number of requests per second, while the `app-max-concurrency` flag specifies the number of concurrent requests (and events) at any point of time. See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}). * +## Different approaches -Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting ". +While this guide focuses on `app-max-concurrency`, you can also limit request rate per second using the **`middleware.http.ratelimit`** middleware. However, it's important to understand the difference between the two approaches: + +- `middleware.http.ratelimit`: Time bound and limits the number of requests per second +- `app-max-concurrency`: Specifies the number of concurrent requests (and events) at any point of time. + +See [Rate limit middleware]({{< ref middleware-rate-limit.md >}}) for more information about that approach. + +## Demo + +Watch this [video](https://youtu.be/yRI5g6o_jp8?t=1710) on how to control concurrency and rate limiting.
-## Setting app-max-concurrency +## Configure `app-max-concurrency` + +Without using Dapr, you would need to create some sort of a semaphore in the application and take care of acquiring and releasing it. + +Using Dapr, you don't need to make any code changes to your application. + +Select how you'd like to configure `app-max-concurrency`. + +{{< tabs "CLI" Kubernetes >}} + + +{{% codetab %}} + +To set concurrency limits with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag: + +```bash +dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py +``` -Without using Dapr, a developer would need to create some sort of a semaphore in the application and take care of acquiring and releasing it. -Using Dapr, there are no code changes needed to an app. +The above example effectively turns your app into a single concurrent service. -### Setting app-max-concurrency in Kubernetes +{{% /codetab %}} -To set app-max-concurrency in Kubernetes, add the following annotation to your pod: + +{{% codetab %}} + +To configure concurrency limits in Kubernetes, add the following annotation to your pod: ```yaml apiVersion: apps/v1 @@ -50,15 +77,22 @@ spec: dapr.io/app-id: "nodesubscriber" dapr.io/app-port: "3000" dapr.io/app-max-concurrency: "1" -... +#... ``` -### Setting app-max-concurrency using the Dapr CLI +{{% /codetab %}} -To set app-max-concurrency with the Dapr CLI for running on your local dev machine, add the `app-max-concurrency` flag: +{{< /tabs >}} -```bash -dapr run --app-max-concurrency 1 --app-port 5000 python ./app.py -``` +## Limitations + +### Controlling concurrency on external requests +Rate limiting is guaranteed for every event coming _from_ Dapr, including pub/sub events, direct invocation from other services, bindings events, etc. However, Dapr can't enforce the concurrency policy on requests that are coming _to_ your app externally. + +## Related links + +[Arguments and annotations]({{< ref arguments-annotations-overview.md >}}) + +## Next steps -The above examples will effectively turn your app into a single concurrent service. +{{< button text="Limit secret store access" page="secret-scope" >}} diff --git a/daprdocs/content/en/operations/configuration/grpc.md b/daprdocs/content/en/operations/configuration/grpc.md index 59c51a4cec3..5ab2df15f07 100644 --- a/daprdocs/content/en/operations/configuration/grpc.md +++ b/daprdocs/content/en/operations/configuration/grpc.md @@ -3,20 +3,21 @@ type: docs title: "How-To: Configure Dapr to use gRPC" linkTitle: "Use gRPC interface" weight: 5000 -description: "How to configure Dapr to use gRPC for low-latency, high performance scenarios" +description: "Configure Dapr to use gRPC for low-latency, high performance scenarios" --- -Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. - -You can find a list of auto-generated clients [here]({{< ref sdks >}}). +Dapr implements both an HTTP and a gRPC API for local calls. gRPC is useful for low-latency, high performance scenarios and has language integration using the proto clients. [You can see the full list of auto-generated clients (Dapr SDKs)]({{< ref sdks >}}). The Dapr runtime implements a [proto service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/dapr.proto) that apps can communicate with via gRPC. -In addition to calling Dapr via gRPC, Dapr can communicate with an application via gRPC. 
To do that, the app needs to host a gRPC server and implements the [Dapr appcallback service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto) +Not only can you call Dapr via gRPC, Dapr can communicate with an application via gRPC. To do that, the app needs to host a gRPC server and implement the [Dapr `appcallback` service](https://github.com/dapr/dapr/blob/master/dapr/proto/runtime/v1/appcallback.proto) ## Configuring Dapr to communicate with an app via gRPC -### Self hosted +{{< tabs "Self-hosted" Kubernetes >}} + + +{{% codetab %}} When running in self hosted mode, use the `--app-protocol` flag to tell Dapr to use gRPC to talk to the app: @@ -25,8 +26,10 @@ dapr run --app-protocol grpc --app-port 5005 node app.js ``` This tells Dapr to communicate with your app via gRPC over port `5005`. +{{% /codetab %}} -### Kubernetes + +{{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: @@ -52,5 +55,13 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-protocol: "grpc" dapr.io/app-port: "5005" -... -``` \ No newline at end of file +#... +``` + +{{% /codetab %}} + +{{< /tabs >}} + +## Next steps + +{{< button text="Handle large HTTP header sizes" page="increase-read-buffer-size" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md b/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md index a8528e09bad..9fcb80c4fb0 100644 --- a/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md +++ b/daprdocs/content/en/operations/configuration/increase-read-buffer-size.md @@ -1,20 +1,23 @@ --- type: docs -title: "How-To: Handle large http header size" +title: "How-To: Handle large HTTP header size" linkTitle: "HTTP header size" weight: 6000 -description: "Configure a larger http read buffer size" +description: "Configure a larger HTTP read buffer size" --- -Dapr has a default limit of 4KB for the http header read buffer size. When sending http headers that are bigger than the default 4KB, you can increase this value. Otherwise, you may encounter a `Too big request header` service invocation error. You can change the http header size by using the `dapr.io/http-read-buffer-size` annotation or `--dapr-http-read-buffer-size` flag when using the CLI. - +Dapr has a default limit of 4KB for the HTTP header read buffer size. If you're sending HTTP headers larger than the default 4KB, you may encounter a `Too big request header` service invocation error. +You can increase the HTTP header size by using: +- The `dapr.io/http-read-buffer-size` annotation, or +- The `--dapr-http-read-buffer-size` flag when using the CLI. {{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When running in self hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size: +When running in self-hosted mode, use the `--dapr-http-read-buffer-size` flag to configure Dapr to use non-default http header size: ```bash dapr run --dapr-http-read-buffer-size 16 node app.js @@ -23,10 +26,11 @@ This tells Dapr to set maximum read buffer size to `16` KB. {{% /codetab %}} - + {{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: + ```yaml apiVersion: apps/v1 kind: Deployment @@ -49,7 +53,7 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-port: "8000" dapr.io/http-read-buffer-size: "16" -... +#... 
``` {{% /codetab %}} @@ -57,4 +61,8 @@ spec: {{< /tabs >}} ## Related links -- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) +[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Handle large HTTP body requests" page="increase-request-size" >}} diff --git a/daprdocs/content/en/operations/configuration/increase-request-size.md b/daprdocs/content/en/operations/configuration/increase-request-size.md index 2faadecf085..25461e3e83f 100644 --- a/daprdocs/content/en/operations/configuration/increase-request-size.md +++ b/daprdocs/content/en/operations/configuration/increase-request-size.md @@ -6,15 +6,16 @@ weight: 6000 description: "Configure http requests that are bigger than 4 MB" --- -By default Dapr has a limit for the request body size which is set to 4 MB, however you can change this by defining `dapr.io/http-max-request-size` annotation or `--dapr-http-max-request-size` flag. - - +By default, Dapr has a limit for the request body size, set to 4MB. You can change this by defining: +- The `dapr.io/http-max-request-size` annotation, or +- The `--dapr-http-max-request-size` flag. {{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When running in self hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use non-default request body size: +When running in self-hosted mode, use the `--dapr-http-max-request-size` flag to configure Dapr to use non-default request body size: ```bash dapr run --dapr-http-max-request-size 16 node app.js @@ -23,10 +24,11 @@ This tells Dapr to set maximum request body size to `16` MB. {{% /codetab %}} - + {{% codetab %}} On Kubernetes, set the following annotations in your deployment YAML: + ```yaml apiVersion: apps/v1 kind: Deployment @@ -49,7 +51,7 @@ spec: dapr.io/app-id: "myapp" dapr.io/app-port: "8000" dapr.io/http-max-request-size: "16" -... +#... ``` {{% /codetab %}} @@ -57,4 +59,9 @@ spec: {{< /tabs >}} ## Related links -- [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +[Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Install sidecar certificates" page="install-certificates" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/install-certificates.md b/daprdocs/content/en/operations/configuration/install-certificates.md index 071753ef93d..7c2b79f8c86 100644 --- a/daprdocs/content/en/operations/configuration/install-certificates.md +++ b/daprdocs/content/en/operations/configuration/install-certificates.md @@ -6,20 +6,26 @@ weight: 6500 description: "Configure the Dapr sidecar container to trust certificates" --- -The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted. For example, using an HTTP binding or configuring an outbound proxy for the sidecar. Both certificate authority (CA) certificates and leaf certificates are supported. +The Dapr sidecar can be configured to trust certificates for communicating with external services. This is useful in scenarios where a self-signed certificate needs to be trusted, such as: +- Using an HTTP binding +- Configuring an outbound proxy for the sidecar + +Both certificate authority (CA) certificates and leaf certificates are supported. 
{{< tabs Self-hosted Kubernetes >}} + {{% codetab %}} -When the sidecar is not running inside a container, certificates must be directly installed on the host operating system. +You can make the following configurations when the sidecar is running as a container. + +1. Configure certificates to be available to the sidecar container using volume mounts. +1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates. + +> **Note:** For Windows containers, make sure the container is running with administrator privileges so it can install the certificates. -When the sidecar is running as a container: -1. Certificates must be available to the sidecar container. This can be configured using volume mounts. -1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates. -1. For Windows containers, the container needs to run with administrator privileges to be able to install the certificates. +The following example uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container: -Below is an example that uses Docker Compose to install certificates (present locally in the `./certificates` directory) in the sidecar container: ```yaml version: '3' services: @@ -39,16 +45,22 @@ services: # user: ContainerAdministrator ``` -{{% /codetab %}} +> **Note:** When the sidecar is not running inside a container, certificates must be directly installed on the host operating system. +{{% /codetab %}} + {{% codetab %}} On Kubernetes: -1. Certificates must be available to the sidecar container using a volume mount. -1. The environment variable `SSL_CERT_DIR` must be set in the sidecar container, pointing to the directory containing the certificates. -The YAML below is an example of a deployment that attaches a pod volume to the sidecar, and sets `SSL_CERT_DIR` to install the certificates. +1. Configure certificates to be available to the sidecar container using a volume mount. +1. Point the environment variable `SSL_CERT_DIR` in the sidecar container to the directory containing the certificates. + +The following example YAML shows a deployment that: +- Attaches a pod volume to the sidecar +- Sets `SSL_CERT_DIR` to install the certificates + ```yaml apiVersion: apps/v1 kind: Deployment @@ -77,23 +89,21 @@ spec: - name: certificates-vol hostPath: path: /certificates -... +#... ``` -**Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers. +> **Note**: When using Windows containers, the sidecar container is started with admin privileges, which is required to install the certificates. This does not apply to Linux containers. {{% /codetab %}} {{< /tabs >}} -
- -All the certificates in the directory pointed by `SSL_CERT_DIR` are installed. +After following these steps, all the certificates in the directory pointed to by `SSL_CERT_DIR` are installed. -1. On Linux containers, all the certificate extensions supported by OpenSSL are supported. For more information, see https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html -1. On Windows container, all the certificate extensions supported by certoc.exe are supported. For more information, see certoc.exe present in [Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore) +- **On Linux containers:** All the certificate extensions supported by OpenSSL are supported. [Learn more.](https://www.openssl.org/docs/man1.1.1/man1/openssl-rehash.html) +- **On Windows containers:** All the certificate extensions supported by `certoc.exe` are supported. [See certoc.exe present in Windows Server Core](https://hub.docker.com/_/microsoft-windows-servercore). -## Example +## Demo Watch the demo on installing SSL certificates and securely using the HTTP binding in community call 64: @@ -106,3 +116,7 @@ Watch the demo on installing SSL certificates and securely using the HTTP - [HTTP binding spec]({{< ref http.md >}}) - [(Kubernetes) How-to: Mount Pod volumes to the Dapr sidecar]({{< ref kubernetes-volume-mounts.md >}}) - [Dapr Kubernetes pod annotations spec]({{< ref arguments-annotations-overview.md >}}) + +## Next steps + +{{< button text="Enable preview features" page="preview-features" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/invoke-allowlist.md b/daprdocs/content/en/operations/configuration/invoke-allowlist.md index 725c8308491..f9afe029926 100644 --- a/daprdocs/content/en/operations/configuration/invoke-allowlist.md +++ b/daprdocs/content/en/operations/configuration/invoke-allowlist.md @@ -3,71 +3,87 @@ type: docs title: "How-To: Apply access control list configuration for service invocation" linkTitle: "Service Invocation access control" weight: 4000 -description: "Restrict what operations *calling* applications can perform, via service invocation, on the *called* application" +description: "Restrict what operations calling applications can perform" --- -Access control enables the configuration of policies that restrict what operations *calling* applications can perform, via service invocation, on the *called* application. To limit access to a called applications from specific operations and HTTP verbs from the calling applications, you can define an access control policy specification in configuration. +Using access control, you can configure policies that restrict what operations _calling_ applications can perform, via service invocation, on the _called_ application. You can define an access control policy specification in the Configuration schema to limit access: +- To a called application from specific operations, and +- To HTTP verbs from the calling applications. -An access control policy is specified in configuration and be applied to Dapr sidecar for the *called* application. Example access policies are shown below and access to the called app is based on the matched policy action. You can provide a default global action for all calling applications and if no access control policy is specified, the default behavior is to allow all calling applications to access to the called app. +An access control policy is specified in Configuration and applied to the Dapr sidecar for the _called_ application.
Access to the called app is based on the matched policy action. -## Concepts +You can provide a default global action for all calling applications. If no access control policy is specified, the default behavior is to allow all calling applications to access the called app. -**TrustDomain** - A "trust domain" is a logical group to manage trust relationships. Every application is assigned a trust domain which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. +[See examples of access policies.](#example-scenarios) -**App Identity** - Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) id for all applications and this id is attached in the TLS cert. The SPIFFE id is of the format: `**spiffe://\<trustdomain\>/ns/\<namespace\>/\<appid\>**`. For matching policies, the trust domain, namespace and app ID values of the calling app are extracted from the SPIFFE id in the TLS cert of the calling app. These values are matched against the trust domain, namespace and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. +## Terminology + +### `trustDomain` + +A "trust domain" is a logical group that manages trust relationships. Every application is assigned a trust domain, which can be specified in the access control list policy spec. If no policy spec is defined or an empty trust domain is specified, then a default value "public" is used. This trust domain is used to generate the identity of the application in the TLS cert. + +### App Identity + +Dapr requests the sentry service to generate a [SPIFFE](https://spiffe.io/) ID for all applications. This ID is attached in the TLS cert. + +The SPIFFE ID is of the format: `**spiffe://\<trustdomain\>/ns/\<namespace\>/\<appid\>**`. + +For matching policies, the trust domain, namespace, and app ID values of the calling app are extracted from the SPIFFE ID in the TLS cert of the calling app. These values are matched against the trust domain, namespace, and app ID values specified in the policy spec. If all three of these match, then more specific policies are further matched. ## Configuration properties -The following tables lists the different properties for access control, policies and operations: +The following tables list the different properties for access control, policies, and operations: ### Access Control | Property | Type | Description | |---------------|--------|-------------| -| defaultAction | string | Global default action when no other policy is matched -| trustDomain | string | Trust domain assigned to the application. Default is "public". -| policies | string | Policies to determine what operations the calling app can do on the called app +| `defaultAction` | string | Global default action when no other policy is matched +| `trustDomain` | string | Trust domain assigned to the application. Default is "public". +| `policies` | string | Policies to determine what operations the calling app can do on the called app ### Policies | Property | Type | Description | |---------------|--------|-------------| -| app | string | AppId of the calling app to allow/deny service invocation from -| namespace | string | Namespace value that needs to be matched with the namespace of the calling app -| trustDomain | string | Trust domain that needs to be matched with the trust domain of the calling app.
Default is "public" -| defaultAction | string | App level default action in case the app is found but no specific operation is matched -| operations | string | operations that are allowed from the calling app +| `app` | string | AppId of the calling app to allow/deny service invocation from +| `namespace` | string | Namespace value that needs to be matched with the namespace of the calling app +| `trustDomain` | string | Trust domain that needs to be matched with the trust domain of the calling app. Default is "public" +| `defaultAction` | string | App level default action in case the app is found but no specific operation is matched +| `operations` | string | Operations that are allowed from the calling app ### Operations | Property | Type | Description | | -------- | ------ | ------------------------------------------------------------ | -| name | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. | -| httpVerb | list | List specific http verbs that can be used by the calling app. Wildcard "\*" can be used to match any http verb. Unused for grpc invocation. | -| action | string | Access modifier. Accepted values "allow" (default) or "deny" | +| `name` | string | Path name of the operations allowed on the called app. Wildcard "\*" can be used in a path to match. Wildcard "\**" can be used to match under multiple paths. | +| `httpVerb` | list | List specific HTTP verbs that can be used by the calling app. Wildcard "\*" can be used to match any HTTP verb. Unused for gRPC invocation. | +| `action` | string | Access modifier. Accepted values "allow" (default) or "deny" | ## Policy rules -1. If no access policy is specified, the default behavior is to allow all apps to access to all methods on the called app -2. If no global default action is specified and no app specific policies defined, the empty access policy is treated like no access policy specified and the default behavior is to allow all apps to access to all methods on the called app. -3. If no global default action is specified but some app specific policies have been defined, then we resort to a more secure option of assuming the global default action to deny access to all methods on the called app. -4. If an access policy is defined and if the incoming app credentials cannot be verified, then the global default action takes effect. -5. If either the trust domain or namespace of the incoming app do not match the values specified in the app policy, the app policy is ignored and the global default action takes effect. +1. If no access policy is specified, the default behavior is to allow all apps to access all methods on the called app. +1. If no global default action is specified and no app-specific policies are defined, the empty access policy is treated like no access policy is specified. The default behavior is to allow all apps to access all methods on the called app. +1. If no global default action is specified but some app-specific policies have been defined, Dapr resorts to the more secure option of assuming the global default action is to deny access to all methods on the called app (see the sketch after this list). +1. If an access policy is defined and the incoming app credentials cannot be verified, then the global default action takes effect. +1. If either the trust domain or the namespace of the incoming app does not match the values specified in the app policy, the app policy is ignored and the global default action takes effect.
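+As a minimal illustration of the default-action rules above, the following sketch applies a configuration that sets only a global default action and no policies, denying all callers until explicit policies are added. The configuration name is illustrative; the spec fields mirror the scenario examples below:
+
+```bash
+# Apply a locked-down access control configuration with no per-app policies
+kubectl apply -f - <<EOF
+apiVersion: dapr.io/v1alpha1
+kind: Configuration
+metadata:
+  name: lockdownconfig
+spec:
+  accessControl:
+    defaultAction: deny
+    trustDomain: "public"
+EOF
+```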
## Policy priority The action corresponding to the most specific policy matched takes effect as ordered below: 1. Specific HTTP verbs in the case of HTTP or the operation level action in the case of GRPC. -2. The default action at the app level -3. The default action at the global level +1. The default action at the app level +1. The default action at the global level ## Example scenarios Below are some example scenarios for using access control lists for service invocation. See [configuration guidance]({{< ref "configuration-concept.md" >}}) to understand the available configuration settings for an application sidecar. -Scenario 1: Deny access to all apps except where trustDomain = public, namespace = default, appId = app1 +### Scenario 1: -With this configuration, all calling methods with appId = app1 are allowed and all other invocation requests from other applications are denied +Deny access to all apps except where `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1` + +With this configuration, all calling methods with `appId` = `app1` are allowed. All other invocation requests from other applications are denied. ```yaml apiVersion: dapr.io/v1alpha1 @@ -85,9 +101,11 @@ spec: namespace: "default" ``` -Scenario 2: Deny access to all apps except trustDomain = public, namespace = default, appId = app1, operation = op1 +### Scenario 2: + +Deny access to all apps except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `op1` -With this configuration, only method op1 from appId = app1 is allowed and all other method requests from all other apps, including other methods on app1, are denied +With this configuration, only the method `op1` from `appId` = `app1` is allowed. All other method requests from all other apps, including other methods on `app1`, are denied. ```yaml apiVersion: dapr.io/v1alpha1 @@ -109,12 +127,16 @@ spec: action: allow ``` -Scenario 3: Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched +### Scenario 3: + +Deny access to all apps except when a specific verb for HTTP and operation for GRPC is matched + +With this configuration, only the scenarios below are allowed access. All other method requests from all other apps, including other methods on `app1` or `app2`, are denied. + +- `trustDomain` = `public`, `namespace` = `default`, `appID` = `app1`, `operation` = `op1`, `httpVerb` = `POST`/`PUT` +- `trustDomain` = `"myDomain"`, `namespace` = `"ns1"`, `appID` = `app2`, `operation` = `op2` and application protocol is GRPC -With this configuration, the only scenarios below are allowed access and and all other method requests from all other apps, including other methods on app1 or app2, are denied -* trustDomain = public, namespace = default, appID = app1, operation = op1, http verb = POST/PUT -* trustDomain = "myDomain", namespace = "ns1", appID = app2, operation = op2 and application protocol is GRPC -, only HTTP verbs POST/PUT on method op1 from appId = app1 are allowed and all other method requests from all other apps, including other methods on app1, are denied +Only the `httpVerb` `POST`/`PUT` on method `op1` from `appId` = `app1` are allowed. All other method requests from all other apps, including other methods on `app1`, are denied.
```yaml apiVersion: dapr.io/v1alpha1 @@ -143,7 +165,9 @@ spec: action: allow ``` -Scenario 4: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/*, all http verbs +### Scenario 4: + +Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/*`, all `httpVerb` ```yaml apiVersion: dapr.io/v1alpha1 @@ -165,9 +189,11 @@ spec: action: deny ``` -Scenario 5: Allow access to all methods for trustDomain = public, namespace = ns1, appId = app1 and deny access to all methods for trustDomain = public, namespace = ns2, appId = app1 +### Scenario 5: + +Allow access to all methods for `trustDomain` = `public`, `namespace` = `ns1`, `appId` = `app1` and deny access to all methods for `trustDomain` = `public`, `namespace` = `ns2`, `appId` = `app1` -This scenario shows how applications with the same app ID but belonging to different namespaces can be specified +This scenario shows how applications with the same app ID but belonging to different namespaces can be specified. ```yaml apiVersion: dapr.io/v1alpha1 @@ -189,7 +215,9 @@ spec: namespace: "ns2" ``` -Scenario 6: Allow access to all methods except trustDomain = public, namespace = default, appId = app1, operation = /op1/**/a, all http verbs +### Scenario 6: + +Allow access to all methods except `trustDomain` = `public`, `namespace` = `default`, `appId` = `app1`, `operation` = `/op1/**/a`, all `httpVerb` ```yaml apiVersion: dapr.io/v1alpha1 @@ -211,14 +239,15 @@ spec: action: deny ``` -## Hello world examples +## "hello world" examples -These examples show how to apply access control to the [hello world](https://github.com/dapr/quickstarts#quickstarts) quickstart samples where a python app invokes a node.js app. -Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE id for authentication, which means the Sentry service either has to be running locally or deployed to your hosting environment such as a Kubernetes cluster. +In these examples, you learn how to apply access control to the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials) tutorials. -The nodeappconfig example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the python app is in the `myDomain` trust domain and `default` namespace. The nodeapp is in the `public` trust domain. +Access control lists rely on the Dapr [Sentry service]({{< ref "security-concept.md" >}}) to generate the TLS certificates with a SPIFFE ID for authentication. This means the Sentry service either has to be running locally or deployed to your hosting environment, such as a Kubernetes cluster. -**nodeappconfig.yaml** +The `nodeappconfig` example below shows how to **deny** access to the `neworder` method from the `pythonapp`, where the Python app is in the `myDomain` trust domain and `default` namespace. The Node.js app is in the `public` trust domain. + +### nodeappconfig.yaml ```yaml apiVersion: dapr.io/v1alpha1 @@ -242,7 +271,7 @@ spec: action: deny ``` -**pythonappconfig.yaml** +### pythonappconfig.yaml ```yaml apiVersion: dapr.io/v1alpha1 @@ -258,95 +287,119 @@ spec: ``` ### Self-hosted mode -This example uses the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) quickstart.
-The following steps run the Sentry service locally with mTLS enabled, set up necessary environment variables to access certificates, and then launch both the node app and python app each referencing the Sentry service to apply the ACLs. +When walking through this tutorial, you: +- Run the Sentry service locally with mTLS enabled +- Set up necessary environment variables to access certificates +- Launch both the Node app and Python app each referencing the Sentry service to apply the ACLs + +#### Prerequisites + +- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +- Clone the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) tutorial - 1. Follow these steps to run the [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +#### Run the Node.js app - 2. In a command prompt, set these environment variables: +1. In a command prompt, set these environment variables: {{< tabs "Linux/MacOS" Windows >}} {{% codetab %}} - ```bash - export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` - export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` - export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` - export NAMESPACE=default - ``` + + ```bash + export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` + export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` + export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` + export NAMESPACE=default + ``` {{% /codetab %}} - {{% codetab %}} - ```powershell - $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) - $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) - $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) - $env:NAMESPACE="default" - ``` + {{% codetab %}} + + ```powershell + $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) + $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) + $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) + $env:NAMESPACE="default" + ``` {{% /codetab %}} {{< /tabs >}} -3. Run daprd to launch a Dapr sidecar for the node.js app with mTLS enabled, referencing the local Sentry service: +1. Run daprd to launch a Dapr sidecar for the Node.js app with mTLS enabled, referencing the local Sentry service: ```bash daprd --app-id nodeapp --dapr-grpc-port 50002 -dapr-http-port 3501 --log-level debug --app-port 3000 --enable-mtls --sentry-address localhost:50001 --config nodeappconfig.yaml ``` -4. Run the node app in a separate command prompt: +1. Run the Node.js app in a separate command prompt: ```bash node app.js ``` -5. In another command prompt, set these environment variables: +#### Run the Python app + +1. 
In another command prompt, set these environment variables: {{< tabs "Linux/MacOS" Windows >}} {{% codetab %}} - ```bash - export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` - export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` - export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` - export NAMESPACE=default - ``` + + ```bash + export DAPR_TRUST_ANCHORS=`cat $HOME/.dapr/certs/ca.crt` + export DAPR_CERT_CHAIN=`cat $HOME/.dapr/certs/issuer.crt` + export DAPR_CERT_KEY=`cat $HOME/.dapr/certs/issuer.key` + export NAMESPACE=default + ``` {{% /codetab %}} {{% codetab %}} + ```powershell $env:DAPR_TRUST_ANCHORS=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\ca.crt) $env:DAPR_CERT_CHAIN=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.crt) $env:DAPR_CERT_KEY=$(Get-Content -raw $env:USERPROFILE\.dapr\certs\issuer.key) $env:NAMESPACE="default" - ``` + ``` + {{% /codetab %}} {{< /tabs >}} -6. Run daprd to launch a Dapr sidecar for the python app with mTLS enabled, referencing the local Sentry service: +1. Run daprd to launch a Dapr sidecar for the Python app with mTLS enabled, referencing the local Sentry service: ```bash daprd --app-id pythonapp --dapr-grpc-port 50003 --metrics-port 9092 --log-level debug --enable-mtls --sentry-address localhost:50001 --config pythonappconfig.yaml ``` - -7. Run the python app in a separate command prompt: +1. Run the Python app in a separate command prompt: ```bash python app.py ``` -8. You should see the calls to the node app fail in the python app command prompt based due to the **deny** operation action in the nodeappconfig file. Change this action to **allow** and re-run the apps and you should then see this call succeed. +You should see the calls to the Node.js app fail in the Python app command prompt, due to the **deny** operation action in the `nodeappconfig` file. Change this action to **allow** and re-run the apps to see this call succeed. ### Kubernetes mode -This example uses the [hello kubernetes](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes/README.md) quickstart. -You can create and apply the above configuration files `nodeappconfig.yaml` and `pythonappconfig.yaml` as described in the [configuration]({{< ref "configuration-concept.md" >}}) to the Kubernetes deployments. +#### Prerequisites + +- Become familiar with running [Sentry service in self-hosted mode]({{< ref "mtls.md" >}}) with mTLS enabled +- Clone the [hello world](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-world/README.md) tutorial + +#### Configure the Node.js and Python apps + +You can create and apply the above [`nodeappconfig.yaml`](#nodeappconfigyaml) and [`pythonappconfig.yaml`](#pythonappconfigyaml) configuration files, as described in the [configuration]({{< ref "configuration-concept.md" >}}). + +For example, the Kubernetes Deployment below is how the Python app is deployed to Kubernetes in the default namespace with this `pythonappconfig` configuration file. -For example, below is how the pythonapp is deployed to Kubernetes in the default namespace with this pythonappconfig configuration file. -Do the same for the nodeapp deployment and then look at the logs for the pythonapp to see the calls fail due to the **deny** operation action set in the nodeappconfig file. Change this action to **allow** and re-deploy the apps and you should then see this call succeed. 
+Do the same for the Node.js deployment and look at the logs for the Python app to see the calls fail due to the **deny** operation action set in the `nodeappconfig` file. + +Change this action to **allow** and re-deploy the apps to see this call succeed. + +##### Deployment YAML example ```yaml apiVersion: apps/v1 @@ -375,9 +428,14 @@ spec: image: dapriosamples/hello-k8s-python:edge ``` -## Community call demo +## Demo + Watch this [video](https://youtu.be/j99RN_nxExA?t=1108) on how to apply access control list for service invocation.
-
\ No newline at end of file + + +## Next steps + +{{< button text="Dapr APIs allow list" page="api-allowlist" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/preview-features.md b/daprdocs/content/en/operations/configuration/preview-features.md index 387ba0fa6b9..1e442dcc587 100644 --- a/daprdocs/content/en/operations/configuration/preview-features.md +++ b/daprdocs/content/en/operations/configuration/preview-features.md @@ -6,23 +6,21 @@ weight: 7000 description: "How to specify and enable preview features" --- -## Overview -Preview features in Dapr are considered experimental when they are first released. These preview features require explicit opt-in in order to be used. The opt-in is specified in Dapr's configuration. +[Preview features]({{< ref support-preview-features >}}) in Dapr are considered experimental when they are first released. These preview features require you to explicitly opt-in to use them. You specify this opt-in in Dapr's Configuration file. Preview features are enabled on a per application basis by setting configuration when running an application instance. -### Preview features -The current list of preview features can be found [here]({{}}). - ## Configuration properties + The `features` section under the `Configuration` spec contains the following properties: | Property | Type | Description | |----------------|--------|-------------| -|name|string|The name of the preview feature that is enabled/disabled -|enabled|bool|Boolean specifying if the feature is enabled or disabled +|`name`|string|The name of the preview feature that is enabled/disabled +|`enabled`|bool|Boolean specifying if the feature is enabled or disabled ## Enabling a preview feature + Preview features are specified in the configuration. Here is an example of a full configuration that contains multiple features: ```yaml @@ -42,7 +40,11 @@ spec: enabled: true ``` -### Standalone +{{< tabs Self-hosted Kubernetes >}} + + +{{% codetab %}} + To enable preview features when running Dapr locally, either update the default configuration or specify a separate config file using `dapr run`. The default Dapr config is created when you run `dapr init`, and is located at: @@ -55,8 +57,11 @@ Alternately, you can update preview features on all apps run locally by specifyi dapr run --app-id myApp --config ./previewConfig.yaml ./app ``` +{{% /codetab %}} + + +{{% codetab %}} -### Kubernetes In Kubernetes mode, the configuration must be provided via a configuration component. Using the same configuration as above, apply it via `kubectl`: ```bash @@ -94,3 +99,11 @@ spec: - containerPort: 3000 imagePullPolicy: Always ``` + +{{% /codetab %}} + +{{< /tabs >}} + +## Next steps + +{{< button text="Configuration schema" page="configuration-schema" >}} \ No newline at end of file diff --git a/daprdocs/content/en/operations/configuration/secret-scope.md b/daprdocs/content/en/operations/configuration/secret-scope.md index 39796447268..5e129e1d7f9 100644 --- a/daprdocs/content/en/operations/configuration/secret-scope.md +++ b/daprdocs/content/en/operations/configuration/secret-scope.md @@ -3,12 +3,14 @@ type: docs title: "How-To: Limit the secrets that can be read from secret stores" linkTitle: "Limit secret store access" weight: 3000 -description: "To limit the secrets to which the Dapr application has access, users can define secret scopes by augmenting existing configuration resource with restrictive permissions." 
+description: "Define secret scopes by augmenting the existing configuration resource with restrictive permissions." --- -In addition to scoping which applications can access a given component, for example a secret store component (see [Scoping components]({{< ref "component-scopes.md">}})), a named secret store component itself can be scoped to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` list, applications can be restricted to access only specific secrets. +In addition to [scoping which applications can access a given component]({{< ref "component-scopes.md">}}), you can also scope a named secret store component to one or more secrets for an application. By defining `allowedSecrets` and/or `deniedSecrets` lists, you restrict applications to access only specific secrets. -Follow [these instructions]({{< ref "configuration-overview.md" >}}) to define a configuration resource. +For more information about configuring a Configuration resource: +- [Configuration overview]({{< ref configuration-overview.md >}}) +- [Configuration schema]({{< ref configuration-schema.md >}}) ## Configure secrets access @@ -38,38 +40,44 @@ When an `allowedSecrets` list is present with at least one element, only those s ## Permission priority -The `allowedSecrets` and `deniedSecrets` list values take priorty over the `defaultAccess`. +The `allowedSecrets` and `deniedSecrets` list values take priority over the `defaultAccess`. See how this works in the following example scenarios: -| Scenarios | defaultAccess | allowedSecrets | deniedSecrets | permission -|----- | ------- | -----------| ----------| ------------ -| 1 - Only default access | deny/allow | empty | empty | deny/allow -| 2 - Default deny with allowed list | deny | ["s1"] | empty | only "s1" can be accessed -| 3 - Default allow with denied list | allow | empty | ["s1"] | only "s1" cannot be accessed -| 4 - Default allow with allowed list | allow | ["s1"] | empty | only "s1" can be accessed -| 5 - Default deny with denied list | deny | empty | ["s1"] | deny -| 6 - Default deny/allow with both lists | deny/allow | ["s1"] | ["s2"] | only "s1" can be accessed +| | Scenarios | `defaultAccess` | `allowedSecrets` | `deniedSecrets` | `permission` +|--| ----- | ------- | -----------| ----------| ------------ +| 1 | Only default access | `deny`/`allow` | empty | empty | `deny`/`allow` +| 2 | Default deny with allowed list | `deny` | [`"s1"`] | empty | only `"s1"` can be accessed +| 3 | Default allow with denied list | `allow` | empty | [`"s1"`] | only `"s1"` cannot be accessed +| 4 | Default allow with allowed list | `allow` | [`"s1"`] | empty | only `"s1"` can be accessed +| 5 | Default deny with denied list | `deny` | empty | [`"s1"`] | `deny` +| 6 | Default deny/allow with both lists | `deny`/`allow` | [`"s1"`] | [`"s2"`] | only `"s1"` can be accessed ## Examples -### Scenario 1 : Deny access to all secrets for a secret store +### Scenario 1: Deny access to all secrets for a secret store -In Kubernetes cluster, the native Kubernetes secret store is added to Dapr application by default. In some scenarios it may be necessary to deny access to Dapr secrets for a given application. To add this configuration follow the steps below: +In a Kubernetes cluster, the native Kubernetes secret store is added to your Dapr application by default. In some scenarios, it may be necessary to deny access to Dapr secrets for a given application.
To add this configuration: -Define the following `appconfig.yaml` and apply it to the Kubernetes cluster using the command `kubectl apply -f appconfig.yaml`. +1. Define the following `appconfig.yaml`. -```yaml -apiVersion: dapr.io/v1alpha1 -kind: Configuration -metadata: - name: appconfig -spec: - secrets: - scopes: - - storeName: kubernetes - defaultAccess: deny -``` + ```yaml + apiVersion: dapr.io/v1alpha1 + kind: Configuration + metadata: + name: appconfig + spec: + secrets: + scopes: + - storeName: kubernetes + defaultAccess: deny + ``` + +1. Apply it to the Kubernetes cluster using the following command: -For applications that need to be denied access to the Kubernetes secret store, follow [these instructions]({{< ref kubernetes-overview >}}), and add the following annotation to the application pod. + ```bash + kubectl apply -f appconfig.yaml + ``` + +For applications that need to be denied access to the Kubernetes secret store, follow [the Kubernetes instructions]({{< ref kubernetes-overview >}}), adding the following annotation to the application pod. ```yaml dapr.io/config: appconfig @@ -77,7 +85,7 @@ dapr.io/config: appconfig With this defined, the application no longer has access to Kubernetes secret store. -### Scenario 2 : Allow access to only certain secrets in a secret store +### Scenario 2: Allow access to only certain secrets in a secret store To allow a Dapr application to have access to only certain secrets, define the following `config.yaml`: @@ -94,7 +102,7 @@ spec: allowedSecrets: ["secret1", "secret2"] ``` -This example defines configuration for secret store named vault. The default access to the secret store is `deny`, whereas some secrets are accessible by the application based on the `allowedSecrets` list. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar. +This example defines the configuration for a secret store named `vault`. The default access to the secret store is `deny`. Meanwhile, some secrets are accessible by the application based on the `allowedSecrets` list. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar. ### Scenario 3: Deny access to certain sensitive secrets in a secret store @@ -113,4 +121,8 @@ spec: deniedSecrets: ["secret1", "secret2"] ``` -The above configuration explicitly denies access to `secret1` and `secret2` from the secret store named vault while allowing access to all other secrets. Follow [these instructions]({{< ref configuration-overview.md >}}) to apply configuration to the sidecar. +This configuration explicitly denies access to `secret1` and `secret2` from the secret store named `vault`, while allowing access to all other secrets. Follow [the Sidecar configuration instructions]({{< ref "configuration-overview.md#sidecar-configuration" >}}) to apply configuration to the sidecar.
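+To check a scope like the ones above from the command line, call the secrets API through the sidecar and compare an allowed key with a denied one. A quick sketch, assuming a local sidecar on port 3500, the `vault` store from the example, and a hypothetical `secret3` that is not in the denied list:
+
+```bash
+# secret3 is not in deniedSecrets, so the default access (allow) applies
+curl http://localhost:3500/v1.0/secrets/vault/secret3
+
+# secret1 is listed in deniedSecrets, so the sidecar should reject the
+# request (an HTTP 403 response is expected)
+curl -i http://localhost:3500/v1.0/secrets/vault/secret1
+```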
+ +## Next steps + +{{< button text="Service invocation access control" page="invoke-allowlist" >}} diff --git a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md index 9172a28feb9..b4e8f02e64e 100644 --- a/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md +++ b/daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md @@ -7,10 +7,123 @@ description: "Configure Scheduler to persist its database to make it resilient t --- The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution. -By default, the Scheduler service database writes this data to a Persistent Volume Claim of 1Gb of size using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need additional configuration in some deployments or for a production environment. +By default, the Scheduler service database writes data to a Persistent Volume Claim of size `1Gi`, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). +This means that there is no additional parameter required to run the scheduler service reliably on most Kubernetes deployments, although you will need [additional configuration](#storage-class) if a default StorageClass is not available or when running a production environment. + +{{% alert title="Warning" color="warning" %}} +The default storage size for the Scheduler is `1Gi`, which is likely not sufficient for most production deployments. +Remember that the Scheduler is used for [Actor Reminders]({{< ref actors-timers-reminders.md >}}) and [Workflows]({{< ref workflow-overview.md >}}) when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled, and for the [Jobs API]({{< ref jobs_api.md >}}). +You may want to consider reinstalling Dapr with a larger Scheduler storage of at least `16Gi`. +For more information, see the [ETCD Storage Disk Size](#etcd-storage-disk-size) section below. +{{% /alert %}} ## Production Setup +### ETCD Storage Disk Size + +The default storage size for the Scheduler is `1Gi`. +This size is likely not sufficient for most production deployments. +When the storage size is exceeded, the Scheduler will log an error similar to the following: + +``` +error running scheduler: etcdserver: mvcc: database space exceeded +``` + +Knowing the safe upper bound for your storage size is not an exact science, and relies heavily on the number, persistence, and the data payload size of your application jobs. +The [Job API]({{< ref jobs_api.md >}}) and [Actor Reminders]({{< ref actors-timers-reminders.md >}}) (with the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature enabled) transparently map one-to-one to the usage of your applications. +Workflows (when the [SchedulerReminders]({{< ref support-preview-features.md >}}) preview feature is enabled) create a large number of jobs as Actor Reminders; however, these jobs are short-lived, matching the lifecycle of each workflow execution. +The data payload of jobs created by Workflows is typically empty or small. + +The Scheduler uses Etcd as its storage backend database.
+By design, Etcd persists historical transactions and data in the form of [Write-Ahead Logs (WAL) and snapshots](https://etcd.io/docs/v3.5/learning/persistent-storage-files/). +This means the actual disk usage of Scheduler will be higher than the current observable database state, often by several multiples. + +### Setting the Storage Size on Installation + +If you need to increase an **existing** Scheduler storage size, see the [Increase Scheduler Storage Size](#increase-existing-scheduler-storage-size) section below. +To increase the storage size (in this example, `16Gi`) for a **fresh** Dapr installation, you can use the following command: + +{{< tabs "Dapr CLI" "Helm" >}} + +{{% codetab %}} + +```bash +dapr init -k --set dapr_scheduler.cluster.storageSize=16Gi --set dapr_scheduler.etcdSpaceQuota=16Gi +``` + +{{% /codetab %}} + + +{{% codetab %}} + +```bash +helm upgrade --install dapr dapr/dapr \ +--version={{% dapr-latest-version short="true" %}} \ +--namespace dapr-system \ +--create-namespace \ +--set dapr_scheduler.cluster.storageSize=16Gi \ +--set dapr_scheduler.etcdSpaceQuota=16Gi \ +--wait +``` + +{{% /codetab %}} +{{< /tabs >}} + +#### Increase existing Scheduler Storage Size + +{{% alert title="Warning" color="warning" %}} +Not all storage providers support dynamic volume expansion. +Please see your storage provider documentation to determine if this feature is supported, and what to do if it is not. +{{% /alert %}} + +By default, each Scheduler will create a Persistent Volume and Persistent Volume Claim of size `1Gi` against the [default `standard` storage class](#storage-class) for each Scheduler replica. +These will look similar to the following, where in this example we are running Scheduler in HA mode. + +``` +NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 Bound pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO standard 3m25s +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-1 Bound pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO standard 3m25s +dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-2 Bound pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO standard 3m25s +``` + +``` +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +pvc-9f699d2e-f347-43b0-aa98-57dcf38229c5 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-0 standard 4m24s +pvc-eaad5fb1-98e9-42a5-bcc8-d45dba1c4b9f 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-2 standard 4m24s +pvc-f4c8be7b-ffbe-407b-954e-7688f2482caa 1Gi RWO Delete Bound dapr-system/dapr-scheduler-data-dir-dapr-scheduler-server-1 standard 4m24s +``` + +To expand the storage size of the Scheduler, follow these steps: + +1. First, ensure that the storage class supports volume expansion, and that the `allowVolumeExpansion` field is set to `true` if it is not already. + +```yaml +apiVersion: storage.k8s.io/v1 +kind: StorageClass +metadata: + name: standard +provisioner: my.driver +allowVolumeExpansion: true +... +``` + +2. Delete the Scheduler StatefulSet whilst preserving the Bound Persistent Volume Claims. + +```bash +kubectl delete sts -n dapr-system dapr-scheduler-server --cascade=orphan +``` + +3. Increase the size of the Persistent Volume Claims to the desired size by editing the `spec.resources.requests.storage` field. + Again in this case, we are assuming that the Scheduler is running in HA mode with 3 replicas.
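+If you prefer a non-interactive approach, the same change can be scripted with `kubectl patch`, one claim per replica. A sketch assuming the three default claim names shown above and a `16Gi` target size:
+
+```bash
+# Patch each Scheduler PVC to request the larger storage size
+for i in 0 1 2; do
+  kubectl patch pvc "dapr-scheduler-data-dir-dapr-scheduler-server-${i}" \
+    -n dapr-system \
+    --patch '{"spec": {"resources": {"requests": {"storage": "16Gi"}}}}'
+done
+```
+
+Alternatively, edit each claim interactively: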
+ +```bash +kubectl edit pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2 +``` + +4. Recreate the Scheduler StatefulSet by [installing Dapr with the desired storage size](#setting-the-storage-size-on-installation). + +### Storage Class + In case your Kubernetes deployment does not have a default storage class or you are configuring a production cluster, defining a storage class is required. A persistent volume is backed by a real disk that is provided by the hosted Cloud Provider or Kubernetes infrastructure platform. diff --git a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md index 3e7c090cbfd..78f0e2c7522 100644 --- a/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md +++ b/daprdocs/content/en/operations/hosting/self-hosted/self-hosted-with-docker.md @@ -138,6 +138,18 @@ services: command: ["./placement", "--port", "50006"] ports: - "50006:50006" + + scheduler: + image: "daprio/dapr" + command: ["./scheduler", "--port", "50007"] + ports: + - "50007:50007" + # WARNING - This is a tmpfs volume, your state will not be persisted across restarts + volumes: + - type: tmpfs + target: /data + tmpfs: + size: "10000" networks: hello-dapr: null @@ -147,6 +159,8 @@ services: To further learn how to run Dapr with Docker Compose, see the [Docker-Compose Sample](https://github.com/dapr/samples/tree/master/hello-docker-compose). +The above example also includes a scheduler definition that uses a non-persistent data store for testing and development purposes. + ## Run on Kubernetes If your deployment target is Kubernetes please use Dapr's first-class integration. 
Refer to the diff --git a/daprdocs/content/en/operations/observability/metrics/prometheus.md b/daprdocs/content/en/operations/observability/metrics/prometheus.md index 3c787602f85..04e49a42e87 100644 --- a/daprdocs/content/en/operations/observability/metrics/prometheus.md +++ b/daprdocs/content/en/operations/observability/metrics/prometheus.md @@ -93,13 +93,108 @@ helm install dapr-prom prometheus-community/prometheus -n dapr-monitoring --set alertmanager.persistence.enabled=false --set pushgateway.persistentVolume.enabled=false --set server.persistentVolume.enabled=false ``` +For automatic discovery of Dapr targets (Service Discovery), use: + +```bash + helm install dapr-prom prometheus-community/prometheus -f values.yaml -n dapr-monitoring --create-namespace +``` + +### `values.yaml` File + +```yaml +alertmanager: + persistence: + enabled: false +pushgateway: + persistentVolume: + enabled: false +server: + persistentVolume: + enabled: false + +# Adds additional scrape configurations to prometheus.yml +# Uses service discovery to find Dapr and Dapr sidecar targets +extraScrapeConfigs: |- + - job_name: dapr-sidecars + kubernetes_sd_configs: + - role: pod + relabel_configs: + - action: keep + regex: "true" + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_enabled + - action: keep + regex: "true" + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_enable_metrics + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_name + target_label: pod + - action: replace + regex: (.*);daprd + replacement: ${1}-dapr + source_labels: + - __meta_kubernetes_pod_annotation_dapr_io_app_id + - __meta_kubernetes_pod_container_name + target_label: service + - action: replace + replacement: ${1}:9090 + source_labels: + - __meta_kubernetes_pod_ip + target_label: __address__ + + - job_name: dapr + kubernetes_sd_configs: + - role: pod + relabel_configs: + - action: keep + regex: dapr + source_labels: + - __meta_kubernetes_pod_label_app_kubernetes_io_name + - action: keep + regex: dapr + source_labels: + - __meta_kubernetes_pod_label_app_kubernetes_io_part_of + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_label_app + target_label: app + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + replacement: ${1} + source_labels: + - __meta_kubernetes_pod_name + target_label: pod + - action: replace + replacement: ${1}:9090 + source_labels: + - __meta_kubernetes_pod_ip + target_label: __address__ +``` + 3. Validation Ensure Prometheus is running in your cluster. ```bash kubectl get pods -n dapr-monitoring +``` + +Expected output: +```bash NAME READY STATUS RESTARTS AGE dapr-prom-kube-state-metrics-9849d6cc6-t94p8 1/1 Running 0 4m58s dapr-prom-prometheus-alertmanager-749cc46f6-9b5t8 2/2 Running 0 4m58s @@ -110,6 +205,22 @@ dapr-prom-prometheus-pushgateway-688665d597-h4xx2 1/1 Running 0 dapr-prom-prometheus-server-694fd8d7c-q5d59 2/2 Running 0 4m58s ``` +### Access the Prometheus Dashboard + +To view the Prometheus dashboard and check service discovery: + +```bash +kubectl port-forward svc/dapr-prom-prometheus-server 9090:80 -n dapr-monitoring +``` + +Open a browser and visit `http://localhost:9090`. Navigate to **Status** > **Service Discovery** to verify that the Dapr targets are discovered correctly. 
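+You can also confirm from the command line that targets are being scraped, by querying the Prometheus HTTP API through the same port-forward. A quick sketch; the `dapr` job name comes from the `values.yaml` above, and `jq` is assumed to be installed:
+
+```bash
+# Lists the scraped Dapr control plane targets and their health (1 = up)
+curl -s http://localhost:9090/api/v1/query \
+  --data-urlencode 'query=up{job="dapr"}' \
+  | jq '.data.result[] | {instance: .metric.instance, up: .value[1]}'
+```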
+ +Prometheus Web UI + +You can see the `job_name` and its discovered targets. + +Prometheus Service Discovery + ## Example
diff --git a/daprdocs/content/en/operations/resiliency/policies.md b/daprdocs/content/en/operations/resiliency/policies.md index db72dd78c5c..086ca7fd5d0 100644 --- a/daprdocs/content/en/operations/resiliency/policies.md +++ b/daprdocs/content/en/operations/resiliency/policies.md @@ -35,7 +35,13 @@ If you don't specify a timeout value, the policy does not enforce a time and def ## Retries -With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. The following retry options are configurable: +With `retries`, you can define a retry strategy for failed operations, including requests failed due to triggering a defined timeout or circuit breaker policy. + +{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}} +Each [pub/sub component]({{< ref supported-pubsub >}}) has its own built-in retry behaviors. Explicitly applying a Dapr resiliency policy doesn't override these implicit retry policies. Rather, the resiliency policy augments the built-in retry, which can cause repetitive clustering of messages. +{{% /alert %}} + +The following retry options are configurable: | Retry option | Description | | ------------ | ----------- | diff --git a/daprdocs/content/en/operations/support/support-preview-features.md b/daprdocs/content/en/operations/support/support-preview-features.md index 943b35d0e49..221b24d8466 100644 --- a/daprdocs/content/en/operations/support/support-preview-features.md +++ b/daprdocs/content/en/operations/support/support-preview-features.md @@ -22,4 +22,4 @@ For CLI there is no explicit opt-in, just the version that this was first made a | **Actor State TTL** | Allow actors to save records to state stores with Time To Live (TTL) set to automatically clean up old data. In its current implementation, actor state with TTL may not be reflected correctly by clients, read [Actor State Transactions]({{< ref actors_api.md >}}) for more information. | `ActorStateTTL` | [Actor State Transactions]({{< ref actors_api.md >}}) | v1.11 | | **Component Hot Reloading** | Allows for Dapr-loaded components to be "hot reloaded". A component spec is reloaded when it is created/updated/deleted in Kubernetes or on file when running in self-hosted mode. Ignores changes to actor state stores and workflow backends. | `HotReload`| [Hot Reloading]({{< ref components-concept.md >}}) | v1.13 | | **Subscription Hot Reloading** | Allows for declarative subscriptions to be "hot reloaded". A subscription is reloaded either when it is created/updated/deleted in Kubernetes, or on file in self-hosted mode. In-flight messages are unaffected when reloading. | `HotReload`| [Hot Reloading]({{< ref "subscription-methods.md#declarative-subscriptions" >}}) | v1.14 | -| **Job actor reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, job actor reminders (used for scheduling actor reminders) are enabled through a preview feature and needs a feature flag. | `SchedulerReminders`| [Job actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 | +| **Scheduler Actor Reminders** | Whilst the [Scheduler service]({{< ref "concepts/dapr-services/scheduler.md" >}}) is deployed by default, Scheduler actor reminders (actor reminders stored in the Scheduler control plane service as opposed to the Placement control plane service actor reminder system) are enabled through a preview feature and need a feature flag.
| `SchedulerReminders`| [Scheduler actor reminders]({{< ref "jobs-overview.md#actor-reminders" >}}) | v1.14 | diff --git a/daprdocs/content/en/operations/support/support-release-policy.md b/daprdocs/content/en/operations/support/support-release-policy.md index 76a1a57302c..fbba03b5f14 100644 --- a/daprdocs/content/en/operations/support/support-release-policy.md +++ b/daprdocs/content/en/operations/support/support-release-policy.md @@ -24,7 +24,7 @@ A supported release means: From the 1.8.0 release onwards three (3) versions of Dapr are supported; the current and previous two (2) versions. Typically these are `MINOR` release updates. This means that there is a rolling window that moves forward for supported releases and it is your operational responsibility to remain up to date with these supported versions. If you have an older version of Dapr you may have to do intermediate upgrades to get to a supported version. -There will be at least 6 weeks between major.minor version releases giving users a 12 week (3 month) rolling window for upgrading. +There will be at least 13 weeks (3 months) between major.minor version releases, giving users at least a 9 month rolling window for upgrading from a non-supported version. For more details on the release process read [release cycle and cadence](https://github.com/dapr/community/blob/master/release-process.md) Patch support is for supported versions (current and previous). @@ -45,6 +45,10 @@ The table below shows the versions of Dapr releases that have been tested togeth | Release date | Runtime | CLI | SDKs | Dashboard | Status | Release notes | |--------------------|:--------:|:--------|---------|---------|---------|------------| +| September 16th 2024 | 1.14.4
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.4) | +| September 13th 2024 | 1.14.3
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | ⚠️ Recalled | [v1.14.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.3) | +| September 6th 2024 | 1.14.2
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.2 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.2) | +| August 14th 2024 | 1.14.1
| 1.14.1 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.1 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.1) | | August 14th 2024 | 1.14.0
| 1.14.0 | Java 1.12.0
Go 1.11.0
PHP 1.2.0
Python 1.14.0
.NET 1.14.0
JS 3.3.1 | 0.15.0 | Supported (current) | [v1.14.0 release notes](https://github.com/dapr/dapr/releases/tag/v1.14.0) | | May 29th 2024 | 1.13.4
| 1.13.0 | Java 1.11.0
Go 1.10.0
PHP 1.2.0
Python 1.13.0
.NET 1.13.0
JS 3.3.0 | 0.14.0 | Supported | [v1.13.4 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.4) | | May 21st 2024 | 1.13.3
| 1.13.0 | Java 1.11.0
Go 1.10.0
PHP 1.2.0
Python 1.13.0
.NET 1.13.0
JS 3.3.0 | 0.14.0 | Supported | [v1.13.3 release notes](https://github.com/dapr/dapr/releases/tag/v1.13.3) | @@ -134,13 +138,12 @@ General guidance on upgrading can be found for [self hosted mode]({{< ref self-h | | 1.8.6 | 1.9.6 | | | 1.9.6 | 1.10.7 | | 1.8.0 to 1.8.6 | N/A | 1.9.6 | -| 1.9.0 | N/A | 1.9.6 | -| 1.10.0 | N/A | 1.10.8 | -| 1.11.0 | N/A | 1.11.4 | -| 1.12.0 | N/A | 1.12.4 | -| 1.12.0 to 1.13.0 | N/A | 1.13.4 | -| 1.13.0 | N/A | 1.13.4 | -| 1.13.0 to 1.14.0 | N/A | 1.14.0 | +| 1.9.0 to 1.9.6 | N/A | 1.10.8 | +| 1.10.0 to 1.10.8 | N/A | 1.11.4 | +| 1.11.0 to 1.11.4 | N/A | 1.12.4 | +| 1.12.0 to 1.12.4 | N/A | 1.13.5 | +| 1.13.0 to 1.13.5 | N/A | 1.14.0 | +| 1.14.0 to 1.14.2 | N/A | 1.14.2 | ## Upgrade on Hosting platforms diff --git a/daprdocs/content/en/operations/support/support-security-issues.md b/daprdocs/content/en/operations/support/support-security-issues.md index 1ae3fce27c8..6e7b24a2d2b 100644 --- a/daprdocs/content/en/operations/support/support-security-issues.md +++ b/daprdocs/content/en/operations/support/support-security-issues.md @@ -52,7 +52,7 @@ The people who should have access to read your security report are listed in [`m code which allows the issue to be reproduced. Explain why you believe this to be a security issue in Dapr. 2. Put that information into an email. Use a descriptive title. -3. Send the email to [Dapr Maintainers (dapr@dapr.io)](mailto:dapr@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE) +3. Send an email to [Security (security@dapr.io)](mailto:security@dapr.io?subject=[Security%20Disclosure]:%20ISSUE%20TITLE) ## Response diff --git a/daprdocs/content/en/reference/api/jobs_api.md b/daprdocs/content/en/reference/api/jobs_api.md index 3a04ed1a9d4..45459867684 100644 --- a/daprdocs/content/en/reference/api/jobs_api.md +++ b/daprdocs/content/en/reference/api/jobs_api.md @@ -32,7 +32,7 @@ At least one of `schedule` or `dueTime` must be provided, but they can also be p Parameter | Description --------- | ----------- `name` | Name of the job you're scheduling -`data` | A protobuf message `@type`/`value` pair. `@type` must be of a [well-known type](https://protobuf.dev/reference/protobuf/google.protobuf). `value` is the serialized data. +`data` | A JSON serialized value or object. `schedule` | An optional schedule at which the job is to be run. Details of the format are below. `dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601. `repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration. 
@@ -43,9 +43,13 @@ Parameter | Description Systemd timer style cron accepts 6 fields: seconds | minutes | hours | day of month | month | day of week -0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-7/sun-sat +--- | --- | --- | --- | --- | --- +0-59 | 0-59 | 0-23 | 1-31 | 1-12/jan-dec | 0-6/sun-sat +##### Example 1 "0 30 * * * *" - every hour on the half hour + +##### Example 2 "0 15 3 * * *" - every day at 03:15 Period string expressions: @@ -63,13 +67,8 @@ Entry | Description | Equivalent ```json { - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "\"someData\"" - }, - "dueTime": "30s" - } + "data": "some data", + "dueTime": "30s" } ``` @@ -88,20 +87,14 @@ The following example curl command creates a job, naming the job `jobforjabba` a ```bash $ curl -X POST \ http://localhost:3500/v1.0-alpha1/jobs/jobforjabba \ - -H "Content-Type: application/json" + -H "Content-Type: application/json" \ -d '{ - "job": { - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - }, - "schedule": "@every 1m", - "repeats": 5 - } + "data": "{\"value\":\"Running spice\"}", + "schedule": "@every 1m", + "repeats": 5 }' ``` - ## Get job data Get a job from its name. @@ -137,10 +130,7 @@ $ curl -X GET http://localhost:3500/v1.0-alpha1/jobs/jobforjabba -H "Content-Typ "name": "jobforjabba", "schedule": "@every 1m", "repeats": 5, - "data": { - "@type": "type.googleapis.com/google.protobuf.StringValue", - "value": "Running spice" - } + "data": 123 } ``` ## Delete a job diff --git a/daprdocs/content/en/reference/arguments-annotations-overview.md b/daprdocs/content/en/reference/arguments-annotations-overview.md index 6a0c2f60b66..a630519b79d 100644 --- a/daprdocs/content/en/reference/arguments-annotations-overview.md +++ b/daprdocs/content/en/reference/arguments-annotations-overview.md @@ -32,11 +32,11 @@ This table is meant to help users understand the equivalent options for running | `--log-as-json` | not supported | | `dapr.io/log-as-json` | Setting this parameter to `true` outputs [logs in JSON format]({{< ref logs >}}). Default is `false` | | `--log-level` | `--log-level` | | `dapr.io/log-level` | Sets the [log level]({{< ref logs-troubleshooting >}}) for the Dapr sidecar. Allowed values are `debug`, `info`, `warn`, `error`. Default is `info` | | `--enable-api-logging` | `--enable-api-logging` | | `dapr.io/enable-api-logging` | [Enables API logging]({{< ref "api-logs-troubleshooting.md#configuring-api-logging-in-kubernetes" >}}) for the Dapr sidecar | -| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`| +| `--app-max-concurrency` | `--app-max-concurrency` | | `dapr.io/app-max-concurrency` | Limit the [concurrency of your application]({{< ref "control-concurrency.md#setting-app-max-concurrency" >}}). A valid value is any number larger than `0`. Default value: `-1`, meaning no concurrency limit. | | `--metrics-port` | `--metrics-port` | | `dapr.io/metrics-port` | Sets the port for the sidecar metrics server. Default is `9090` | | `--mode` | not supported | | not supported | Runtime hosting option mode for Dapr, either `"standalone"` or `"kubernetes"` (default `"standalone"`). 
[Learn more.]({{< ref hosting >}}) | -| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Placement server. This can be used when there are no actors running in the sidecar. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` | -| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers. When no annotation is set, the default value is set by the Sidecar Injector. When the annotation is set and the value is empty, the sidecar does not connect to Scheduler server. When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` | +| `--placement-host-address` | `--placement-host-address` | | `dapr.io/placement-host-address` | Comma separated list of addresses for Dapr Actor Placement servers.

When no annotation is set, the default value is set by the Sidecar Injector.

When the annotation is set and the value is a single space (`' '`) or empty, the sidecar does not connect to the Placement server. This can be used when there are no actors running in the sidecar.

When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50057,127.0.0.1:50058` | +| `--scheduler-host-address` | `--scheduler-host-address` | | `dapr.io/scheduler-host-address` | Comma separated list of addresses for Dapr Scheduler servers.

When no annotation is set, the default value is set by the Sidecar Injector.

When the annotation is set and the value is a single space (`' '`) or empty, the sidecar does not connect to the Scheduler server.

When the annotation is set and the value is not empty, the sidecar connects to the configured address. For example: `127.0.0.1:50055,127.0.0.1:50056` | | `--actors-service` | not supported | | not supported | Configuration for the service that offers actor placement information. The format is `<name>:<address>`. For example, setting this value to `placement:127.0.0.1:50057,127.0.0.1:50058` is an alternative to using the `--placement-host-address` flag. | | `--reminders-service` | not supported | | not supported | Configuration for the service that enables actor reminders. The format is `<name>[:<address>]`. Currently, the only supported value is `"default"` (which is also the default value), which uses the built-in reminders subsystem in the Dapr sidecar. | | `--profiling-port` | `--profiling-port` | | not supported | The port for the profile server (default `7777`) | diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md index addfba98a8c..413e1893fe6 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/kafka.md @@ -63,6 +63,8 @@ spec: value: true - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. value: 5m + - name: escapeHeaders # Optional. + value: false ``` ## Spec metadata fields @@ -99,6 +101,7 @@ spec: | `consumerFetchDefault` | N | Input/Output | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | | `heartbeatInterval` | N | Input | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to `"3s"`. | `"5s"` | | `sessionTimeout` | N | Input | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to `"10s"`. | `"20s"` | +| `escapeHeaders` | N | Input | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. | `true` | #### Note The metadata `version` must be set to `1.0.0` when using Azure EventHubs with Kafka. diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md index 7d8f4104b46..97617eb3eb3 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/postgresql.md @@ -56,23 +56,27 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post ### Authenticate using AWS IAM Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. -The user specified in the connection string must be an AWS IAM enabled user granted the `rds_iam` database role. +The user specified in the connection string must be an existing user in the database who is also an AWS IAM-enabled user granted the `rds_iam` database role. Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided. The AWS authentication token will be dynamically rotated before it's expiration time with AWS. | Field | Required | Details | Example | |--------|:--------:|---------|---------| -| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` -| `accessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` -| `secretKey` | Y | The secret key associated with the access key. 
| `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` +| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | +| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, corresponding to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. The connection string must not contain a password. Note that with AWS, the database name field is denoted by `dbname`. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | ### Other metadata options -| Field | Required | Binding support |Details | Example | +| Field | Required | Binding support | Details | Example | |--------------------|:--------:|-----|---|---------| -| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` -| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` -| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"` +| `timeout` | N | Output | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` | +| `maxConns` | N | Output | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` | +| `connectionMaxIdleTime` | N | Output | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` | +| `queryExecMode` | N | Output | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferable to use `exec` or `simple_protocol`. | `"simple_protocol"` | ### URL format diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md index 3a9093666a9..4fc8dbb1b47 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/redis.md @@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `"username"` | | `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` | | `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` | +| `clientCert` | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| `clientKey` | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | `failover` | N | Output | Property to enabled failover configuration. Needs sentinalMasterName to be set. Defaults to `"false"` | `"true"`, `"false"` | `sentinelMasterName` | N | Output | The sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"` | `redeliverInterval` | N | Output | The interval between checking for pending messages to redelivery. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"` diff --git a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md index 2de9b95a727..d91818a1d8f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md +++ b/daprdocs/content/en/reference/components-reference/supported-bindings/s3.md @@ -44,6 +44,8 @@ spec: value: "" - name: insecureSSL value: "" + - name: storageClass + value: "" ``` {{% alert title="Warning" color="warning" %}} @@ -65,6 +67,7 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `encodeBase64` | N | Output | Configuration to encode base64 file content before return the content. (In case of opening a file with binary content). `"true"` is the only allowed positive value. Other positive variations like `"True", "1"` are not acceptable. Defaults to `"false"` | `"true"`, `"false"` | | `disableSSL` | N | Output | Allows to connect to non `https://` endpoints. Defaults to `"false"` | `"true"`, `"false"` | | `insecureSSL` | N | Output | When connecting to `https://` endpoints, accepts invalid or self-signed certificates. Defaults to `"false"` | `"true"`, `"false"` | +| `storageClass` | N | Output | The desired storage class for objects during the create operation. [Valid aws storage class types can be found here](https://aws.amazon.com/s3/storage-classes/) | `STANDARD_IA` | {{% alert title="Important" color="warning" %}} When running the Dapr sidecar (daprd) with your application on EKS (AWS Kubernetes), if you're using a node/pod that has already been attached to an IAM policy defining access to AWS resources, you **must not** provide AWS access-key, secret-key, and tokens in the definition of the component spec you're using. 
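To make the warning above concrete, here is a minimal sketch of an S3 binding manifest that relies on the IAM role already attached to the node/pod, so no access-key or secret-key metadata appears. The component name, bucket, and region are hypothetical:

```bash
# Hedged sketch: apply a hypothetical S3 binding component on EKS that relies
# on the attached IAM policy; note the absence of accessKey/secretKey entries.
cat <<'EOF' | kubectl apply -f -
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mybucket
spec:
  type: bindings.aws.s3
  version: v1
  metadata:
  - name: bucket
    value: "my-bucket"
  - name: region
    value: "us-east-1"
  - name: storageClass   # optional; see the spec metadata table above
    value: "STANDARD_IA"
EOF
```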
@@ -165,10 +168,20 @@ To perform a create operation, invoke the AWS S3 binding with a `POST` method an ```json { "operation": "create", - "data": "YOUR_CONTENT" + "data": "YOUR_CONTENT", + "metadata": { + "storageClass": "STANDARD_IA" + } } ``` +For example, you can provide a storage class while using the `create` operation with a Linux curl command: + +```bash +curl -d '{ "operation": "create", "data": "YOUR_BASE_64_CONTENT", "metadata": { "storageClass": "STANDARD_IA" } }' \ +http://localhost:<dapr-port>/v1.0/bindings/<binding-name> +``` + #### Share object with a presigned URL To presign an object with a specified time-to-live, use the `presignTTL` metadata key on a `create` request. diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md index a846b6a2344..29d7859c326 100644 --- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md +++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/postgresql-configuration-store.md @@ -79,11 +79,28 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | +### Authenticate using AWS IAM + +Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. +The user specified in the connection string must be an existing user in the database who is also an AWS IAM-enabled user granted the `rds_iam` database role. +Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided. +The AWS authentication token will be dynamically rotated before its expiration time with AWS. + +| Field | Required | Details | Example | +|--------|:--------:|---------|---------| +| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | +| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, corresponding to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. The connection string must not contain a password. Note that with AWS, the database name field is denoted by `dbname`. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | + ### Other metadata options | Field | Required | Details | Example | |--------------------|:--------:|---------|---------| | `table` | Y | Table name for configuration information, must be lowercased. | `configtable` +| `timeout` | N | Timeout for operations on the database, as a [Go duration](https://pkg.go.dev/time#ParseDuration). Integers are interpreted as number of seconds. Defaults to `20s` | `"30s"`, `30` | | `maxConns` | N | Maximum number of connections pooled by this component. Set to 0 or lower to use the default value, which is the greater of 4 or the number of CPUs. | `"4"` | `connectionMaxIdleTime` | N | Max idle time before unused connections are automatically closed in the connection pool. By default, there's no value and this is left to the database driver to choose. | `"5m"` | `queryExecMode` | N | Controls the default mode for executing queries. By default Dapr uses the extended protocol and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as PGBouncer. In this case it may be preferrable to use `exec` or `simple_protocol`. | `"simple_protocol"` diff --git a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md index caf9d8a4449..28965cb0e7b 100644 --- a/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md +++ b/daprdocs/content/en/reference/components-reference/supported-configuration-stores/redis-configuration-store.md @@ -43,6 +43,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | redisPassword | N | Output | The Redis password | `"password"` | | redisUsername | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above, and have created acl rule correctly. | `"username"` | | enableTLS | N | Output | If the Redis instance supports TLS with public certificates it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` | +| clientCert | N | Output | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | Output | The content of the client private key, used in conjunction with `clientCert` for authentication. 
It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | failover | N | Output | Property to enabled failover configuration. Needs sentinelMasterName to be set. Defaults to `"false"` | `"true"`, `"false"` | sentinelMasterName | N | Output | The Sentinel master name. See [Redis Sentinel Documentation](https://redis.io/docs/reference/sentinel-clients/) | `""`, `"127.0.0.1:6379"` | redisType | N | Output | The type of Redis. There are two valid values, one is `"node"` for single node mode, the other is `"cluster"` for Redis cluster mode. Defaults to `"node"`. | `"cluster"` diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md index ff00c013737..2e2962d6855 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/_index.md @@ -11,6 +11,11 @@ no_list: true The following table lists publish and subscribe brokers supported by the Dapr pub/sub building block. [Learn how to set up different brokers for Dapr publish and subscribe.]({{< ref setup-pubsub.md >}}) +{{% alert title="Pub/sub component retries vs inbound resiliency" color="warning" %}} +Each pub/sub component has its own built-in retry behaviors. Before explicitly applying a [Dapr resiliency policy]({{< ref "policies.md" >}}), make sure you understand the implicit retry policy of the pub/sub component you're using. Instead of overriding these built-in retries, Dapr resiliency augments them, which can cause repetitive clustering of messages. +{{% /alert %}} + + {{< partial "components/description.html" >}} {{< partial "components/pubsub.html" >}} diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md index cafcee537fe..e6091d87e29 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-apache-kafka.md @@ -63,6 +63,8 @@ spec: value: true - name: schemaLatestVersionCacheTTL # Optional. When using Schema Registry Avro serialization/deserialization. The TTL for schema caching when publishing a message with latest schema available. value: 5m + - name: escapeHeaders # Optional. + value: false ``` @@ -112,6 +114,7 @@ spec: | consumerFetchDefault | N | The default number of message bytes to fetch from the broker in each request. Default is `"1048576"` bytes. | `"2097152"` | | heartbeatInterval | N | The interval between heartbeats to the consumer coordinator. At most, the value should be set to a 1/3 of the `sessionTimeout` value. Defaults to "3s". | `"5s"` | | sessionTimeout | N | The timeout used to detect client failures when using Kafka’s group management facility. If the broker fails to receive any heartbeats from the consumer before the expiration of this session timeout, then the consumer is removed and initiates a rebalance. Defaults to "10s". | `"20s"` | +| escapeHeaders | N | Enables URL escaping of the message header values received by the consumer. Allows receiving content with special characters that are usually not allowed in HTTP headers. Default is `false`. 
| `true` | The `secretKeyRef` above is referencing a [kubernetes secrets store]({{< ref kubernetes-secret-store.md >}}) to access the tls information. Visit [here]({{< ref setup-secret-store.md >}}) to learn more about how to configure a secret store component. @@ -485,6 +488,39 @@ curl -X POST http://localhost:3500/v1.0/publish/myKafka/myTopic?metadata.correla }' ``` +## Receiving message headers with special characters + +The consumer application may be required to receive message headers that include special characters, which may cause HTTP protocol validation errors. +HTTP header values must follow specifications, making some characters not allowed. [Learn more about the protocols](https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.2). +In this case, you can enable the `escapeHeaders` configuration setting, which uses URL escaping to encode header values on the consumer side. + +{{% alert title="Note" color="primary" %}} +When using this setting, the received message headers are URL escaped, and you need to URL-unescape them to get the original values. +{{% /alert %}} + +Set `escapeHeaders` to `true` to enable URL escaping. + +```yaml +apiVersion: dapr.io/v1alpha1 +kind: Component +metadata: + name: kafka-pubsub-escape-headers +spec: + type: pubsub.kafka + version: v1 + metadata: + - name: brokers # Required. Kafka broker connection setting + value: "dapr-kafka.myapp.svc.cluster.local:9092" + - name: consumerGroup # Optional. Used for input bindings. + value: "group1" + - name: clientID # Optional. Used as client tracing ID by Kafka brokers. + value: "my-dapr-app-id" + - name: authType # Required. + value: "none" + - name: escapeHeaders + value: "true" +``` + ## Avro Schema Registry serialization/deserialization You can configure pub/sub to publish or consume data encoded using [Avro binary serialization](https://avro.apache.org/docs/), leveraging an [Apache Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/) (for example, [Confluent Schema Registry](https://developer.confluent.io/courses/apache-kafka/schema-registry/), [Apicurio](https://www.apicur.io/registry/)). @@ -597,6 +633,7 @@ To run Kafka on Kubernetes, you can use any Kafka operator, such as [Strimzi](ht {{< /tabs >}} + ## Related links - [Basic schema for a Dapr component]({{< ref component-schema >}}) - Read [this guide]({{< ref "howto-publish-subscribe.md##step-1-setup-the-pubsub-component" >}}) for instructions on configuring pub/sub components diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md index 831f6aa7294..cc357b5bc56 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-azure-servicebus-topics.md @@ -83,8 +83,8 @@ The above example uses secrets as plain strings. It is recommended to use a secr | `maxConcurrentHandlers` | N | Defines the maximum number of concurrent message handlers. Default: `0` (unlimited) | `10` | `disableEntityManagement` | N | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"` | `defaultMessageTimeToLiveInSec` | N | Default message time to live, in seconds. Used during subscription creation only. 
| `10` -| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `0` (disabled) | `3600` -| `maxDeliveryCount` | N | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Must be 300s or greater. Default set by server. | `10` +| `autoDeleteOnIdleInSec` | N | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `0` (disabled) | `3600` +| `maxDeliveryCount` | N | Defines the number of attempts the server makes to deliver a message. Used during subscription creation only. Default set by server. | `10` | `lockDurationInSec` | N | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `30` | `minConnectionRecoveryInSec` | N | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `2` | `5` | `maxConnectionRecoveryInSec` | N | Maximum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. After each attempt, the component waits a random number of seconds, increasing every time, between the minimum and the maximum. Default: `300` (5 minutes) | `600` diff --git a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md index 1da2cb8b3c2..387920e7a50 100644 --- a/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md +++ b/daprdocs/content/en/reference/components-reference/supported-pubsub/setup-redis-pubsub.md @@ -45,7 +45,9 @@ The above example uses secrets as plain strings. It is recommended to use a secr | redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"` | consumerID | N | The consumer group ID. | Can be set to string value (such as `"channel1"` in the example above) or string format value (such as `"{podName}"`, etc.). [See all of template tags you can use in your component metadata.]({{< ref "component-schema.md#templated-metadata-values" >}}) | useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` | -| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` +| enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` | +| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | redeliverInterval | N | The interval between checking for pending messages to redeliver. Can use either be Go duration string (for example "ms", "s", "m") or milliseconds number. Defaults to `"60s"`. `"0"` disables redelivery. | `"30s"`, `"5000"` | processingTimeout | N | The amount time that a message must be pending before attempting to redeliver it. Can use either be Go duration string ( for example "ms", "s", "m") or milliseconds number. Defaults to `"15s"`. `"0"` disables redelivery. | `"60s"`, `"600000"` | queueDepth | N | The size of the message queue for processing. Defaults to `"100"`. | `"1000"` diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md index 7026dcc920a..8cec85ad16a 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v1.md @@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | +### Authenticate using AWS IAM + +Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. +The user specified in the connection string must be an existing user in the database who is also an AWS IAM-enabled user granted the `rds_iam` database role. +Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided. +The AWS authentication token will be dynamically rotated before its expiration time with AWS. + +| Field | Required | Details | Example | +|--------|:--------:|---------|---------| +| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | +| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, corresponding to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. The connection string must not contain a password. Note that with AWS, the database name field is denoted by `dbname`. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | + ### Other metadata options | Field | Required | Details | Example | diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md index bcda2558bd2..3223867787f 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-postgresql-v2.md @@ -83,6 +83,22 @@ Authenticating with Microsoft Entra ID is supported with Azure Database for Post | `azureClientId` | N | Client ID (application ID) | `"c7dd251f-811f-…"` | | `azureClientSecret` | N | Client secret (application password) | `"Ecy3X…"` | +### Authenticate using AWS IAM + +Authenticating with AWS IAM is supported with all versions of PostgreSQL type components. +The user specified in the connection string must be an existing user in the database who is also an AWS IAM-enabled user granted the `rds_iam` database role. +Authentication is based on the AWS authentication configuration file, or the AccessKey/SecretKey provided. +The AWS authentication token will be dynamically rotated before its expiration time with AWS. + +| Field | Required | Details | Example | +|--------|:--------:|---------|---------| +| `useAWSIAM` | Y | Must be set to `true` to enable the component to retrieve access tokens from AWS IAM. This authentication method only works with AWS Relational Database Service for PostgreSQL databases. | `"true"` | +| `connectionString` | Y | The connection string for the PostgreSQL database.
This must contain an existing user, corresponding to the name of the user created inside PostgreSQL that maps to the AWS IAM policy. The connection string must not contain a password. Note that with AWS, the database name field is denoted by `dbname`. | `"host=mydb.postgres.database.aws.com user=myapplication port=5432 dbname=my_db sslmode=require"`| +| `awsRegion` | Y | The AWS Region where the AWS Relational Database Service is deployed to. | `"us-east-1"` | +| `awsAccessKey` | Y | AWS access key associated with an IAM account | `"AKIAIOSFODNN7EXAMPLE"` | +| `awsSecretKey` | Y | The secret key associated with the access key | `"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"` | +| `awsSessionToken` | N | AWS session token to use. A session token is only required if you are using temporary security credentials. | `"TOKEN"` | + ### Other metadata options | Field | Required | Details | Example | diff --git a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md index ed6d4118ea4..9b672c6a6dc 100644 --- a/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md +++ b/daprdocs/content/en/reference/components-reference/supported-state-stores/setup-redis.md @@ -32,6 +32,10 @@ spec: value: # Optional. Allowed: true, false. - name: enableTLS value: # Optional. Allowed: true, false. + - name: clientCert + value: # Optional + - name: clientKey + value: # Optional - name: maxRetries value: # Optional - name: maxRetryBackoff @@ -102,6 +106,8 @@ If you wish to use Redis as an actor store, append the following to the yaml. | redisUsername | N | Username for Redis host. Defaults to empty. Make sure your redis server version is 6 or above, and have created acl rule correctly. | `""`, `"default"` | useEntraID | N | Implements EntraID support for Azure Cache for Redis. Before enabling this:
  • The `redisHost` name must be specified in the form of `"server:port"`
  • TLS must be enabled
Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#setup-redis" >}}) | `"true"`, `"false"` | | enableTLS | N | If the Redis instance supports TLS with public certificates, can be configured to be enabled or disabled. Defaults to `"false"` | `"true"`, `"false"` +| clientCert | N | The content of the client certificate, used for Redis instances that require client-side certificates. Must be used with `clientKey` and `enableTLS` must be set to true. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN CERTIFICATE-----\nMIIC..."` | +| clientKey | N | The content of the client private key, used in conjunction with `clientCert` for authentication. It is recommended to use a secret store as described [here]({{< ref component-secrets.md >}}) | `"----BEGIN PRIVATE KEY-----\nMIIE..."` | | maxRetries | N | Maximum number of retries before giving up. Defaults to `3` | `5`, `10` | maxRetryBackoff | N | Maximum backoff between each retry. Defaults to `2` seconds; `"-1"` disables backoff. | `3000000000` | failover | N | Property to enabled failover configuration. Needs sentinelMasterName to be set. The redisHost should be the sentinel host address. See [Redis Sentinel Documentation](https://redis.io/docs/manual/sentinel/). Defaults to `"false"` | `"true"`, `"false"` diff --git a/daprdocs/content/en/reference/resource-specs/component-schema.md b/daprdocs/content/en/reference/resource-specs/component-schema.md index 349ff4923a3..875744c2868 100644 --- a/daprdocs/content/en/reference/resource-specs/component-schema.md +++ b/daprdocs/content/en/reference/resource-specs/component-schema.md @@ -8,27 +8,33 @@ description: "The basic spec for a Dapr component" Dapr defines and registers components using a [resource specifications](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/). All components are defined as a resource and can be applied to any hosting environment where Dapr is running, not just Kubernetes. +Typically, components are restricted to a particular [namespace]({{< ref isolation-concept.md >}}), and their access is restricted through scopes to a particular set of applications. The namespace is either explicit on the component manifest itself, or set by the API server, which derives the namespace from the context when applying to Kubernetes. + +{{% alert title="Note" color="primary" %}} +The exception to this rule is in self-hosted mode, where daprd ingests component resources when the namespace field is omitted. However, the security profile is moot, as daprd has access to the manifest anyway, unlike in Kubernetes. 
+{{% /alert %}} + ## Format ```yaml apiVersion: dapr.io/v1alpha1 kind: Component auth: - secretstore: [SECRET-STORE-NAME] + secretstore: <SECRET-STORE-NAME> metadata: - name: [COMPONENT-NAME] - namespace: [COMPONENT-NAMESPACE] + name: <COMPONENT-NAME> + namespace: <COMPONENT-NAMESPACE> spec: - type: [COMPONENT-TYPE] + type: <COMPONENT-TYPE> version: v1 - initTimeout: [TIMEOUT-DURATION] - ignoreErrors: [BOOLEAN] + initTimeout: <TIMEOUT-DURATION> + ignoreErrors: <BOOLEAN> metadata: - - name: [METADATA-NAME] - value: [METADATA-VALUE] + - name: <METADATA-NAME> + value: <METADATA-VALUE> scopes: - - [APPID] - - [APPID] + - <APPID> + - <APPID> ``` ## Spec fields diff --git a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md index a85a253151c..5e2b8f45d24 100644 --- a/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md +++ b/daprdocs/content/en/reference/resource-specs/httpendpoints-schema.md @@ -10,6 +10,10 @@ aliases: The `HTTPEndpoint` is a Dapr resource that is used to enable the invocation of non-Dapr endpoints from a Dapr application. +{{% alert title="Note" color="primary" %}} +Any HTTPEndpoint resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), and its access can be restricted through scopes to a particular set of applications. +{{% /alert %}} + ## Format ```yaml diff --git a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md index 32888adc753..06733d1d827 100644 --- a/daprdocs/content/en/reference/resource-specs/resiliency-schema.md +++ b/daprdocs/content/en/reference/resource-specs/resiliency-schema.md @@ -8,6 +8,10 @@ description: "The basic spec for a Dapr resiliency resource" The `Resiliency` Dapr resource allows you to define and apply fault tolerance resiliency policies. Resiliency specs are applied when the Dapr sidecar starts. +{{% alert title="Note" color="primary" %}} +Any resiliency resource can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), and its access can be restricted through scopes to a particular set of applications. +{{% /alert %}} + ## Format ```yml diff --git a/daprdocs/content/en/reference/resource-specs/subscription-schema.md b/daprdocs/content/en/reference/resource-specs/subscription-schema.md index bd5fc8263a8..c047fd40f87 100644 --- a/daprdocs/content/en/reference/resource-specs/subscription-schema.md +++ b/daprdocs/content/en/reference/resource-specs/subscription-schema.md @@ -6,7 +6,13 @@ weight: 2000 description: "The basic spec for a Dapr subscription" --- -The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. This guide demonstrates two subscription API versions: +The `Subscription` Dapr resource allows you to subscribe declaratively to a topic using an external component YAML file. + +{{% alert title="Note" color="primary" %}} +Any subscription can be restricted to a particular [namespace]({{< ref isolation-concept.md >}}), and its access can be restricted through scopes to a particular set of applications. 
+{{% /alert %}} + +This guide demonstrates two subscription API versions: - `v2alpha` (default spec) - `v1alpha1` (deprecated) @@ -23,15 +29,15 @@ metadata: spec: topic: # Required routes: # Required - - rules: - - match: - path: + rules: + - match: + path: pubsubname: # Required deadLetterTopic: # Optional bulkSubscribe: # Optional - - enabled: - - maxMessagesCount: - - maxAwaitDurationMs: + enabled: + maxMessagesCount: + maxAwaitDurationMs: scopes: - ``` diff --git a/daprdocs/layouts/shortcodes/dapr-latest-version.html b/daprdocs/layouts/shortcodes/dapr-latest-version.html index c64a87827be..79be5626137 100644 --- a/daprdocs/layouts/shortcodes/dapr-latest-version.html +++ b/daprdocs/layouts/shortcodes/dapr-latest-version.html @@ -1 +1 @@ -{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.0{{ else if .Get "cli" }}1.14.0{{ else }}1.14.0{{ end -}} +{{- if .Get "short" }}1.14{{ else if .Get "long" }}1.14.4{{ else if .Get "cli" }}1.14.1{{ else }}1.14.1{{ end -}} diff --git a/daprdocs/static/images/prometheus-service-discovery.png b/daprdocs/static/images/prometheus-service-discovery.png new file mode 100644 index 00000000000..34acfcadbb6 Binary files /dev/null and b/daprdocs/static/images/prometheus-service-discovery.png differ diff --git a/daprdocs/static/images/prometheus-web-ui.png b/daprdocs/static/images/prometheus-web-ui.png new file mode 100644 index 00000000000..f6b82e9037f Binary files /dev/null and b/daprdocs/static/images/prometheus-web-ui.png differ diff --git a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip index 1ccec7c23c0..985bf939f98 100644 Binary files a/daprdocs/static/presentations/dapr-slidedeck.pptx.zip and b/daprdocs/static/presentations/dapr-slidedeck.pptx.zip differ
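Looping back to the subscription schema above: for a concrete instance of the v2alpha layout, here is a minimal sketch, assuming the `dapr.io/v2alpha1` apiVersion; the subscription name, topic, pub/sub component, CEL match expression, route path, and scope are all hypothetical:

```bash
# Hedged sketch: apply a hypothetical declarative subscription that routes
# matching events to the /orders endpoint of the "checkout" app.
cat <<'EOF' | kubectl apply -f -
apiVersion: dapr.io/v2alpha1
kind: Subscription
metadata:
  name: order-subscription
spec:
  topic: orders
  routes:
    rules:
    - match: event.type == "order"
      path: /orders
  pubsubname: orderpubsub
scopes:
- checkout
EOF
```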