[Jobs API] Describe Triggered Job Handling Assumptions #4376

In this example, the trigger time is `@every 1s`, according to the `Schedule`.

At the trigger time, the `prodDBBackupHandler` function is called, executing the desired business logic for this job. For example:

#### HTTP

When you create a job using Dapr's Jobs API, Dapr assumes your application exposes an endpoint at
`/job/<job-name>`. For instance, if you schedule a job named `test`, Dapr expects your application to listen for job
events at `/job/test`. Ensure your application has a handler set up for this endpoint to process the job when it is
triggered. For example:

*Note: The following example is in Go but applies to any programming language.*

```go
func main() {
	...
	http.HandleFunc("/job/", handleJob)
	http.HandleFunc("/job/<job-name>", specificJob)
	...
}

func specificJob(w http.ResponseWriter, r *http.Request) {
	// Handle specific triggered job
}

func handleJob(w http.ResponseWriter, r *http.Request) {
	// Handle the triggered jobs
}
```
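
As a rough illustration of what such a handler might do, the sketch below fills in a possible body for the catch-all `handleJob` handler. It assumes the job's data is delivered in the request body; the payload handling and logging here are illustrative assumptions, not requirements of the API:

```go
import (
	"io"
	"log"
	"net/http"
	"strings"
)

// handleJob is a sketch of a catch-all handler for triggered jobs.
// It assumes the job's data arrives in the request body.
func handleJob(w http.ResponseWriter, r *http.Request) {
	// The job name is the path segment after /job/.
	jobName := strings.TrimPrefix(r.URL.Path, "/job/")

	// Read the payload that was attached to the job when it was scheduled.
	payload, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, "failed to read job payload", http.StatusBadRequest)
		return
	}

	log.Printf("job %q triggered with payload: %s", jobName, payload)

	// A 2xx response acknowledges the triggered job.
	w.WriteHeader(http.StatusOK)
}
```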

#### gRPC

When a job reaches its scheduled trigger time, the triggered job is sent back to the application via the following
callback function:

*Note: The following example is in Go but applies to any programming language with gRPC support.*

```go
import rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
...
func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
	// Handle the triggered job
	return &rtv1.JobEventResponse{}, nil
}
```

This function processes the triggered jobs within the context of your gRPC server. When you set up the server, ensure that
you register the callback server, which will invoke this function when a job is triggered:

```go
...
js := &JobService{}
rtv1.RegisterAppCallbackAlphaServer(server, js)
```

In this setup, you have full control over how triggered jobs are received and processed, as they are routed directly
through this gRPC method.
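
For context, a minimal end-to-end wiring sketch might look like the following. Only `OnJobEventAlpha1` and `RegisterAppCallbackAlphaServer` come from the snippets above; the listener port and the embedded `rtv1.UnimplementedAppCallbackAlphaServer` (used here to stub out the other callback methods) are assumptions for illustration:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	rtv1 "github.com/dapr/dapr/pkg/proto/runtime/v1"
)

// JobService handles triggered jobs. Embedding the generated
// UnimplementedAppCallbackAlphaServer (an assumption about the generated
// code) provides default stubs for the remaining callback methods.
type JobService struct {
	rtv1.UnimplementedAppCallbackAlphaServer
}

func (s *JobService) OnJobEventAlpha1(ctx context.Context, in *rtv1.JobEventRequest) (*rtv1.JobEventResponse, error) {
	// Handle the triggered job, then acknowledge it.
	return &rtv1.JobEventResponse{}, nil
}

func main() {
	// The app port (50001 here) is an assumption; it must match the
	// --app-port passed to dapr run, with --app-protocol grpc.
	lis, err := net.Listen("tcp", ":50001")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	server := grpc.NewServer()
	js := &JobService{}
	rtv1.RegisterAppCallbackAlphaServer(server, js)

	if err := server.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```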

#### SDKs

For SDK users, handling triggered jobs is simpler: when a job is triggered, Dapr automatically routes it to the event
handler you registered during server initialization. For example, in Go, you'd register the event handler like this:

```go
...
if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
	log.Fatalf("failed to register job event handler: %v", err)
}
```
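
For a fuller picture, the registration above typically sits inside the SDK service setup. The sketch below assumes the handler is registered on a service created with the Go SDK's gRPC service wrapper (`daprd.NewService`); the import alias and port are illustrative assumptions:

```go
import (
	"log"

	daprd "github.com/dapr/go-sdk/service/grpc"
)

func main() {
	// Create a Dapr callback service listening on the app port
	// (the port is an assumption; it must match --app-port).
	server, err := daprd.NewService(":50001")
	if err != nil {
		log.Fatalf("failed to create service: %v", err)
	}

	// Register the handler for the scheduled job by name.
	if err = server.AddJobEventHandler("prod-db-backup", prodDBBackupHandler); err != nil {
		log.Fatalf("failed to register job event handler: %v", err)
	}

	if err = server.Start(); err != nil {
		log.Fatalf("failed to start server: %v", err)
	}
}
```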

Dapr takes care of the underlying routing. When the job is triggered, your `prodDBBackupHandler` function is called with
the triggered job data. Here’s an example of handling the triggered job:

```go
// ...

```
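
Continuing the elided handler above, a minimal sketch might look like this. The handler signature and the `common.JobEvent` type (assumed to carry the job's raw data as bytes) are assumptions about the Go SDK, and the `backupJobData` payload shape is purely hypothetical:

```go
import (
	"context"
	"encoding/json"
	"log"

	"github.com/dapr/go-sdk/service/common"
)

// backupJobData is a hypothetical payload shape used only for illustration.
type backupJobData struct {
	DBName         string `json:"db_name"`
	BackupLocation string `json:"backup_location"`
}

func prodDBBackupHandler(ctx context.Context, job *common.JobEvent) error {
	// Decode the data that was attached to the job when it was scheduled.
	var data backupJobData
	if err := json.Unmarshal(job.Data, &data); err != nil {
		return err
	}

	// Run the backup business logic here.
	log.Printf("running backup of %s to %s", data.DBName, data.BackupLocation)
	return nil
}
```
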
## Next steps

- [Learn more about the Scheduler control plane service]({{< ref "concepts/dapr-services/scheduler.md" >}})
- [Jobs API reference]({{< ref jobs_api.md >}})