Fixed some typos #61

Merged 2 commits on Aug 15, 2023

6 changes: 3 additions & 3 deletions pekko-sample-distributed-workers-scala/README.md
@@ -87,8 +87,8 @@ a `Tick` to start the process again.

If the work is not accepted or there is no response, for example if the message or response got lost, the `FrontEnd` actor backs off a bit and then sends the work again.
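
As an aside, this back-off-and-resend behaviour can be sketched with Pekko Typed timers roughly as follows; the message names, delays, and state split are illustrative assumptions, not the sample's actual `FrontEnd` code:

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.actor.typed.scaladsl.{Behaviors, TimerScheduler}
import scala.concurrent.duration._

// Illustrative sketch only: not the sample's actual FrontEnd implementation.
object FrontEndSketch {
  sealed trait Command
  case object Tick extends Command          // time to produce the next piece of work
  case object WorkAccepted extends Command  // the work manager acknowledged the work
  private case object Retry extends Command // no acknowledgement yet, resend

  def apply(): Behavior[Command] =
    Behaviors.withTimers[Command] { timers =>
      timers.startSingleTimer(Tick, 1.second)
      idle(timers)
    }

  private def idle(timers: TimerScheduler[Command]): Behavior[Command] =
    Behaviors.receiveMessage[Command] {
      case Tick =>
        // send the work to the work manager here (omitted) and wait for the ack
        timers.startSingleTimer(Retry, 3.seconds)
        busy(timers)
      case _ =>
        Behaviors.same
    }

  private def busy(timers: TimerScheduler[Command]): Behavior[Command] =
    Behaviors.receiveMessage[Command] {
      case WorkAccepted =>
        // acknowledged: cancel the pending retry and schedule the next Tick
        timers.cancel(Retry)
        timers.startSingleTimer(Tick, 1.second)
        idle(timers)
      case Retry =>
        // not accepted or the response was lost: back off a bit, then resend (omitted)
        timers.startSingleTimer(Retry, 3.seconds)
        Behaviors.same
      case _ =>
        Behaviors.same
    }
}
```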

- You can see the how the actors on a front-end node is started in the method `Main.start` when the node
- contains the `front-end` role:
+ You can see how actors on a front-end node are started in the method `Main.start` when the node
+ contains the `front-end` role.
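
A minimal sketch of such a role check, assuming the Pekko Typed cluster APIs; `MainSketch` and the placeholder behaviors are illustrative, not the sample's actual `Main.start`:

```scala
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.actor.typed.{ActorSystem, Behavior}
import org.apache.pekko.cluster.typed.Cluster

// Illustrative sketch only: the guardian setup is simplified.
object MainSketch {
  def start(): Unit = {
    val root: Behavior[Nothing] = Behaviors.setup[Nothing] { ctx =>
      val cluster = Cluster(ctx.system)
      if (cluster.selfMember.hasRole("front-end")) {
        // only nodes carrying the "front-end" role spawn the front-end actors
        ctx.spawn(Behaviors.empty[Any], "front-end") // placeholder for the sample's FrontEnd behavior
      }
      Behaviors.empty
    }
    ActorSystem[Nothing](root, "ClusterSystem")
  }
}
```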

### The Work Result Consumer Actor

@@ -166,7 +166,7 @@ strategy:
* Any type of failure -- whether from the network, worker actor, or node -- that prevents a `RegisterWorker`
message from arriving within the `work-timeout` period causes the 'WorkManager' actor to remove the worker from its list.

- When stopping a `Worker` Actor still tries to gracefully remove it self using the `DeRegisterWorker` message,
+ When stopping a `Worker` Actor still tries to gracefully remove itself using the `DeRegisterWorker` message,
but in case of crash it will have no chance to communicate that with the master node.
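
Both paths can be sketched together: each registration refreshes a timestamp, `DeRegisterWorker` removes the entry immediately, and a periodic cleanup drops workers that have not re-registered within `work-timeout`. The sketch below is illustrative; the message shapes, timeout value, and cleanup interval are assumptions, not the sample's actual `WorkManager`:

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import scala.concurrent.duration._

// Illustrative sketch only: the real WorkManager keeps richer worker state.
object WorkManagerSketch {
  sealed trait Command
  final case class RegisterWorker(workerId: String) extends Command
  final case class DeRegisterWorker(workerId: String) extends Command
  private case object CleanupTick extends Command

  private val workTimeout = 10.seconds // assumed value of the `work-timeout` setting

  def apply(): Behavior[Command] =
    Behaviors.withTimers[Command] { timers =>
      timers.startTimerWithFixedDelay(CleanupTick, workTimeout / 2)
      running(Map.empty)
    }

  private def running(lastSeen: Map[String, Long]): Behavior[Command] =
    Behaviors.receiveMessage[Command] {
      case RegisterWorker(id) =>
        // record (or refresh) the worker's registration time
        running(lastSeen + (id -> System.nanoTime()))
      case DeRegisterWorker(id) =>
        // graceful stop: the worker removed itself
        running(lastSeen - id)
      case CleanupTick =>
        // a crashed or unreachable worker never refreshes, so it is dropped after work-timeout
        val now = System.nanoTime()
        running(lastSeen.filter { case (_, t) => (now - t).nanos < workTimeout })
    }
}
```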

Now let's move on to the last piece of the puzzle, the worker nodes.