
Add support for downstream labeled-response #40

Open: delthas wants to merge 1 commit into master
Conversation


delthas (Collaborator) commented May 2, 2022

This adds support for downstream labeled response. Instead of doing
specific processing on each SendMessage call site, the idea is to add
some logic to SendMessage.

The cap is advertised only in single-upstream mode when the upstream
server supports it too.


When handling a downstream message, we store the label of the current
message we're processing in the downstream struct. (This should be
understood as data whose lifetime is limited to the processing of one
message; it is reset when we're done handling the message.)

During the handling of that downstream message, at any point, when we
send out messages to this downstream:

  • if we're not handling a particular label, send it as is
  • else if this is the first message we're sending, store/buffer it
  • else, open a batch, send/flush the pending message and our message in
    that batch.

Then, at the end of the processing:

  • if we're not handling a particular label, do nothing
  • if we have a pending message, send it without a batch
  • if we had opened a batch, close it
  • if we didn't send anything, send an ACK

This enables us to always send the minimum number of messages.

  • 2+ messages generate a BATCH
  • 1 message generates a message with the label
  • 0 messages generate an ACK with the label
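The buffering rules above can be sketched as a tiny state machine on the
downstream connection. This is a hypothetical, minimal model: the field
names, the fixed batch ID `b`, and string-based messages are illustrative,
not soju's actual API.

```go
package main

import "fmt"

// downstreamConn is a hypothetical, minimal stand-in for soju's downstream
// connection: messages are plain strings and the batch ID is fixed to "b".
type downstreamConn struct {
	label   string   // label of the message currently being handled ("" if none)
	pending string   // first labeled reply, buffered until we know if more follow
	batch   bool     // whether a labeled BATCH has been opened
	out     []string // lines that would be written to the client
}

// send implements the per-message rules: no label -> send as-is; first
// labeled reply -> buffer it; second labeled reply -> open a batch, flush
// the buffered reply, and send both inside the batch.
func (dc *downstreamConn) send(msg string) {
	switch {
	case dc.label == "":
		dc.out = append(dc.out, msg)
	case !dc.batch && dc.pending == "":
		dc.pending = msg
	case !dc.batch:
		dc.out = append(dc.out,
			"@label="+dc.label+" BATCH +b labeled-response",
			"@batch=b "+dc.pending,
			"@batch=b "+msg)
		dc.pending = ""
		dc.batch = true
	default:
		dc.out = append(dc.out, "@batch=b "+msg)
	}
}

// endHandling implements the end-of-processing rules: flush a lone buffered
// reply with the label, close an open batch, or send an empty ACK.
func (dc *downstreamConn) endHandling() {
	switch {
	case dc.label == "":
	case dc.pending != "":
		dc.out = append(dc.out, "@label="+dc.label+" "+dc.pending)
	case dc.batch:
		dc.out = append(dc.out, "BATCH -b")
	default:
		dc.out = append(dc.out, "@label="+dc.label+" ACK")
	}
	dc.label, dc.pending, dc.batch = "", "", false
}

func main() {
	dc := &downstreamConn{label: "foo"}
	dc.send("REPLY")
	dc.endHandling()
	fmt.Println(dc.out) // a single reply goes out directly with the label
}
```

With two or more replies, the same calls open the batch lazily, so the
client never sees a one-message BATCH.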

While we may send messages to many upstreams and downstreams, obviously
only the downstream from which the message we're processing was received
is affected.


A special case affects the above behavior, which makes it differ from a
typical server implementation. Soju handles many commands by forwarding
the command to the server, ending the downstream message handling
without responding anything, then forwarding the server response later
when we receive it.

In this case we must not send an empty ACK at the end of our routine;
instead we send nothing, copy the message label onto the message we're
sending to the server, and then, when we get the response (or batches)
from the server, forward that to the downstream.

Examples:

dc->soju: @label=foo BAR
soju->uc: @label=foo BAR
uc->soju: @label=foo BAZ
soju->dc: @label=foo BAZ

dc->soju: @label=foo BAR
soju->uc: @label=foo BAR
uc->soju: @label=foo BATCH +b labeled-response
soju->dc: @label=foo BATCH +b labeled-response
uc->soju: @batch=b BAZ
soju->dc: @batch=b BAZ
uc->soju: BATCH -b
soju->dc: BATCH -b

(This is a bit more complicated because we wrap downstream labels with
our own labels.)

The patch therefore adds a DeferredResponse method to downstreamConn,
which tells it not to send an empty ACK at the end of the routine,
because a proper response will be sent later.
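A minimal sketch of what such a deferral could look like, on a toy model of
the connection. The `deferred` and `sent` fields and this exact behavior
are assumptions for illustration, not soju's actual code.

```go
package main

import "fmt"

// Hypothetical minimal model: only the state needed to show how a deferred
// response suppresses the empty ACK. Field names are illustrative.
type downstreamConn struct {
	label    string // label of the command currently being handled
	sent     bool   // whether any reply was sent for this command
	deferred bool   // whether the real response will come from the upstream later
	out      []string
}

// DeferredResponse records that the upstream will answer this labeled
// command later, so endHandling must not emit an empty ACK for it.
func (dc *downstreamConn) DeferredResponse() {
	dc.deferred = true
}

// endHandling sends the empty ACK only when the command was labeled,
// nothing was sent, and the response was not deferred.
func (dc *downstreamConn) endHandling() {
	if dc.label != "" && !dc.sent && !dc.deferred {
		dc.out = append(dc.out, "@label="+dc.label+" ACK")
	}
	dc.label, dc.sent, dc.deferred = "", false, false
}

func main() {
	dc := &downstreamConn{label: "foo"}
	dc.DeferredResponse() // e.g. the command was forwarded to the upstream
	dc.endHandling()
	fmt.Println(len(dc.out)) // prints 0
}
```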


When handling upstream messages, we extract the downstream label from
the server labeled response, if it exists. We also extract the
downstream batch tag if it corresponds to a label. We store that info in
the downstreamConn, much like the downstream message handling.

We forward BATCH + and BATCH - as is, even keeping the same batch IDs.

This way, we also support cases where the server responds with a single
message but our logic turns that into multiple messages, or no messages.
(We'd buffer the first message, open a batch, send the messages, close
it etc. just like the downstream message case.)
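The tag extraction on the upstream side can be sketched as follows. The
tag parser below is a toy helper for this sketch only (soju uses a real
IRC message parser), and `labelForBatch` is a hypothetical map from open
upstream batch IDs back to downstream labels.

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags is a toy IRCv3 tag parser, sufficient for this sketch only:
// it splits the leading "@k1=v1;k2=v2" prefix into a map.
func parseTags(line string) map[string]string {
	tags := map[string]string{}
	if !strings.HasPrefix(line, "@") {
		return tags
	}
	raw, _, _ := strings.Cut(line[1:], " ")
	for _, t := range strings.Split(raw, ";") {
		k, v, _ := strings.Cut(t, "=")
		tags[k] = v
	}
	return tags
}

// downstreamLabel returns the label to attach when forwarding an upstream
// message to the downstream: either the message's own label tag, or the
// label previously recorded for its batch tag. labelForBatch maps open
// upstream batch IDs (from "BATCH +id") to downstream labels.
func downstreamLabel(line string, labelForBatch map[string]string) (string, bool) {
	tags := parseTags(line)
	if l, ok := tags["label"]; ok {
		return l, true
	}
	if b, ok := tags["batch"]; ok {
		l, ok := labelForBatch[b]
		return l, ok
	}
	return "", false
}

func main() {
	batches := map[string]string{"b": "foo"}
	l, ok := downstreamLabel("@batch=b BAZ", batches)
	fmt.Println(l, ok) // prints: foo true
}
```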


Some cases are a bit more difficult to implement and not really useful
(and also not mandatory according to the spec, since labeled responses
are best-effort). Those are left as TODOs for now.

delthas requested a review from emersion May 2, 2022 19:57

delthas commented May 2, 2022

I figured I'd open an MR rather than a patch since this is somewhat large and may take multiple roundtrips.


delthas commented Mar 8, 2023

Updated.

This adds support for downstream labeled response. Instead of doing
specific processing on each SendMessage call site, the idea is to add
some logic to SendMessage.

The cap is advertised only when the upstream server supports it too.

----

When handling a downstream message, we store the label of the current
message we're processing in the downstream struct. (This should be
understood as data whose lifetime is limited to the processing of one
message; it is reset when we're done handling the message.)

During the handling of that downstream message, at any point, when we
send out messages to this downstream:
- if we're not handling a particular label, send it as is
- else if this is the first message we're sending, store/buffer it
- else, open a batch, send/flush the pending message and our message in
  that batch.

Then, at the end of the processing:
- if we're not handling a particular label, do nothing
- if we have a pending message, send it without a batch
- if we had opened a batch, close it
- if we didn't send anything, send an ACK

This enables us to always send the minimum number of messages.
- 2+ messages generate a BATCH
- 1 message generates a message with the label
- 0 messages generate an ACK with the label

While we may send messages to many downstreams, obviously only the
downstream from which the message we're processing was received is
affected.

----

A special case affects the above behavior, which makes it differ from a
typical server implementation. Soju handles many commands by forwarding
the command to the server, ending the downstream message handling
without responding anything, then forwarding the server response later
when we receive it.

In this case we must not send an empty ACK at the end of our routine;
instead we send nothing, copy the message label onto the message we're
sending to the server, and then, when we get the response (or batches)
from the server, forward that to the downstream.

Examples:

    dc->soju: @label=foo BAR
    soju->uc: @label=foo BAR
    uc->soju: @label=foo BAZ
    soju->dc: @label=foo BAZ

    dc->soju: @label=foo BAR
    soju->uc: @label=foo BAR
    uc->soju: @label=foo BATCH +b labeled-response
    soju->dc: @label=foo BATCH +b labeled-response
    uc->soju: @batch=b BAZ
    soju->dc: @batch=b BAZ
    uc->soju: BATCH -b
    soju->dc: BATCH -b

(This is a bit more complicated because we wrap downstream labels with
our own labels.)
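The label wrapping mentioned above can be sketched like this. The
`dc-<id>-<label>` encoding is invented for the sketch (soju's actual
scheme differs), but it shows how the downstream's label survives the
round trip through the upstream.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// wrapLabel builds the label sent upstream, embedding the downstream
// connection ID and the client's original label. The "dc-<id>-<label>"
// encoding is illustrative only.
func wrapLabel(dcID int, dcLabel string) string {
	return fmt.Sprintf("dc-%d-%s", dcID, dcLabel)
}

// unwrapLabel recovers the downstream ID and original label from a labeled
// response received from the upstream; ok is false for labels we did not
// issue ourselves.
func unwrapLabel(upLabel string) (dcID int, dcLabel string, ok bool) {
	rest, found := strings.CutPrefix(upLabel, "dc-")
	if !found {
		return 0, "", false
	}
	idStr, label, found := strings.Cut(rest, "-")
	if !found {
		return 0, "", false
	}
	id, err := strconv.Atoi(idStr)
	if err != nil {
		return 0, "", false
	}
	return id, label, true
}

func main() {
	up := wrapLabel(42, "foo")
	id, label, ok := unwrapLabel(up)
	fmt.Println(up, id, label, ok) // prints: dc-42-foo 42 foo true
}
```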

The patch therefore adds a DeferredResponse method to downstreamConn,
which tells it not to send an empty ACK at the end of the routine,
because a proper response will be sent later.

----

When handling upstream messages, we extract the downstream label from
the server labeled response, if it exists. We also extract the
downstream batch tag if it corresponds to a label. We store that info in
the downstreamConn, much like the downstream message handling.

We forward BATCH + and BATCH - as is, even keeping the same batch IDs.

This way, we also support cases where the server responds with a single
message but our logic turns that into multiple messages, or no messages.
(We'd buffer the first message, open a batch, send the messages, close
it etc. just like the downstream message case.)

----

Some cases are a bit more difficult to implement and not really useful
(and also not mandatory according to the spec, since labeled responses
are best-effort). Those are left as TODOs for now.

delthas commented Jul 24, 2024

Updated.


delthas commented Jul 24, 2024

BTW I have this running in production on my soju (+ with labeled-response enabled in senpai) (wrt checking stability 😋)
