Conversation
From my perspective, the key differences boil down to three things:

**1. State granularity**
- My PR (#6084): I modelled every handshake step as its own `State` variant (e.g. `ReadClosedNeedFinAck`, `WriteSentFinWaitingForAck`), so each transition is explicit and exhaustively tested.
- #5687: They collapsed "waiting for ACK" into flags on a few core states, cutting the total variants and match-arm boilerplate in half.

**2. ACK injection & driving reads**
- Mine: I check `state.needs_fin_ack()` at the top of every `poll_*` and piggy-back the `FIN_ACK` whenever the next op arrives. Lean, but it relies on the caller continuing to call `read` or `write`.
- #5687: They only inject in response to a `FIN` in the read loop (and once in `poll_close`), and in `poll_close` they loop reads up to a 10 s deadline, so a simple `close().await` still pulls in the peer's ACK.

**3. Timeout handling**
- Mine: I let the close barrier error out if no ACK ever arrives, without tracking real time.
- #5687: They store a 10 s `fin_ack_deadline`, log a warning on expiry, and force-close, making the timeout behaviour deterministic.

In short, I went for a fully explicit, state-machine-first design; #5687 trades some state complexity for built-in timeout logic and a lightweight read loop in `poll_close`.
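To make the comparison concrete, here is a minimal sketch of the two ideas side by side. The variant names `ReadClosedNeedFinAck` and `WriteSentFinWaitingForAck` and the `needs_fin_ack()` / `fin_ack_deadline` names come from the comparison above; everything else (the enum shape, the helper signatures) is illustrative, not the actual PR code.

```rust
use std::time::{Duration, Instant};

/// Simplified model of the explicit-state design in #6084:
/// each handshake step is its own variant.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum State {
    Open,
    ReadClosedNeedFinAck,      // peer sent FIN; we still owe a FIN_ACK
    WriteSentFinWaitingForAck, // we sent FIN; waiting for the peer's FIN_ACK
    Closed,
}

impl State {
    /// The check each `poll_*` performs first in the #6084 design:
    /// if we owe the peer a FIN_ACK, piggy-back it on the next operation.
    fn needs_fin_ack(self) -> bool {
        matches!(self, State::ReadClosedNeedFinAck)
    }
}

/// #5687-style timeout: once the stored 10 s deadline has passed,
/// stop waiting for the ACK and force-close.
fn should_force_close(fin_ack_deadline: Instant, now: Instant) -> bool {
    now >= fin_ack_deadline
}

fn main() {
    assert!(State::ReadClosedNeedFinAck.needs_fin_ack());
    assert!(!State::Open.needs_fin_ack());
    assert!(!State::WriteSentFinWaitingForAck.needs_fin_ack());

    // A deadline set 10 s in the future has not expired yet...
    let now = Instant::now();
    assert!(!should_force_close(now + Duration::from_secs(10), now));
    // ...but checking again after the deadline triggers the force-close.
    assert!(should_force_close(now, now + Duration::from_secs(11)));
}
```

The trade-off is visible even in this toy form: the explicit enum makes every transition a distinct, matchable case, while the deadline helper is what buys #5687 its deterministic timeout behaviour.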
Description
ref #4600
This may not be the ideal solution; suggestions and comments are always welcome :)
Changes:
- Add a `FIN_ACK = 3` flag to the WebRTC protobuf message definition
- Send `FIN_ACK` upon receiving `FIN`
- Wait for `FIN_ACK` before considering the write-half closed
- Tests for `FIN_ACK` handshake scenarios

Behaviour:

- Receive `FIN` → automatically send `FIN_ACK`
- Send `FIN` → wait for `FIN_ACK` before closing the write-half

Change checklist
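The two behaviour rules can be sketched as a single dispatch function. This is a hedged illustration, not the PR's code: `FIN_ACK = 3` is from the change list above, while the `FIN` discriminant, the `Flag` enum shape, and the `on_flag` helper are assumptions made for the example.

```rust
/// Illustrative subset of the WebRTC message flags discussed above.
/// `FinAck = 3` is the value added by this PR; `Fin = 0` is assumed here.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Flag {
    Fin = 0,
    FinAck = 3,
}

/// Hypothetical handler for an incoming flag. Returns the flag to send
/// back (if any) and whether our write-half may now be considered closed.
fn on_flag(received: Flag, we_sent_fin: bool) -> (Option<Flag>, bool) {
    match received {
        // Rule 1: receiving FIN -> automatically reply with FIN_ACK.
        Flag::Fin => (Some(Flag::FinAck), false),
        // Rule 2: the write-half only closes once our own FIN is ACKed.
        Flag::FinAck => (None, we_sent_fin),
    }
}

fn main() {
    // Peer closes its write-half: we acknowledge, but stay open for writing.
    assert_eq!(on_flag(Flag::Fin, false), (Some(Flag::FinAck), false));
    // We sent FIN earlier and the ACK arrives: write-half is now closed.
    assert_eq!(on_flag(Flag::FinAck, true), (None, true));
    // A stray FIN_ACK when we never sent FIN does not close anything.
    assert_eq!(on_flag(Flag::FinAck, false), (None, false));
}
```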