FFTM new listener type for block notifications and receipt decoding functions #123
Conversation
```diff
@@ -22,6 +22,7 @@ import (
 type BlockInfoByNumberRequest struct {
 	BlockNumber *fftypes.FFBigInt `json:"blockNumber"`
+	AllowCache  bool              `json:"allowCache"`
```
Requires a trivial change to firefly-evmconnect to propagate this to an existing internal boolean.
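For illustration, a minimal self-contained sketch of that propagation pattern; everything except the `AllowCache` field name is an assumption rather than the actual firefly-evmconnect code:

```go
package main

import (
	"context"
	"fmt"
)

// Sketch only: names other than AllowCache are illustrative assumptions.
type BlockInfoByNumberRequest struct {
	BlockNumber int64 `json:"blockNumber"`
	AllowCache  bool  `json:"allowCache"`
}

type connector struct {
	cache map[int64]string // blockNumber -> block hash (stand-in for full block info)
}

// getBlockInfoByNumber stands in for the existing internal function that
// already took an allowCache boolean; the API change just wires it through.
func (c *connector) getBlockInfoByNumber(ctx context.Context, blockNumber int64, allowCache bool) string {
	if allowCache {
		if hash, ok := c.cache[blockNumber]; ok {
			return hash
		}
	}
	return "0x-fetched-from-node" // placeholder for a real JSON-RPC call
}

func (c *connector) BlockInfoByNumber(ctx context.Context, req *BlockInfoByNumberRequest) string {
	return c.getBlockInfoByNumber(ctx, req.BlockNumber, req.AllowCache)
}

func main() {
	c := &connector{cache: map[int64]string{100: "0xabc"}}
	fmt.Println(c.BlockInfoByNumber(context.Background(), &BlockInfoByNumberRequest{BlockNumber: 100, AllowCache: true})) // 0xabc
}
```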
Progress update on this: unit-tested code is complete for integrating a new […]. There's a new […]
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##             main     #123   +/-  ##
========================================
  Coverage   99.98%   99.98%
========================================
  Files          80       82     +2
  Lines        5228     5578   +350
========================================
+ Hits         5227     5577   +350
  Misses          1        1
```

View full report in Codecov by Sentry.
```diff
 }
 
-type TransactionReceiptResponse struct {
+type TransactionReceiptResponseBase struct {
```
As I worked this through in combination with hyperledger/firefly-evmconnect#141, I found I had to make a tweak to the Receipt structure on the FFCAPI interface.
We have a (now slightly ugly looking, sadly) JSON-merging structure on the EventWithContext object, which defines how serialization merges the static and custom parts of the object into a single JSON document. However, this is on apitypes, not ffcapi, where we just have Event - which is one part of what's merged.
Without doing really complex refactoring, I found a compromise where I map between the types in the FFTM layer. The easiest way to do this was adding this here.
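As a hedged illustration of that kind of split (only the two type names come from the diff above; the fields are assumptions), embedding a base struct is what lets the static and custom parts serialize into one merged JSON object:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sketch only: field names are illustrative assumptions.
type TransactionReceiptResponseBase struct {
	BlockNumber uint64 `json:"blockNumber"`
	Success     bool   `json:"success"`
}

// Embedding the base means the marshalled JSON contains both the static
// fields and any additional decoded fields, with no nested sub-object.
type TransactionReceiptResponse struct {
	TransactionReceiptResponseBase
	Events []json.RawMessage `json:"events,omitempty"` // assumption
}

func main() {
	r := TransactionReceiptResponse{
		TransactionReceiptResponseBase: TransactionReceiptResponseBase{BlockNumber: 100, Success: true},
	}
	b, _ := json.Marshal(r)
	fmt.Println(string(b)) // {"blockNumber":100,"success":true}
}
```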
```diff
 req := w.client.R().
 	SetContext(ctx).
 	SetBody(events).
-	SetResult(&resBody).
-	SetError(&resBody)
+	SetDoNotParseResponse(true)
```
Tweaked after seeing this error with a webhook test framework:
FF21042: Webhook request failed: json: cannot unmarshal object into Go value of type []uint8
I've now worked through the E2E in a FF dev stack, using the FFTM+EVMConnect PRs together.
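For context, a sketch of how an unparsed response is consumed with go-resty's SetDoNotParseResponse(true); the function name, URL, and surrounding shape here are assumptions, not the actual FFTM webhook code:

```go
package webhook

import (
	"context"
	"io"

	"github.com/go-resty/resty/v2"
)

// Sketch only: with SetDoNotParseResponse(true), resty no longer tries to
// unmarshal the body into SetResult/SetError targets (the source of the
// "cannot unmarshal object into Go value of type []uint8" error above);
// the caller reads the raw body itself.
func postEvents(ctx context.Context, client *resty.Client, events interface{}) ([]byte, int, error) {
	res, err := client.R().
		SetContext(ctx).
		SetBody(events).
		SetDoNotParseResponse(true).
		Post("/webhook") // illustrative path
	if err != nil {
		return nil, -1, err
	}
	defer res.RawBody().Close()
	body, err := io.ReadAll(res.RawBody())
	return body, res.StatusCode(), err
}
```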
LGTM, one comment around caching for getting the next block.
```sql
DROP INDEX IF EXISTS transactions_status;
COMMIT;
```
Thanks for fixing those.
```go
cbl.stateLock.Unlock()

// Get the next block
nextBlock, err = cbl.bcm.getBlockByNumber(blockNumberToFetch, false, "")
```
@peterbroadhurst I feel this could be too aggressive a way to avoid different block info with the same block number. The number of requests TM sends to a JSON-RPC endpoint affects performance a lot, so we should keep it efficient.
The logic needs to balance between the following choices when there are multiple block listeners in flight:
- listeners should go and revalidate the information of each block from the "fromBlock", even if they are validating the same block at the same time.
- listeners should use cached information of blocks when possible.
I can see the second one won't work without more logic added, as we'll get stuck on a mismatched block parent hash when the cached block is outdated.
However, I do also feel the first one is a bit too aggressive if each listener needs to fetch each block with no cache at all. For example, if we have 10 listeners, then we are emitting 10 getBlockInfo calls to the JSON-RPC endpoint for every single block.
One solution could be to default to using the cache, and stop using it when we detect that the parent hash does not match the one in the unconfirmed previous block. It should be able to switch back to using the cache at some point as well, which probably involves comparing the hash returned without the cache against another recorded hash retrieved with the cache.
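A simplified sketch of that heuristic (all names are assumptions, and the switch-back condition here is simpler than the hash comparison described above): default to the cache, drop to uncached fetches when the parent hash doesn't chain, and resume caching once the chain is consistent again:

```go
package listener

import "context"

// Sketch only: a simplified version of the suggested heuristic.
type blockInfo struct {
	BlockHash  string
	ParentHash string
}

type blockListener struct {
	useCache         bool
	getBlockByNumber func(ctx context.Context, num int64, allowCache bool) (*blockInfo, error)
}

func (l *blockListener) fetchNextBlock(ctx context.Context, num int64, prevHash string) (*blockInfo, error) {
	block, err := l.getBlockByNumber(ctx, num, l.useCache)
	if err != nil {
		return nil, err
	}
	if block.ParentHash != prevHash && l.useCache {
		// Cached entry may be stale after a re-org: retry without the cache.
		l.useCache = false
		return l.getBlockByNumber(ctx, num, false)
	}
	if block.ParentHash == prevHash && !l.useCache {
		// Chain is consistent again: switch back to cached reads.
		l.useCache = true
	}
	return block, nil
}
```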
I agree with all the desires and concerns of things to think about here @Chengxuan - so thank you for raising them.
I completely agree it would be great if we could work out a way to have a cache on this line, but actually I found I had to disable the previously existing cache here, because there's no way I could pre-empt knowing what re-org has happened on the node.
However, please note this only kicks in if the block addition by hash (which is cached) doesn't work. For example, if we're catching up on old blocks, or the block listener wasn't nicely behaved in notifying of blocks in a friendly order (which we go to pains to make true in firefly-evmconnect). If we're just adding hashed blocks to the list as they arrive, none of the listeners will query at all on this line - that's happening on the other goroutine.
The optimization for blocks arriving in order from the chain to multiple listeners happens here (and is cache-enabled, because it's a get-by-hash, which is deterministic to cache):
firefly-transaction-manager/internal/confirmations/confirmed_block_listener.go, lines 196 to 198 in 072aa38:

```go
if idx == 0 || block.ParentHash == cbl.blocksSinceCheckpoint[idx-1].BlockHash {
	log.L(cbl.ctx).Debugf("Notification of block %d/%s after block %d/%s", block.BlockNumber, block.BlockHash, existingBlock.BlockNumber, existingBlock.BlockHash)
	cbl.blocksSinceCheckpoint = append(cbl.blocksSinceCheckpoint[0:idx], block)
```
Oh, just a minute. That comment of `// This block fits, and add this on the end` reflects what I described above, but the actual code in the notification path is only dealing with the re-org case.
Sorry - I must have forgotten to come back to that code to put in the add-to-end case after I did some work.
I'll complete that, and then the code will work as I describe in my comment here.
You have also made me think of another optimization - the dispatcher (or "catch-up" routine) is always asking for blocks until it gets an error. This means that in the caught-up notification cases, it's always doing an inefficient extra JSON/RPC call just to find out there are no new blocks.
If I maintain a "last notification height" on the notification listener (or similar), then we can optimize such that if the dispatch is caught up with the head of the chain, there's no immediate re-poll necessary.
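A rough sketch of that "last notification height" idea (names are assumptions, not the actual FFTM code): the notification routine records the highest height it has seen, and the dispatcher skips the speculative re-poll once it has caught up to it:

```go
package confirmations

import "sync"

// Sketch only: illustrative names and fields.
type confirmedBlockListener struct {
	stateLock             sync.Mutex
	highestNotifiedHeight uint64 // updated by the block-notification routine
	lastDispatchedHeight  uint64 // updated by the dispatcher/catch-up routine
}

// caughtUp reports whether the dispatcher has reached the head of the chain
// as far as notifications have told us, in which case an immediate re-poll
// by number would just be an extra JSON/RPC call that finds no new blocks.
func (cbl *confirmedBlockListener) caughtUp() bool {
	cbl.stateLock.Lock()
	defer cbl.stateLock.Unlock()
	return cbl.lastDispatchedHeight >= cbl.highestNotifiedHeight
}
```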
I'll do both of these and request a re-review.
Ok - that was harder than I'd thought! The logic got significantly more complex, to deal with the locking and avoid a situation where any Go channel gets unnecessarily blocked.
However, I believe I now have it so that in the case where the listener is keeping up, there will only be cached `eth_getBlockByHash` calls, and no `eth_getBlockByNumber` calls.
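A sketch of that caught-up steady state, continuing the `caughtUp()` sketch above (all names remain assumptions): when caught up, the dispatcher blocks on the notification channel rather than polling by number, so only the deterministic, cacheable get-by-hash lookups are issued:

```go
package confirmations

// Sketch only: stubs standing in for the real block handling.
func (cbl *confirmedBlockListener) handleNewBlockHash(blockHash string) {
	// eth_getBlockByHash is deterministic, so this lookup can be served from
	// the cache even with many listeners in flight. (Stub for illustration.)
	_ = blockHash
}

func (cbl *confirmedBlockListener) catchUpOneBlock() {
	// Placeholder for an uncached eth_getBlockByNumber fetch while catching up.
}

func (cbl *confirmedBlockListener) dispatchLoop(newBlockHashes <-chan string, done <-chan struct{}) {
	for {
		if cbl.caughtUp() {
			// Caught up: wait for a notification instead of issuing a
			// speculative eth_getBlockByNumber poll that would find nothing.
			select {
			case blockHash := <-newBlockHashes:
				cbl.handleNewBlockHash(blockHash)
			case <-done:
				return
			}
			continue
		}
		cbl.catchUpOneBlock()
	}
}
```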
Looks great. Thanks @peterbroadhurst
The FFTM basis for building up to the goals in FIR-18.
The FireFly core spelling that would expose this would be the next step on FIR-18, but I believe this PR can merge in isolation, as it provides foundations in FFTM that are useful in their own right.
The core proposal is to provide a reliable streaming interface for historical and current blocks, with the required number of confirmations, building on all the previous complex code for block-listening already built in EVMConnect etc.
There is definite code overlap with https://github.com/hyperledger/firefly-evmconnect/blob/8b0c3b46915aa8aab001a34bc03030d5ad44a381/internal/ethereum/blocklistener.go#L92 in handling confirmations, but after much back and forth I believe my proposal allows the two bits of work to operate in concert.
Why blocks?
Because blocks are the only thing a chain can definitively and efficiently give you as a stream, on which to base a programming model for receipts. Receipts are a side-effect of block confirmation, so if you want receipts "in order", it's actually blocks you need to listen for.
Progress/plan in this PR
- events (aka logs) and errors
Also see: