Background

Token connectors allow sending tokens between different chains. These connectors work by locking or burning the tokens on one side, emitting a provable event of this action, and consuming that event on the other side of the bridge to mint or unlock the tokens.

To avoid these events being consumed multiple times, the hash of each consumed event is currently stored forever.
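The status quo can be sketched as a set of consumed event hashes that only ever grows. This is an illustrative sketch, not the connectors' actual code; SHA-256 is a stand-in for whatever hash function the real connectors use.

```python
import hashlib

class HashSetMinter:
    """Current approach: remember the hash of every consumed event forever."""

    def __init__(self):
        self.used = set()

    def consume(self, event_payload: bytes) -> bool:
        # 32 bytes of storage per event, kept forever.
        h = hashlib.sha256(event_payload).digest()
        if h in self.used:
            return False  # replay: this event was already consumed
        self.used.add(h)
        return True
```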
Proposal
Instead of storing the hash of each emitted event, we will use the following pattern.

The locking contract keeps track of a value eventNonce: u64 that starts at 0. Whenever an event needs to be emitted, the eventNonce is appended to the event and the nonce is increased by 1. This means that all events are enumerated starting from 0, using consecutive numbers, in a safe and reliable way.
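The locking side can be sketched as follows. The Locker name and dict-shaped payload are illustrative assumptions, not from the original design; only the consecutive-nonce behavior is the point.

```python
class Locker:
    """Locking side: every emitted event carries a consecutive nonce."""

    def __init__(self):
        self.event_nonce = 0  # eventNonce: u64, starts at 0

    def emit_event(self, payload: dict) -> dict:
        event = dict(payload, nonce=self.event_nonce)
        self.event_nonce += 1  # consecutive enumeration: 0, 1, 2, ...
        return event
```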
The minter contract can do the following:
Proposal 1)
Instead of storing the hash of each event, it can store the nonce. This alone reduces the storage required per event by 4x (an 8-byte nonce instead of a 32-byte hash).
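Proposal 1) could look like this minimal sketch (the NonceSetMinter name is illustrative): the structure is identical to the hash-set approach, but each stored entry is an 8-byte nonce rather than a 32-byte hash.

```python
class NonceSetMinter:
    """Proposal 1: store used nonces instead of event hashes."""

    def __init__(self):
        self.used_nonces = set()

    def consume(self, nonce: int) -> bool:
        if nonce in self.used_nonces:
            return False  # replay: this nonce was already claimed
        self.used_nonces.add(nonce)  # 8 bytes per entry instead of 32
        return True
```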
Proposal 2)
Store used events in a bitmap. Since we don't want to keep an ever-growing bitmap, we group the bits in batches of size 128. The best batch size is TBD: if it is too small, this degenerates to proposal 1; if it is too large, loading a batch can be too expensive. The bit associated with each event can be computed as `batch[event.nonce / BATCH_SIZE] >> (event.nonce % BATCH_SIZE) & 1`.
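The batch lookup and update above can be sketched as follows, using a plain dict as a stand-in for the on-chain batch map:

```python
BATCH_SIZE = 128

def is_used(batches: dict, nonce: int) -> bool:
    # batch[event.nonce / BATCH_SIZE] >> (event.nonce % BATCH_SIZE) & 1
    return (batches.get(nonce // BATCH_SIZE, 0) >> (nonce % BATCH_SIZE)) & 1 == 1

def mark_used(batches: dict, nonce: int) -> None:
    key = nonce // BATCH_SIZE
    batches[key] = batches.get(key, 0) | (1 << (nonce % BATCH_SIZE))
```

Note that nonces 0..127 share one 128-bit batch word, nonce 128 starts the next one, and so on.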
Proposal 3)
An extension of proposal 2) where we remove a batch from storage once all of its nonces have been claimed. To make sure they are not used again in the future, we keep track, in a separate variable batch_key_head, of the lowest batch key that doesn't yet have all its nonces claimed.

There are several advantages to this method; even proposals 1) and 2) represent a huge win over the current status quo. With proposal 3) we provide a mechanism to keep the storage used "constant", under the assumption that every event will be claimed within a reasonable time.
Complexity
The implementation is straightforward, with little room for mistakes. Here is a sketch of the code:
```python
class Minter:
    # batch_key_head: u128
    # batches: Map[u128 -> u128]

    def __init__(self):
        self.batch_key_head = 0
        self.batches = {}

    def exist(self, event_nonce):
        key = event_nonce // 128
        return key < self.batch_key_head or \
            (self.batches.get(key, 0) >> (event_nonce % 128)) & 1 == 1

    def insert(self, event_nonce):
        # Set the bit for this nonce to 1
        key = event_nonce // 128
        self.batches[key] = self.batches.get(key, 0) | (1 << (event_nonce % 128))

        # Usually this loop will run only 1 cycle, but to prevent
        # unexpected situations, it can be truncated to X (10) cycles.
        while self.batches.get(self.batch_key_head, 0) + 1 == 1 << 128:
            del self.batches[self.batch_key_head]
            self.batch_key_head += 1
```
Migration
Migrating from the current strategy to this one is not easy, so I suggest we adopt this pattern for future connectors.