Conversation

@revitteth (Collaborator)

No description provided.

MorettiGeorgiev and others added 30 commits December 17, 2024 16:25

Merge pull request #1565 from 0xPolygonHermez/fix/ws-http-same-port
* allow the sequencer to always check for new fork information (#1606)

* include all used info tree indexes in the witness
* sequencer to go through stage loop whilst halted

* updates to L1 processing when network is in a halted state
* fix logging of hash in pool for counter overflows

* better handling of single transactions overflowing batches

* better handling of hash logging fix in pool for overflows
 Conflicts:
	zk/hermez_db/db.go
	zk/stages/stage_sequence_execute.go
* 0-tx block handling and moving from tickers to timers (see the sketch after this commit block)

* tweak(ci): stable branches triggers (#1675)

* run ci on stable branches

* tweak(ci): lint docs on stable

---------

Co-authored-by: Max Revitt <max.revitt@gmail.com>
Co-authored-by: Max Revitt <max@revitt.consulting>
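
The ticker-to-timer move mentioned above is easy to get wrong, so here is a minimal sketch (hypothetical names, standard time package) of the pattern: a single timer re-armed only after each block completes, so a slow iteration cannot queue up ticks behind itself the way a ticker would.

```go
package sequencer

import "time"

// sequenceLoop re-arms one timer after each block instead of using a
// ticker, so a slow iteration never has extra ticks waiting behind it.
func sequenceLoop(blockTime time.Duration, buildBlock func()) {
	timer := time.NewTimer(blockTime)
	defer timer.Stop()
	for range timer.C {
		buildBlock()           // may run longer than blockTime
		timer.Reset(blockTime) // re-arm only once the work is done
	}
}
```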
…1681)

* sender lock to help prevent incorrect nonce checks (#1643)

* sender lock to help prevent incorrect nonce checks

helps ensure the pool has finished working with a sent transaction before allowing any nonce checks to be performed. This only applies to the pending nonce; latest won't use this lock.

* tidy up on waiting for sender lock
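
A minimal sketch, with hypothetical names, of the per-sender lock described above: adding a transaction holds the sender's lock while the pool works, and pending-nonce reads wait on the same lock; "latest" reads bypass it entirely.

```go
package pool

import "sync"

type senderLocks struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex // keyed by sender address (hex)
}

func (s *senderLocks) lockFor(sender string) *sync.Mutex {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.locks == nil {
		s.locks = make(map[string]*sync.Mutex)
	}
	l, ok := s.locks[sender]
	if !ok {
		l = &sync.Mutex{}
		s.locks[sender] = l
	}
	return l
}

// addTransaction holds the sender's lock while the pool processes the tx.
func (s *senderLocks) addTransaction(sender string, process func()) {
	l := s.lockFor(sender)
	l.Lock()
	defer l.Unlock()
	process()
}

// pendingNonce waits for any in-flight add for this sender to finish;
// a "latest" nonce read would skip this lock entirely.
func (s *senderLocks) pendingNonce(sender string, read func() uint64) uint64 {
	l := s.lockFor(sender)
	l.Lock()
	defer l.Unlock()
	return read()
}
```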
Add zkevm.l2-datastreamer-use-tls flag
If set to true, a TLS connection will be used.
This also enables SNI (Server Name Indication) support and allows connecting to a datastreamer server behind virtual hosting or a load balancer.

Co-authored-by: Ivan Zubok <93441191+iszubok@users.noreply.github.com>
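
A minimal sketch, not the actual client code, of what the zkevm.l2-datastreamer-use-tls flag implies: dialing with crypto/tls so the handshake carries SNI, which is what lets a virtual host or load balancer route the connection.

```go
package client

import (
	"crypto/tls"
	"net"
)

// dialDatastreamer connects in plain TCP or over TLS. With TLS the
// handshake carries SNI (ServerName below), letting a load balancer or
// virtual host pick the right backend for the connection.
func dialDatastreamer(addr string, useTLS bool) (net.Conn, error) {
	if !useTLS {
		return net.Dial("tcp", addr)
	}
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return nil, err
	}
	return tls.Dial("tcp", addr, &tls.Config{ServerName: host})
}
```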
…1686)

Co-authored-by: Valentin Staykov <79150443+V-Staykov@users.noreply.github.com>
limit

Co-authored-by: Max Revitt <max.revitt@gmail.com>
* fix: 0 tx blocks after limbo

* Add executor-enabled flag.

It disables using the remote executor even when the executor-urls option is filled in, which is helpful when testing limbo mode, where txs would otherwise still be sent to the remote executor.
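
A minimal sketch, assuming hypothetical config field names (not the real cdk-erigon config struct), of the gate the executor-enabled flag adds:

```go
package sequencer

type zkConfig struct {
	ExecutorEnabled bool     // the new flag
	ExecutorUrls    []string // pre-existing option
}

// useRemoteExecutor is false when the flag is off, even with URLs set,
// so limbo mode can be tested without txs reaching the remote executor.
func (c zkConfig) useRemoteExecutor() bool {
	return c.ExecutorEnabled && len(c.ExecutorUrls) > 0
}
```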
* fix: recover sender only for those txs that are included

* Moved recovering sender out of addTransaction.
Default context used for sender recovery.
Without this change there is a very small window of time where you could request a pending nonce after the pool has had the transactions removed but before the latest executed block is updated in the DB, so you in fact get the "latest" nonce from state for an old block instead.

We also slightly reduce locking on the pool by combining the two slices of transactions that we want to remove into one.
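
A minimal sketch, with hypothetical names, of that lock reduction: merge both removal sets first, then take the pool lock a single time instead of once per slice.

```go
package pool

import "sync"

type Tx struct{ hash [32]byte }

type TxPool struct {
	mu  sync.Mutex
	all map[[32]byte]*Tx
}

// removeFromPool combines both removal sets before locking, so the pool
// lock is acquired once rather than once per slice.
func (p *TxPool) removeFromPool(minedTxs, discardedTxs []*Tx) {
	toRemove := make([]*Tx, 0, len(minedTxs)+len(discardedTxs))
	toRemove = append(toRemove, minedTxs...)
	toRemove = append(toRemove, discardedTxs...)

	p.mu.Lock()
	defer p.mu.Unlock()
	for _, tx := range toRemove {
		delete(p.all, tx.hash)
	}
}
```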
* pool to manage flush locking internally

* acquire semaphore before tracking tx begin

* set no gossip to default to true

This really isn't needed in zkevm networks, as we don't support p2p gossip, and it creates load on the DB that we simply don't need.

Some of the deadlocks we've seen, once released, log errors from the gossip code on repeat, so it's better we don't attempt it at all in this case.

Setting the flag to false will override this default so we can re-enable gossip when the time suits.
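
A minimal sketch of the default flip, using the standard flag package and a hypothetical flag name rather than the node's real CLI wiring:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Default true: zkevm networks don't support p2p tx gossip, so it is
	// disabled unless the operator explicitly passes the flag as false.
	noGossip := flag.Bool("txpool.nogossip", true, "disable p2p transaction gossip")
	flag.Parse()
	fmt.Println("tx gossip enabled:", !*noGossip)
}
```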

* do not lock on waiting for the open transactions to close

If we lock here then we can't close down already-open transactions, because they will deadlock as they enter the function.
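
A minimal sketch, with hypothetical names, of that deadlock avoidance: mark the tracker as closing under the lock, then release the lock *before* waiting for open transactions to finish, so their completion paths can still acquire it.

```go
package db

import "sync"

type txTracker struct {
	mu      sync.Mutex
	closing bool
	open    sync.WaitGroup
}

// begin registers a new transaction; it refuses once closing has started.
func (t *txTracker) begin() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.closing {
		return false
	}
	t.open.Add(1)
	return true
}

func (t *txTracker) end() { t.open.Done() }

// close stops new transactions, then waits without holding the mutex:
// waiting under t.mu would deadlock any open transaction whose close
// path needs the same lock before it can call end().
func (t *txTracker) close() {
	t.mu.Lock()
	t.closing = true
	t.mu.Unlock()

	t.open.Wait()
}
```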

* update to mdbx deadlock code test

allows the test to run to completion without thread exhaustion

* useful comment about the stacked locks in the pool's best function

* enable pool p2p gossip for unit tests that rely on the old default behaviour

* remove unnecessary 2nd RO tx in send transaction

* do not call track end on error acquiring the semaphore

* increase timeout in stream client tests
…low (#1709)

* temporarily halt processing txs for a sender when they encounter an overflow

Previously, during an overflow situation, we would continue to attempt to process yielded transactions from the same sender. In networks that have a few very busy accounts this created nonce-too-high problems, which we then incorrectly attempted to remove from the pool. Eventually this led to transactions sitting in the queued pool that actually had a valid nonce and should have been processed.

Now we pause processing for a sender that encountered an overflow until the next batch, so that we don't create these problems.

* adjustments to sendersToSkip after changes to sender recovery logic

No more special treatment for nonce-too-high transactions. They stick around in the pool for too long and we keep attempting to process them, so they should be discarded like the others.

This will also update other transactions in the pool for the same sender and move those over into the queued pool, which also helps.
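
A minimal sketch, with hypothetical names, of the per-batch skip list the two commits above describe: once a sender's transaction overflows the batch counters, their remaining yielded transactions are ignored until the next batch.

```go
package stages

// yieldedTx pairs a sender with the attempt to execute their transaction;
// Exec reports whether execution overflowed the batch counters.
type yieldedTx struct {
	From string // sender address
	Exec func() (overflowed bool)
}

// processYielded skips any sender who already overflowed in this batch;
// sendersToSkip is reset when the next batch starts.
func processYielded(txs []yieldedTx, sendersToSkip map[string]struct{}) {
	for _, tx := range txs {
		if _, skip := sendersToSkip[tx.From]; skip {
			continue
		}
		if tx.Exec() {
			sendersToSkip[tx.From] = struct{}{}
		}
	}
}
```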
…cer [stable 2.61] (#1720)

* fix(gas-pool): create a gas pool per block in normalcy mode on sequencer

fix(gas-pool): changed name normalcy gas pool

* fix(gas-pool): fix for naming on gas pool and pull to block level in sequencer

---------

Co-authored-by: Elliot Hallam <elliot@revitt.consulting>
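
A minimal sketch of the fix's shape: allocate a fresh gas pool per block in normalcy mode rather than reusing one across the batch, so each block starts with its full gas limit. core.GasPool and AddGas exist in erigon; the wrapper name and exact import path here are assumptions.

```go
package sequencer

import "github.com/ledgerwatch/erigon/core"

// newBlockGasPool gives each block its own pool charged with the full
// block gas limit, instead of sharing one pool across the whole batch.
func newBlockGasPool(blockGasLimit uint64) *core.GasPool {
	return new(core.GasPool).AddGas(blockGasLimit)
}
```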
* create eth logs max limit flag

fix(eth-logs): create eth logs max limit flag to prevent DDoS risk

fix(eth-logs): fix tests

* fix(eth-logs): change the eth logs max range message to "block range too large"

* fix(eth-logs): add unit test for eth get logs using log max range

* tweak(api): add block range max to limit response

---------

Co-authored-by: Max Revitt <max@revitt.consulting>
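
A minimal sketch, with hypothetical names, of the range guard these commits describe: reject eth_getLogs queries spanning more blocks than the configured maximum, surfacing the limit in the error as the last tweak above adds.

```go
package jsonrpc

import "fmt"

// checkLogsRange rejects queries wider than maxRange blocks and echoes
// the configured maximum back in the error message.
func checkLogsRange(fromBlock, toBlock, maxRange uint64) error {
	if maxRange == 0 || toBlock < fromBlock {
		return nil // limit disabled, or nothing to check, in this sketch
	}
	if toBlock-fromBlock+1 > maxRange {
		return fmt.Errorf("block range too large: %d, max: %d", toBlock-fromBlock+1, maxRange)
	}
	return nil
}
```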

tweak(eth-logs): tests
tweak(tx-pool-grpc): log the error but still continue if not a valid type

tweak(tx-pool-grpc): add sender
A shared mutex in the sync.Cond caused a panic on unlocking an unlocked mutex, as it was shared with other logic.
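
A minimal sketch of the bug's shape: a sync.Cond must own its locker, so sharing that mutex with unrelated logic risks exactly this "unlock of unlocked mutex" panic. Giving the Cond a dedicated mutex avoids it. Names here are illustrative.

```go
package pool

import "sync"

type flushSignal struct {
	mu   sync.Mutex // dedicated to the Cond; never shared with other logic
	cond *sync.Cond
	done bool
}

func newFlushSignal() *flushSignal {
	f := &flushSignal{}
	f.cond = sync.NewCond(&f.mu)
	return f
}

// wait blocks until signal has been called.
func (f *flushSignal) wait() {
	f.mu.Lock()
	for !f.done {
		f.cond.Wait()
	}
	f.mu.Unlock()
}

// signal marks completion and wakes all waiters.
func (f *flushSignal) signal() {
	f.mu.Lock()
	f.done = true
	f.mu.Unlock()
	f.cond.Broadcast()
}
```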

* do not commit new block details to the DB during limbo processing

* do not clean up on unwind test completion
* nonce issue detection and remedy in execution

Here we pause handling further transactions for the sender in this batch, and also trigger the pool to perform a sender state change: removing nonce-too-low transactions and moving nonce-too-high ones to queued.
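
A minimal sketch, with hypothetical names, of that pool-side remedy: against the sender's current state nonce, drop transactions whose nonce is now too low and park nonce-too-high ones in the queued sub-pool until the gap closes.

```go
package pool

type poolTx struct {
	Nonce uint64
}

// remedySender splits a sender's pending txs by their relation to the
// state nonce: too low are discarded, too high move to queued.
func remedySender(pending []*poolTx, stateNonce uint64) (keep, queued []*poolTx) {
	for _, tx := range pending {
		switch {
		case tx.Nonce < stateNonce:
			// nonce too low: already used on chain, discard
		case tx.Nonce > stateNonce:
			queued = append(queued, tx) // nonce too high: wait in queued
		default:
			keep = append(keep, tx)
		}
	}
	return keep, queued
}
```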

* mop up discarded transactions during yielding correctly

* Ignore DbInfo in unwind test