
Benchmarking virtualfund (TTFP over multiple hops) #1809

Closed · wants to merge 11 commits

@geoknee (Contributor) commented Oct 5, 2023

I wrote a benchmark reusing many utils from our integration tests. It is quite possible that I made a mistake; it needs a careful double check.

The main test function sets up a network of Alice, Bob, and up to 12 intermediaries. Alice has a ledger connection to the first intermediary; Bob has a ledger connection to all of the others.

Sub-benchmarks are then run which construct progressively longer paths from Alice to Bob and create a virtual channel over each path (sketched below).
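For concreteness, here is a minimal sketch of that benchmark shape. The helpers `setupNetwork` and `createVirtualChannel` are hypothetical stand-ins for the integration-test utils; this is not the code in the PR:

```go
package bench

import (
	"fmt"
	"testing"
)

// node stands in for a go-nitro client; the real benchmark uses the
// integration-test utils mentioned above.
type node struct{ name string }

// setupNetwork and createVirtualChannel are hypothetical stand-ins.
func setupNetwork(n int) (alice, bob node, intermediaries []node) {
	alice, bob = node{"alice"}, node{"bob"}
	for i := 0; i < n; i++ {
		intermediaries = append(intermediaries, node{fmt.Sprintf("intermediary%d", i)})
	}
	return
}

func createVirtualChannel(alice, bob node, path []node) {
	// In the real benchmark this runs the virtualfund protocol to completion.
}

func BenchmarkVirtualFund(b *testing.B) {
	alice, bob, intermediaries := setupNetwork(12)
	for k := 1; k <= len(intermediaries); k++ {
		path := intermediaries[:k] // progressively longer paths
		b.Run(fmt.Sprintf("%d-intermediaries", k), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				createVirtualChannel(alice, bob, path)
			}
		})
	}
}
```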

The message service was configured with a 100ms delay for realism (we want the network latency to dominate the other latencies in the protocol). This is not as realistic as it could be: if the nodes were physically arranged in a line (or a circle, or any 3D arrangement), the latency between each pair of nodes would not be constant.
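As an aside, a position-dependent latency model would be easy to sketch. The helper below is hypothetical and not part of this PR; it assumes nodes arranged on a line with a 100ms delay between adjacent nodes:

```go
package bench

import "time"

// latency sketches a position-dependent delay: nodes i and j sit on a
// line, and the one-way delay scales with their separation instead of
// being a single constant. The 100ms figure for adjacent nodes matches
// the delay used in this benchmark.
func latency(i, j int) time.Duration {
	const perUnit = 100 * time.Millisecond
	d := i - j
	if d < 0 {
		d = -d
	}
	return time.Duration(d) * perUnit
}
```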

Results were piped through ChatGPT and https://www.tutorialspoint.com/execute_matplotlib_online.php (a very rapid way to get a graph).

Vertical units are seconds; since the network latency is 100ms, one second is 10 × the network latency.

In theory, the TTFP should be 5 × the network latency (500ms here), independent of the number of hops. This is shown as a horizontal line:

Attempt 1

[plot: benchmark results, TTFP in seconds vs. number of hops]

I have also plotted the theoretical behaviour of an HTLC-based payment network (green line). The unifying idea is "time to finality on a payment": in Nitro, we additionally have to virtualdefund to make the funds usable again, whereas in Lightning they are usable straight away; but we get finality before then.
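Restating the two theoretical lines with symbols (this just collects the numbers quoted above and the "roundtrip per hop" behaviour discussed below; $\ell$ is the one-way network latency, $n$ the number of hops, and the HTLC formula takes "a roundtrip per hop" at face value):

$$
\mathrm{TTFP}_{\text{Nitro}} \approx 5\ell = 500\,\mathrm{ms}, \qquad \mathrm{TTFP}_{\text{HTLC}} \approx 2n\ell
$$

So the Nitro line is flat in $n$, while the HTLC line grows linearly with it.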

Conjecture: the test message service delays messages in a blocking manner, whereas we want the delays to apply concurrently across nodes.
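To make the conjecture concrete, here is a hedged sketch (not the actual test message service) of the difference: sleeping inline in the dispatch loop serializes every delivery, while sleeping in a per-message goroutine lets the delays overlap the way real network latency would:

```go
package bench

import "time"

type message struct{}

type svc struct{ delay time.Duration }

func (s *svc) deliver(m message) { /* hand the message to the recipient */ }

// Blocking: sleeping inline serializes deliveries, so per-message delays
// accumulate and inflate the measured TTFP.
func (s *svc) dispatchBlocking(msgs <-chan message) {
	for m := range msgs {
		time.Sleep(s.delay) // nothing else is delivered during this sleep
		s.deliver(m)
	}
}

// Concurrent: each message is delayed independently, so delays overlap.
func (s *svc) dispatchConcurrent(msgs <-chan message) {
	for m := range msgs {
		m := m
		go func() {
			time.Sleep(s.delay)
			s.deliver(m)
		}()
	}
}
```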

Attempt 2

[plot: benchmark results, TTFP in seconds vs. number of hops]
Conjecture: the actual latency is not well modelled by our use of a "max delay" (which, as I understand it, delays each message by a random duration up to the maximum rather than by a fixed amount).
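Assuming "max delay" means each message is delayed by a uniformly random duration up to the configured maximum (my reading, not confirmed from the code), the mean delay is only half the configured value and varies per message, unlike a fixed latency:

```go
package bench

import (
	"math/rand"
	"time"
)

// Assumed semantics of "max delay": each message is delayed by a uniformly
// random duration in [0, maxDelay), so the mean delay is only maxDelay/2
// and varies from message to message.
func randomDelay(maxDelay time.Duration) time.Duration {
	return time.Duration(rand.Int63n(int64(maxDelay)))
}

// Fixed delay, as in attempt 3: every message waits exactly the same time.
func fixedDelay(delay time.Duration) time.Duration {
	return delay
}
```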

Attempt 3

Fixing the delay to a rigid constant gives results which agree pretty well with the theory:

[plot: benchmark results, TTFP in seconds vs. number of hops]

There is an increase as the number of hops goes up, but it is much slower than the "one roundtrip per hop" growth we would get with HTLCs. Bear in mind that this is all running on my laptop, so some slowdown is expected as the system becomes more complex.

@geoknee closed this Oct 9, 2023