thread '<unnamed>' panicked at 'chainsync fails after applying max retry policy: OddLength' #628

Open
latheesan-k opened this issue Jun 22, 2023 · 2 comments

@latheesan-k

When I start my oura client to sync NFT mints on preprod, I get the following error:

```text
[2023-06-22T22:13:02Z ERROR oura::utils::retry] max retries reached, failing whole operation
thread '<unnamed>' panicked at 'chainsync fails after applying max retry policy: OddLength', src/sources/n2n/setup.rs:56:18
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace
thread 'main' panicked at 'error in pipeline thread: Any { .. }', src/bin/oura/daemon.rs:316:23
```
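For context, `OddLength` matches the Debug output of the `hex` crate's `FromHexError::OddLength`, which is returned when a hex string has an odd number of characters, e.g. a block hash with one character missing or added. Whether oura's chainsync hits exactly this variant is an assumption, not confirmed from the oura source, but a minimal sketch reproduces the error string seen in the panic:

```rust
// Minimal sketch, not oura source: reproduce the bare "OddLength"
// string from the panic above using the `hex` crate
// (add `hex = "0.4"` to Cargo.toml).
fn main() {
    // The 64-character hash from oura.cursor decodes cleanly.
    let full = "d0761868744b3effcfb3e0d05a176a3670c068114498cd7a93c7430747aa65d0";
    assert!(hex::decode(full).is_ok());

    // Dropping one character leaves 63 hex digits -- an odd length --
    // so `hex::decode` fails with `FromHexError::OddLength`.
    let truncated = &full[..full.len() - 1];
    let err = hex::decode(truncated).unwrap_err();
    assert_eq!(format!("{err:?}"), "OddLength");
    println!("reproduced: {err:?}");
}
```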

Here is my configuration:

daemon.toml

```toml
# SOURCE

[source]
type = "N2N"
address = ["Tcp", "preprod-node.world.dev.cardano.org:30000"]
magic = "1"
min_depth = 3

[source.mapper]
include_transaction_details = true

# FILTER

[[filters]]
type = "Selection"

[filters.check]
predicate = "metadata_label_equals"
argument = "721"

# SINK

[sink]
type = "Terminal"
throttle_min_span_millis = 500
wrap = false

# CURSOR

[cursor]
type = "File"
path = "/home/ubuntu/path-to/oura.cursor"

# METRICS

[metrics]
address = "0.0.0.0:9186"
endpoint = "/metrics"
```

oura.cursor

```text
23919618,d0761868744b3effcfb3e0d05a176a3670c068114498cd7a93c7430747aa65d0
```
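The hash pasted above is a valid 64 hex characters, but since the failure is machine-specific, the copy of the cursor file on the Lightsail instance is worth checking byte-for-byte. A hypothetical standalone check (illustrative only, not oura code; it reuses the path from the daemon.toml above and assumes the `hex` crate) would be:

```rust
use std::fs;

// Hypothetical sanity check, not part of oura: the cursor file holds
// a single `slot,hash` line; the hash must decode as even-length hex.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = fs::read_to_string("/home/ubuntu/path-to/oura.cursor")?;
    let (slot, hash) = raw.trim().split_once(',').ok_or("missing comma")?;
    let slot: u64 = slot.parse()?; // slot must be a decimal integer
    let bytes = hex::decode(hash)?; // Err(OddLength) on a missing/extra char
    println!("cursor ok: slot {slot}, {}-byte hash", bytes.len());
    Ok(())
}
```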
@scarmuega
Member

Hi @latheesan-k, thanks for reporting. Is the behavior consistent? Meaning, do you get the error every time you process that particular block?

@latheesan-k
Author

latheesan-k commented Jun 23, 2023

> Hi @latheesan-k, thanks for reporting. Is the behavior consistent? Meaning, do you get the error every time you process that particular block?

Yes, this is consistent behaviour, but only on my Lightsail instance in AWS (running Ubuntu 20.04.6 LTS).
Locally on my Ubuntu desktop (running Ubuntu 22.04.2 LTS), it appears to work fine, so I am baffled as to what's causing the issue.

I thought there was some outgoing connection issue from the Lightsail instance, so I tried telnet to the Cardano public relay and it connected fine, which ruled out any firewall issue.
