Check if wasm was built by container exit code and state instead of local mountpoint #570
Conversation
… container's status code and state, instead of checking the volume mountpoint locally
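The check described in the PR title can be sketched as follows. This is a hedged illustration, not the PR's actual code: the helper and container names are hypothetical, and `docker wait` (which blocks until a container stops, then prints its exit code) stands in for whatever the deployment layer exposes. `main` demonstrates the same exit-status principle with a plain `sh` command so it runs without Docker:

```rust
use std::process::Command;

// Hypothetical helper (not from the PR): block until the named container
// stops, then read the exit code `docker wait` prints on stdout.
// A code of "0" means the wasm build succeeded.
fn wasm_build_succeeded(container: &str) -> std::io::Result<bool> {
    let output = Command::new("docker").args(["wait", container]).output()?;
    Ok(String::from_utf8_lossy(&output.stdout).trim() == "0")
}

fn main() {
    // Docker-free demonstration of the same principle: trust the process's
    // exit status rather than checking for files it may have left behind
    // (e.g. a volume mountpoint that exists even when the build failed).
    let ok = Command::new("sh").args(["-c", "exit 0"]).status().unwrap();
    let failed = Command::new("sh").args(["-c", "exit 3"]).status().unwrap();
    println!("build ok: {}, failure code: {:?}", ok.success(), failed.code());
    let _ = wasm_build_succeeded; // referenced so the sketch compiles cleanly
}
```

The design point is that an exit code is reported by the build process itself, whereas a mountpoint check only observes a side effect that can exist (or persist) regardless of whether the build succeeded.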
ACK to the first file, and concept ACK to logging. Logging should be via `log` though, not `println`. I'm happy to make that change myself if you'd prefer (I don't want to bog you down). Thanks for doing this :) This def was an annoyance last time around.
Thanks for the input, I'll check how to use the logger. Atm it looks super weird just printing the path without a timestamp like the rest of the runtime output.
Ok, I need to pass on this simple task of adding a logger to the node. I reverted the log line; the important thing is to not get stuck when building the wasm, because I also have this issue when running on Windows with a clean Docker Desktop installation.
That'd explain your issue :) You don't need to add a logger. You just need to invoke |
Sorry it was a bother for you, and definitely agreed on the priority being this (which I appreciate you picking up). Will merge once I locally test 👍
Thank you!
Hmm, regarding the logger: do you see the log line on startup? Because I don't get the log at all. I first had it like this, but since it didn't print anything, I thought of initializing the logger. Once I did that, I saw the log, but then presumably someone else also tries to init it and it crashes. I did a |
I noticed it didn't print after I made the commit, while doing local testing. Immediately, I assumed it was broken. It may not print because the node panics a moment later (and Parity may have implemented their logger asynchronously)? But I don't care to say that with confidence. I could either delete the attempt (which at least may work later), keep holding up this PR over this (obviously not worth it), or just leave it in to review at said later time (though by that logic I shouldn't have nit'd your println; though to be fair, I didn't predict `log` wouldn't work at the time). Sorry for potentially gaslighting you on "You just need to invoke". I did take a moment to review.
The log isn't present even on a node which doesn't panic (a devnet node). My initial thought that the panicking was responsible came from trying a testnet build (which does panic), as that'd exercise the testnet wasm building + build check, the entire point of this PR. So yeah, no, this log line just isn't valid here :/ I'll try to follow up there.
FWIW, we're not insane. Those initial lines are using |
Haha, no worries. We need to follow some simple best practices, and it is a complete classic that the first and most simple thing causes trouble 😄 To find some peace of mind I debugged into it: the logger is initialized right after calling load_spec, so a little too late for us :/
I'll look at moving it.
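The ordering problem described above can be shown with a minimal stdlib sketch. This is not Parity's logger code; it only mimics the `log` facade's documented behavior, where records emitted before a logger has been installed are silently dropped:

```rust
use std::sync::OnceLock;

// Stand-in for a global logger slot that can be set exactly once
// (as the real `log::set_logger` enforces). Until it is set,
// emitted records go nowhere.
static LOGGER: OnceLock<fn(&str)> = OnceLock::new();

// Returns whether the record was actually delivered.
fn log_info(msg: &str) -> bool {
    match LOGGER.get() {
        Some(sink) => {
            sink(msg);
            true
        }
        // Logger not initialized yet: the record is dropped.
        None => false,
    }
}

fn main() {
    // Emitted before initialization (i.e. before `load_spec` in the
    // discussion above): the wasm-path log line simply never appears.
    let delivered_early = log_info("using wasm at <path>");
    // Install the logger, then emit again: now it goes through.
    LOGGER.set(|m| println!("{m}")).ok();
    let delivered_late = log_info("using wasm at <path>");
    println!("early: {delivered_early}, late: {delivered_late}");
}
```

This is why moving the log statement (or the logger initialization) matters: the statement itself was fine, it just ran before any sink existed.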
…ocal mountpoint (#570)

* Check if the serai wasm was built successfully by verifying the build container's status code and state, instead of checking the volume mountpoint locally
* Use a log statement for which wasm is used
* Minor typo fix

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
No description provided.