
Conversation

brodeuralexis (Contributor)

It is sometimes necessary for a NIF to have some global initialization logic in its load callback, like we already have for atom and resource registration.

This commit inlines the `fine::__private__::load` function in the `FINE_INIT` macro to delay template instantiation of a new `fine::__private__::OnLoad` struct that can be overridden by the user via the `FINE_LOAD(env)` macro to provide customized initialization logic, while still registering atoms and resources.
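For example, a minimal usage sketch (the module name, hook body, and exact macro shape are illustrative only):

```cpp
// Sketch of the intended usage; the exact expansion of FINE_LOAD may differ.
#include <fine.hpp>

FINE_LOAD(env) {
  // Runs once when the NIF library is loaded, in addition to the atom and
  // resource registration that FINE_INIT already performs, e.g. starting a
  // worker pool or opening handles to native libraries.
}

FINE_INIT("Elixir.MyApp.NIF");
```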

@jonatanklosko (Member) left a comment

Hey @brodeuralexis! I dropped a comment regarding the API :)

That said, I wonder if there is logic that the user would put on load which couldn't be done as typical initialization. In the docs example, couldn't they just as well do this:

```cpp
static ThreadPool s_pool = ThreadPool(std::thread::hardware_concurrency());
```

And if that's the case, I would probably hold off on extra APIs until there is a use case.

@brodeuralexis changed the title from "Allow definition of a load NIF callback" to "Allow definition of a load and unload NIF callback" on Sep 22, 2025
@brodeuralexis (Contributor, Author) commented on Sep 22, 2025

I've also added code to support the unload callback, as I believe both should be provided at the same time.

> That said, I wonder if there is logic that the user would put on load which couldn't be done as typical initialization. In the docs example, couldn't they just as well do this:

I've reworked the example to accept the number of threads from `load_info`, which might be obtained from an application configuration value and forwarded to `:erlang.load_nif/2`.

> And if that's the case, I would probably hold off on extra APIs until there is a use case.

In libbpf, ring buffers must be polled from a dedicated thread. I do something akin to the following in that thread:

```cpp
// Poll loop running on a dedicated thread; g_exiting and g_exited are globals
// shared with the unload path.
while (!g_exiting) {
  err = ring_buffer__poll(rb, 100 /* timeout */);
  if (err < 0 && err != -EINTR) {
    perror("ring_buffer__poll");
    abort();
  }
}
g_exited.set();
```

For the ERTS to exit successfully, I need to set the `g_exiting` atomic boolean and then wait on the `g_exited` atomic flag to ensure that the thread has exited before unloading completes. Failing to terminate the thread can crash the ERTS.
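
Roughly, the load/unload pairing I have in mind looks like this (a sketch; `FINE_UNLOAD` is just a placeholder name for the unload hook, and the poll body is elided):

```cpp
// Sketch only: signal the poller on unload and wait for it to finish before
// the library goes away.
#include <atomic>
#include <thread>

static std::atomic<bool> g_exiting{false};
static std::thread g_poller;

FINE_LOAD(env) {
  g_poller = std::thread([] {
    while (!g_exiting.load()) {
      // err = ring_buffer__poll(rb, 100 /* timeout */); ...
    }
  });
}

FINE_UNLOAD(env) {
  // A native thread still running after unload can crash the ERTS, so stop
  // the poller and join it here.
  g_exiting.store(true);
  if (g_poller.joinable()) {
    g_poller.join();
  }
}
```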

For now, I hard-code 100 ms as the polling timeout, but in the future this value could be passed through `:erlang.load_nif/2` as a `load_info` keyword list, allowing users to pick a timeout via `Mix.Config`:

```elixir
config :bpf,
  ring_buffer_polling_timeout: 100
```

@jonatanklosko (Member) left a comment

The API looks great, I dropped one comment with alternative implementation :)

While using `fine::Registration` incurs some runtime cost, there are no
longer any constraints on the order of callback registration and
`FINE_INIT`.
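
The registration approach boils down to something like the following sketch (the names and details are illustrative, not fine's actual internals):

```cpp
// Callbacks add themselves to a global registry during static initialization;
// the load function generated by FINE_INIT walks the registry later, so the
// relative order of FINE_LOAD and FINE_INIT in the source no longer matters.
#include <functional>
#include <vector>

struct ErlNifEnv;

using LoadCallback = std::function<void(ErlNifEnv *)>;

inline std::vector<LoadCallback> &load_callbacks() {
  // Meyers singleton: safe to call from any static initializer, at the cost
  // of a small runtime indirection.
  static std::vector<LoadCallback> callbacks;
  return callbacks;
}

struct Registration {
  explicit Registration(LoadCallback callback) {
    load_callbacks().push_back(std::move(callback));
  }
};

// A FINE_LOAD(env) { ... } definition would then expand to roughly:
static Registration registration_for_on_load([](ErlNifEnv *env) {
  (void)env; // user-provided initialization logic goes here
});
```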
@jonatanklosko (Member) left a comment

Small comments and looks good to me!

@jonatanklosko (Member) left a comment

Fantastic, thank you! 🍵

@jonatanklosko merged commit edadd7d into elixir-nx:main on Oct 7, 2025 (7 checks passed).