Wall-time/all tasks profiler #55889

Open · d-netto wants to merge 1 commit into master from dcn-all-task-profiler

Conversation

d-netto (Member) commented on Sep 26, 2024

One limitation of the sampling CPU/thread profiler, as currently implemented in Julia, is that it primarily captures samples from CPU-intensive tasks.

If many tasks are performing IO or contending for concurrency primitives like semaphores, these tasks won’t appear in the profile, as they aren't scheduled on OS threads sampled by the profiler.

A wall-time profiler, like the one implemented in this PR, samples tasks regardless of OS thread scheduling. This enables profiling of IO-heavy tasks and detecting areas of heavy contention in the system.
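
For illustration, a minimal usage sketch (the workload below is just a placeholder; the entry point is the Profile.@profile_walltime macro exercised in the example workloads further down):

using Base.Threads
using Profile

# Placeholder workload: tasks that spend most of their time waiting,
# which a CPU-sampling profiler would largely miss.
function sleepy_workload()
    @sync for _ in 1:100
        Threads.@spawn sleep(0.1)
    end
end

# Sample all live tasks (running or waiting) while the call executes.
Profile.@profile_walltime sleepy_workload()

# Inspect the samples afterwards, assuming the usual Profile tooling
# applies to wall-time samples as well.
Profile.print()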

Co-developed with @nickrobinson251.

d-netto (Member, Author) commented on Sep 26, 2024

An additional note: this kind of wall-time/all-tasks profiler has also been implemented for Go (where it is known as a goroutine profiler), so there is some precedent for this in other languages as well: https://github.com/felixge/fgprof.

d-netto (Member, Author) commented on Sep 26, 2024

@nickrobinson251 I can't assign you as a reviewer... Feel free to assign yourself or just post review comments.

IanButterworth (Member) commented:

I think this is related to #55103. Could the metrics here be useful in that too?

nsajko added the "feature" label on Sep 26, 2024
d-netto (Member, Author) commented on Sep 30, 2024

> I think this is related to #55103. Could the metrics here be useful in that too?

For diagnosing excessive scheduling time? I can't immediately see how this PR would be useful for that.

d-netto (Member, Author) commented on Sep 30, 2024

> For diagnosing excessive scheduling time

#55103 seems like a much more direct approach for doing so, at least.

NHDaly added the "needs tests" label on Oct 1, 2024
d-netto (Member, Author) commented on Oct 22, 2024

> Can you share some examples, e.g. simple workloads you've tried this out on, and what the profiles look like?

Yes, here they are:

Workload 1: a_bunch_of_tasks_waiting_on_channel.jl

using Base.Threads
using Profile
using PProf

ch = Channel(1)

const N_SPAWNED_TASKS = (1 << 10)
const WAIT_TIME_NS = 10_000_000

function spawn_a_bunch_of_tasks_waiting_on_channel()
    for i in 1:N_SPAWNED_TASKS
        Threads.@spawn begin
            take!(ch)
        end
    end
end

function busywait()
    t0 = time_ns()
    while true
        if time_ns() - t0 > WAIT_TIME_NS
            break
        end
    end
end

function main()
    spawn_a_bunch_of_tasks_waiting_on_channel()
    for i in 1:N_SPAWNED_TASKS
        put!(ch, i)
        busywait()
    end
end

Profile.@profile_walltime main()

Expected results

Given that we have a lot more tasks waiting on a channel than tasks actually computing (spinning), we expect to see a large number of samples in channel-related functions. That seems to be the case here.

[Screenshot: profile for workload 1, captured 2024-10-22 09:57]
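
For completeness, a sketch of how a profile like the one above can be viewed after the run; this assumes PProf.jl's standard pprof() entry point reads the same sample buffer that @profile_walltime fills:

using PProf

# Export the collected samples and serve them for interactive viewing
# in the browser (PProf.jl's standard entry point; nothing here is
# specific to this PR).
pprof()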

Workload 2: a_lot_of_short_lived_tasks.jl

using Base.Threads
using Profile
using PProf

const N_SPAWNED_TASKS = (1 << 16)
const WAIT_TIME_NS = 100_000

function spawn_a_bunch_of_short_lived_tasks()
    for i in 1:N_SPAWNED_TASKS
        Threads.@spawn begin
            # Do nothing
        end
    end
end

function busywait()
    t0 = time_ns()
    while true
        if time_ns() - t0 > WAIT_TIME_NS
            break
        end
    end
end

function main()
    spawn_a_bunch_of_short_lived_tasks()
    for i in 1:N_SPAWNED_TASKS
        busywait()
    end
end

Profile.@profile_walltime main()

Expected results

Given that we have a lot of short-lived tasks, we expect a large number of samples corresponding to "failed to sample task & capture backtrace". That seems to be the case here.

[Screenshot: profile for workload 2, captured 2024-10-22 10:03]

d-netto (Member, Author) commented on Oct 22, 2024

A few more workloads suggested by @NHDaly.

Workload 3: compute_heavy.jl

using Base.Threads
using Profile
using PProf

ch = Channel(1)

const MAX_ITERS = (1 << 22)
const N_TASKS = (1 << 12)

function spawn_a_task_waiting_on_channel()
    Threads.@spawn begin
        take!(ch)
    end
end

function sum_of_sqrt()
    sum_of_sqrt = 0.0
    for i in 1:MAX_ITERS
        sum_of_sqrt += sqrt(i)
    end
    return sum_of_sqrt
end

function spawn_a_bunch_of_compute_heavy_tasks()
    Threads.@sync begin
        for i in 1:N_TASKS
            Threads.@spawn begin
                sum_of_sqrt()
            end
        end
    end
end

function main()
    spawn_a_task_waiting_on_channel()
    spawn_a_bunch_of_compute_heavy_tasks()
end

Profile.@profile_walltime main()

Expected results

We have a lot more compute-heavy tasks than sleeping tasks. We expect to see a lot of samples in sum_of_sqrt. That seems to be the case here.

[Screenshot: profile for workload 3, captured 2024-10-22 10:53]

NHDaly (Member) commented on Oct 22, 2024

Cool, thanks! 🎉

🤔 Is it expected that the currently-scheduled tasks seem to have their stacks starting at a different frame than the waiting tasks?

It looks like the executing tasks' stacks start right with the function run by the Task (##1()), while the waiting tasks' stacks start with start_task | jl_apply | and then the Task's function?

I can't decide whether I think this is helpful or not. On the one hand, it's maybe nice to visually separate the scheduled tasks from the sleeping ones; on the other hand, I think it would make more sense to merge the stacks, given that they share most of their content.

NHDaly (Member) commented on Oct 22, 2024

My instinct is that this is not desirable; we should figure out why they're different and correct that.

d-netto (Member, Author) commented on Oct 22, 2024

> Is it expected that the currently-scheduled tasks seem to have their stacks starting at a different frame than the waiting tasks?

Good question, I don't know. Will investigate this.

Labels: feature, profiler

8 participants