Use context cancellation in search workers #15

@Fuabioo

Description

Problem

When search results reach the limit, workers continue processing remaining files from the channel instead of stopping immediately.

Evidence

search.go:76-94

for i := 0; i < opts.Workers; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for path := range fileChan {
            result := searchFile(path, searchQuery, opts.CaseSensitive)
            if result != nil {
                mu.Lock()
                if opts.Limit > 0 && len(results) >= opts.Limit {
                    mu.Unlock()
                    return  // This worker exits, but others continue
                }
                results = append(results, result)
                mu.Unlock()
            }
        }
    }()
}

When one worker hits the limit and exits, other workers continue processing all remaining files from the channel until it's drained.

Impact

With a large codebase and low limit (e.g., --limit 5), the search might process hundreds of files after already having enough results. This is inefficient but not a correctness issue.

Suggested Fix

Use context.Context for cancellation so all workers can stop as soon as the limit is reached. The producer feeding fileChan should also select on ctx.Done(), otherwise it can block forever sending to a channel no worker is reading anymore:

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// In each worker, replace `for path := range fileChan` with:
for {
    select {
    case <-ctx.Done():
        return // another worker hit the limit
    case path, ok := <-fileChan:
        if !ok {
            return // channel drained
        }
        // process path...
    }
}

// When the limit is reached (inside the mu-guarded check):
cancel()

Files: internal/history/search.go:76-94
