
feat(method): add a method type resolution server to improve performance#71

Merged
visualfc merged 2 commits into goplus:main from go-wyvern:perf_parse_type
Mar 4, 2026

Conversation

@go-wyvern
Member

No description provided.

Copilot AI review requested due to automatic review settings March 4, 2026 02:03
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces two distinct caching layers to optimize method type resolution and function pointer creation within the reflection system. By storing previously computed results, the changes aim to reduce redundant work and improve the overall performance of operations involving method introspection and invocation, particularly in scenarios where the same method types or function pointers are frequently requested.

Highlights

  • Method Type Resolution Caching: Introduced a caching mechanism for parserMethodType results to avoid redundant computations when rmap (type replacement map) is not used, significantly improving performance for repeated method type lookups.
  • Function Pointer Caching: Implemented caching for function pointers (ptfn) created by createMethod for non-pointer methods, reducing the overhead of reflect.MakeFunc and resolveReflectText for frequently accessed methods.


Changelog
  • method.go
    • Added parserMethodTypeCache (a map) and parserMethodTypeResult (a struct) to store and retrieve parsed method type information.
    • Modified parserMethodType to check parserMethodTypeCache for existing results at the beginning of the function and store new results at the end, specifically when the rmap parameter is nil.
  • methodof.go
    • Introduced globalPtfnCache (a map) and ptfnKey (a struct) to cache function pointers.
    • Updated the resetAll function to clear the newly added globalPtfnCache.
    • Modified createMethod to utilize globalPtfnCache for caching the ptfn (pointer to function) when creating non-pointer methods, preventing repeated reflect.MakeFunc calls.

@gemini-code-assist bot left a comment

Code Review

This pull request aims to improve performance by introducing caching for parserMethodType and createMethod functions. However, a critical vulnerability exists due to the use of unsynchronized global Go maps, which can lead to thread-safety issues, runtime panics, and potential Denial of Service (DoS). Additionally, the parserMethodTypeCache lacks a clearing mechanism, posing a risk of memory accumulation.

)

// parserMethodType result cache
var parserMethodTypeCache = make(map[reflect.Type]*parserMethodTypeResult)


Severity: high (security)

The global map parserMethodTypeCache is accessed without proper synchronization, leading to thread-safety issues and potential runtime panics under concurrent access, which could result in a Denial of Service (DoS). Furthermore, this cache lacks a clearing mechanism, which can cause unbounded memory growth and memory leaks. It is essential to implement thread-safe access (e.g., using sync.RWMutex or sync.Map) and consider adding a reset function for parserMethodTypeCache, similar to how globalIfnCache and globalPtfnCache are handled in methodof.go's resetAll function.

)

var globalIfnCache = make(map[ifnKey]*ifnValue)
var globalPtfnCache = make(map[ptfnKey]textOff)


Severity: high (security)

The global map globalPtfnCache is accessed for both reading and writing without any synchronization mechanism. Concurrent access to this map will lead to a runtime panic, resulting in a Denial of Service (DoS). This occurs in createMethod when caching the ptfn result and in resetAll when re-initializing the cache.

Copilot AI left a comment

Pull request overview

Adds caching around method-type parsing and pointer-method function text resolution to reduce repeated reflection work and improve runtime performance.

Changes:

  • Introduced a global cache for pointer-receiver trampoline function (ptfn) resolution in createMethod.
  • Added a global cache for parserMethodType results when rmap is nil.
  • Reset logic updated to clear the new ptfn cache.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 8 comments.

File Description
methodof.go Adds a global ptfn cache and uses it in createMethod to avoid rebuilding reflect.MakeFunc trampolines.
method.go Adds parserMethodType result caching to avoid recomputing struct/function type metadata.


)

var globalIfnCache = make(map[ifnKey]*ifnValue)
var globalPtfnCache = make(map[ptfnKey]textOff)

Copilot AI Mar 4, 2026


Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.

Copilot uses AI. Check for mistakes.
Comment on lines +194 to 197
key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}
if cached, ok := globalPtfnCache[key]; ok {
ptfn = cached
} else {

Copilot AI Mar 4, 2026


Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.

Comment on lines +208 to 210
ptfn = resolveReflectText(tovalue(&cv).ptr)
globalPtfnCache[key] = ptfn
}

Copilot AI Mar 4, 2026


Both reads and writes to globalPtfnCache are unsynchronized. In Go, concurrent map access can panic at runtime. If createMethod can be called from multiple goroutines, protect this cache with a sync.RWMutex, use sync.Map, or otherwise ensure all accesses are serialized.

Comment on lines 9 to +12
"strings"
)

// parserMethodType result cache

Copilot AI Mar 4, 2026


parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.

Suggested change:

-	"strings"
-)
-// parserMethodType result cache
+	"strings"
+	"sync"
+)
+// parserMethodType result cache
+var parserMethodTypeCacheMu sync.RWMutex

Comment on lines +480 to +484
if rmap == nil {
if cached, ok := parserMethodTypeCache[mtyp]; ok {
return cached.in, cached.out, cached.ntyp, cached.inTyp, cached.outTyp
}
}

Copilot AI Mar 4, 2026


parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.

Comment on lines +521 to +529
if rmap == nil {
parserMethodTypeCache[mtyp] = &parserMethodTypeResult{
in: in,
out: out,
ntyp: ntyp,
inTyp: inTyp,
outTyp: outTyp,
}
}

Copilot AI Mar 4, 2026


parserMethodTypeCache is a package-level map that is read/written without synchronization. If parserMethodType is reachable concurrently, this can cause a 'concurrent map read and map write' panic. Consider guarding with a sync.RWMutex or switching to sync.Map.

Comment on lines +193 to +194
// Cache ptfn creation based on ctyp, index, and variadic
key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}

Copilot AI Mar 4, 2026


The ptfn cache key includes ctyp produced by reflect.FuncOf(...). reflect.FuncOf can create distinct unnamed function types for identical signatures, which would reduce cache hit rate (and potentially make this cache grow more than expected). Consider keying by a more stable signature (e.g., receiver type ptyp + method type m.Type/tfn + index + variadic, or a canonical hash over in/out types) rather than the newly constructed ctyp.

Suggested change:

-// Cache ptfn creation based on ctyp, index, and variadic
-key := ptfnKey{ctyp: ctyp, index: index, variadic: variadic}
+// Cache ptfn creation based on a stable method type, index, and variadic
+key := ptfnKey{ctyp: m.Type, index: index, variadic: variadic}

Comment on lines +481 to +483
if cached, ok := parserMethodTypeCache[mtyp]; ok {
return cached.in, cached.out, cached.ntyp, cached.inTyp, cached.outTyp
}

Copilot AI Mar 4, 2026


The cache returns the stored in/out slices directly. If any caller later appends to or mutates these slices, it can corrupt the cached value for future callers. To make the cache safe, store immutable copies (copy when inserting) and/or return copies when serving from cache.

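The slice-aliasing hazard this comment describes is easy to demonstrate and to fix with a defensive copy. A self-contained sketch (hypothetical names; `putShared` and `putCopied` are not from the PR):

```go
package main

import "fmt"

var cache = make(map[string][]int)

// putShared stores the caller's slice header directly: the cached entry
// shares its backing array with the caller's slice.
func putShared(key string, v []int) { cache[key] = v }

// putCopied stores an independent copy, so nothing the caller does to its
// slice afterwards can corrupt the cached value.
func putCopied(key string, v []int) {
	cache[key] = append([]int(nil), v...)
}

func main() {
	s := make([]int, 2, 4) // spare capacity makes the hazard visible
	s[0], s[1] = 1, 2

	putShared("shared", s)
	_ = append(s, 3) // writes into the shared backing array
	s[0] = 99        // ...and this mutates the cached value too
	fmt.Println(cache["shared"][0]) // → 99

	s2 := []int{1, 2}
	putCopied("copied", s2)
	s2[0] = 99
	fmt.Println(cache["copied"][0]) // → 1
}
```

Copying on insert (as in `putCopied`) is usually the cheaper choice for a cache, since inserts happen once per key while reads happen many times.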
@xgopilot

xgopilot bot commented Mar 4, 2026

The caching approach is sound, but there are two significant correctness issues: both new caches are unprotected plain map values (data race under concurrent use), and parserMethodTypeCache is not cleared in resetAll() while globalPtfnCache is, leaving the two caches in an inconsistent state after a reset. The PR title also contains a typo ("dd a method type resolution server" → "add a method type resolution cache").

@@ -9,6 +9,18 @@ import (
"strings"


parserMethodTypeCache is a plain map with no mutex. Concurrent calls to parserMethodType (e.g. when SetMethodSet is called from multiple goroutines) will data-race on this map. Consider using sync.Map or protecting it with a sync.RWMutex.

outTyp = reflect.StructOf(outFields)

// Cache the result when rmap is nil
if rmap == nil {

parserMethodTypeCache is never cleared in resetAll(), but globalPtfnCache is. After a ResetAll() call, globalPtfnCache is wiped while stale inTyp/outTyp structs from the previous generation remain in parserMethodTypeCache. The two caches become inconsistent. Add parserMethodTypeCache = make(map[reflect.Type]*parserMethodTypeResult) to resetAll() in methodof.go.
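The inconsistency the reviewer points out disappears if every cache is re-initialized in one place. A minimal sketch of a `resetAll`-style function (illustrative stand-in maps, not the PR's types):

```go
package main

import "fmt"

// Illustrative stand-ins for the two caches added by this PR, pre-seeded
// with stale entries from a previous "generation".
var (
	methodTypeCache = map[string]int{"stale": 1}
	ptfnCache       = map[string]int{"stale": 2}
)

// resetAll re-initializes every cache together, so neither cache can hold
// previous-generation entries while its sibling is empty.
func resetAll() {
	methodTypeCache = make(map[string]int)
	ptfnCache = make(map[string]int)
}

func main() {
	resetAll()
	fmt.Println(len(methodTypeCache), len(ptfnCache)) // → 0 0
}
```

Re-making the maps (rather than deleting keys) also lets the old backing storage be garbage-collected, which addresses the memory-growth concern raised earlier.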

@@ -31,6 +32,12 @@ type ifnValue struct {
ctxs map[*Context]struct{}
}

The variadic field is redundant. ctyp is constructed as reflect.FuncOf(..., variadic), so ctyp.IsVariadic() == variadic always holds — two keys with the same ctyp and index can never differ on variadic. Remove the field to simplify the key and avoid misleading readers.

Suggested change:

    }

    type ptfnKey struct {
        ctyp  reflect.Type
        index int
    }

@@ -13,6 +13,7 @@ import (
)

Same race condition as parserMethodTypeCache: globalPtfnCache is an unprotected plain map. Concurrent createMethod calls will race on reads and writes here.

outTyp = reflect.StructOf(outFields)

// Cache the result when rmap is nil
if rmap == nil {
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The in and out slices stored in the cache share their backing array with the values returned to callers. Currently createMethod only uses append([]reflect.Type{ptyp}, in...) (always allocates a new array) so this is safe today, but any future caller that does a mutating append on the returned slice could silently corrupt the cache. Store copies:

    inCopy := append([]reflect.Type(nil), in...)
    outCopy := append([]reflect.Type(nil), out...)
    parserMethodTypeCache[mtyp] = &parserMethodTypeResult{
        in: inCopy, out: outCopy, ntyp: ntyp, inTyp: inTyp, outTyp: outTyp,
    }

@go-wyvern go-wyvern changed the title feat(method): dd a method type resolution server to improve performance feat(method): add a method type resolution server to improve performance Mar 4, 2026
@visualfc visualfc merged commit b1f158d into goplus:main Mar 4, 2026
27 checks passed


3 participants