- Overview
- Installation
- Usage
- Configuration
- Import Resolution Procedure
- Help
This is an experimental gazelle extension for scala. It has the following design characteristics:
- It only works on scala rules that already exist in a BUILD file. You are responsible for manually creating scala_library, scala_binary, and scala_test targets in their respective packages.
- It only manages compile-time scala deps; you are responsible for runtime_deps.
- Existing scala rules are evaluated for the contents of their srcs. Globs are interpreted the same as bazel starlark (unless there is a bug 😱).
- Source files named in the srcs are parsed for their import statements and exportable symbols (classes, traits, objects, ...).
- Dependencies are resolved by matching required imports against their providing rule labels. The resolution procedure is configurable.
Add build_stack_scala_gazelle
as an external workspace:
# Branch: master
# Commit: cd4ba132018c2ac709bfda4560da394da2544490
# Date: 2022-12-15 22:11:08 +0000 UTC
# URL: https://github.com/stackb/scala-gazelle/commit/cd4ba132018c2ac709bfda4560da394da2544490
#
# Refactor MemoParser (#69)
#
# * Refactor MemoParser
# * regen mocks
# * Fix cache read/write
# Size: 150152 (150 kB)
http_archive(
name = "build_stack_scala_gazelle",
sha256 = "a88095f943b5b382761efe300b098ae438a0083db844bea98efbcfaee6efa8bf",
strip_prefix = "scala-gazelle-cd4ba132018c2ac709bfda4560da394da2544490",
urls = ["https://github.com/stackb/scala-gazelle/archive/cd4ba132018c2ac709bfda4560da394da2544490.tar.gz"],
)
Update to the latest commit; the version shown in this readme is probably out of date!
Declare transitive dependencies in your WORKSPACE
as follows:
load("@build_stack_scala_gazelle//:workspace_deps.bzl", "language_scala_deps")
language_scala_deps()
load("@build_stack_scala_gazelle//:go_repos.bzl", build_stack_scala_gazelle_gazelle_extension_deps = "gazelle_extension_deps")
build_stack_scala_gazelle_gazelle_extension_deps()
At the time of this writing, scala-gazelle uses a feature from bazel-contrib/bazel-gazelle#1394. To patch:
http_archive(
name = "bazel_gazelle",
patch_args = ["-p1"],
patches = ["@build_stack_scala_gazelle//third_party/bazelbuild/bazel-gazelle:pr-1394.patch"],
sha256 = "5ebc984c7be67a317175a9527ea1fb027c67f0b57bb0c990bac348186195f1ba",
strip_prefix = "bazel-gazelle-2d1002926dd160e4c787c1b7ecc60fb7d39b97dc",
urls = ["https://github.com/bazelbuild/bazel-gazelle/archive/2d1002926dd160e4c787c1b7ecc60fb7d39b97dc.tar.gz"],
)
Include the language/scala extension in your gazelle_binary
rule. For
example:
gazelle_binary(
name = "gazelle-scala",
languages = [
"@bazel_gazelle//language/proto:go_default_library",
"@bazel_gazelle//language/go:go_default_library",
"@build_stack_rules_proto//language/protobuf",
"@build_stack_scala_gazelle//language/scala",
],
)
Reference the binary in the gazelle rule:
gazelle(
name = "gazelle",
args = [...],
gazelle = ":gazelle-scala",
)
The args
and data
are discussed below.
Invoke gazelle as per typical usage:
$ bazel run //:gazelle
The extension needs to know which rules it should manage (parse srcs / resolve deps). This is done using gazelle:scala_rule directives.
A preset catalog of providers is available out-of-the-box:
@io_bazel_rules_scala//scala:scala.bzl%scala_binary
@io_bazel_rules_scala//scala:scala.bzl%scala_library
@io_bazel_rules_scala//scala:scala.bzl%scala_macro_library
@io_bazel_rules_scala//scala:scala.bzl%scala_test
To enable a provider, instantiate a "rule provider config":
# gazelle:scala_rule scala_library implementation @io_bazel_rules_scala//scala:scala.bzl%scala_library
This reads as "create a rule provider configuration named 'scala_library' whose provider implementation is registered under the name '@io_bazel_rules_scala//scala:scala.bzl%scala_library'".
You may have your own scala rule macros that look like a scala_library or scala_binary, but have their own rule kinds and loads. To register these rules/macros as provider implementations, use the -existing_scala_{type}_rule=LOAD%KIND flag (where type is one of binary|library|test). For example:
gazelle(
name = "gazelle",
args = [
"-existing_scala_library_rule=//bazel_tools:scala.bzl%scala_app",
...
],
...
)
# gazelle:scala_rule scala_app implementation //bazel_tools:scala.bzl%scala_app
An advanced use case would involve writing your own scalarule.Provider implementation. To register it:
import "github.com/stackb/scala-gazelle/pkg/scalarule"
func init() {
scalarule.GlobalProviderRegistry().RegisterProvider(
"@foo//rules/scala.bzl:my_scala_library",
newMyScalaLibrary(),
)
}
Enable the rule provider configuration:
# gazelle:scala_rule my_scala_library implementation @foo//rules/scala.bzl:my_scala_library
At the core of the import resolution process is a trie structure where the keys
of the trie are parts of an import statement and the values are
*resolver.Symbol
structs.
For example, for the import io.grpc.Status
, the trie would contain the
following:
- io: (nil)
- grpc (type PACKAGE, from @maven//:io_grpc_grpc_core)
- Status (type CLASS, from @maven//:io_grpc_grpc_core)
When resolving the import io.grpc.Status.ALREADY_EXISTS, the longest prefix match would find the symbol io.grpc.Status (type CLASS), and the label @maven//:io_grpc_grpc_core would be added to the rule deps.
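As an illustration only (this is not the extension's actual data structure or code), longest-prefix matching over dotted import names can be sketched as a walk down a map-based trie, remembering the deepest node that carried a symbol:
import "strings"
// node is a hypothetical trie node: each child is keyed by one dotted
// segment, and label holds the providing bazel label, if any.
type node struct {
	label    string // e.g. "@maven//:io_grpc_grpc_core"; empty if none
	children map[string]*node
}
// longestPrefix walks the parts of imp (e.g. "io.grpc.Status.ALREADY_EXISTS")
// and returns the label of the deepest node that carries one.
func longestPrefix(root *node, imp string) (best string, found bool) {
	cur := root
	for _, part := range strings.Split(imp, ".") {
		next, ok := cur.children[part]
		if !ok {
			break
		}
		if next.label != "" {
			best, found = next.label, true
		}
		cur = next
	}
	return
}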
The trie is populated by resolver.SymbolProvider
implementations. Each
implementation provides symbols from a different data source.
A symbol provider:
- Has a canonical name.
- Must be enabled with the -scala_symbol_provider flag.
- Manages its own flags; check the provider source code for complete details (a rough sketch of the interface follows this list).
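For orientation, the method set of a provider, reconstructed here from the custom bazel-deps provider example later in this README (the authoritative definition lives in the pkg/resolver package), looks roughly like this:
import (
	"flag"

	"github.com/bazelbuild/bazel-gazelle/config"
	"github.com/bazelbuild/bazel-gazelle/label"
	"github.com/bazelbuild/bazel-gazelle/rule"

	"github.com/stackb/scala-gazelle/pkg/resolver"
)
// symbolProvider mirrors the methods implemented by the bazel-deps example
// below; consult resolver.SymbolProvider for the real interface.
type symbolProvider interface {
	// Name returns the canonical name of the provider (e.g. "source", "maven", "java").
	Name() string
	// RegisterFlags declares the provider's command-line flags.
	RegisterFlags(fs *flag.FlagSet, cmd string, c *config.Config)
	// CheckFlags validates flags and loads symbols into the given scope.
	CheckFlags(fs *flag.FlagSet, c *config.Config, scope resolver.Scope) error
	// CanProvide reports whether the provider can account for a given dependency label.
	CanProvide(dep *resolver.ImportLabel, knownRule func(from label.Label) (*rule.Rule, bool)) bool
	// OnResolve and OnEnd are lifecycle hooks invoked by the extension.
	OnResolve() error
	OnEnd() error
}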
The source
provider is responsible for indexing importable symbols from
.scala
source files during the rule generation phase.
Source files that are listed in the srcs
of existing scala rules are parsed.
The discovered object, class, and trait types are provided to the symbol trie so that they can be resolved by other rules.
The extension wouldn't do much without this provider, but it still needs to be
enabled in args
:
gazelle(
name = "gazelle",
args = [
"-scala_symbol_provider=source",
],
)
This provider reads maven_install.json
files that are produced from pinned
maven_install
repository rules.
As of bazel-contrib/rules_jvm_external#716 (Add index of packages in jar files when pinning
), @rules_jvm_external
indexes the package
names that jars provide.
The maven
provider reads these package names and populates the trie
accordingly. Note that since only package names are known, maven dependency
resolution via this mechanism alone is coarse-grained.
To configure the maven
provider, use the -maven_install_json_file
flag (can
be repeated if you have more than one maven_install
rule):
gazelle(
name = "gazelle",
args = [
"-scala_symbol_provider=source",
"-scala_symbol_provider=maven",
"-maven_install_json_file=$(location //:maven_install.json)",
"-maven_install_json_file=$(location //:artifactory_install.json)",
],
data = [
"//:maven_install.json",
"//:artifactory_install.json",
],
)
The java
provider indexes symbols from java-related dependencies in the bazel
graph. It relies on an index file produced by the java_index
rule:
load("@build_stack_scala_gazelle//rules:java_index.bzl", "java_index")
java_index(
name = "java_index",
deps = [
"@maven//:io_grpc_grpc_context",
"@maven//:io_grpc_grpc_core",
],
out_json = "java_index.json",
out_proto = "java_index.pb",
platform_deps = ["@bazel_tools//tools/jdk:platformclasspath"],
)
NOTE: Use
bazel build //:java_index --output_groups=json
to produce the JSON file if you want to inspect it.
The deps attribute names dependencies that you want indexed at a fine-grained level. Any label that provides JavaInfo will suffice.
The platform_deps
attribute is special: it indexes jars that are provided by
the platform and do not need to be resolved to a label in rule deps
. For
example, if you import java.util.Map
, no additional bazel label is required to
use it. The @bazel_tools//tools/jdk:platformclasspath
is the bazel rule that
supplies these symbols. You can also add things like
@maven//:org_scala_lang_scala_library
or other toolchain-provided jars that
never need to be explicitly stated in scala rule deps
.
To enable it:
gazelle(
name = "gazelle",
args = [
"-scala_symbol_provider=source",
"-scala_symbol_provider=java",
"-java_index_file=$(location //:java_index.pb)",
# the flag order is significant: put fine-grained providers (java)
# before coarse-grained ones (maven)
"-scala_symbol_provider=maven",
...
],
data = [
"//:java_index.pb",
],
)
The protobuf provider works in conjunction with the
stackb/rules_proto gazelle extension.
That extension parses proto files and supplies scala imports for proto
message
, enum
, and service
classes.
To resolve scala dependencies to protobuf rules, enable as follows:
gazelle(
name = "gazelle",
args = [
"-scala_symbol_provider=source",
"-scala_symbol_provider=protobuf",
...
],
)
TODO: provide an example repo showing the full configuration of these two extensions.
If your organization has an additional database or mechanism for import
tracking, you can implement the resolver.SymbolProvider
interface and
register it with the global registry.
For example, if your organization uses https://github.com/johnynek/bazel-deps, you might implement something like:
package provider
import (
"flag"
"fmt"
"github.com/bazelbuild/bazel-gazelle/config"
"github.com/bazelbuild/bazel-gazelle/label"
"github.com/bazelbuild/bazel-gazelle/rule"
"github.com/stackb/scala-gazelle/pkg/collections"
"github.com/stackb/scala-gazelle/pkg/resolver"
)
func init() {
resolver.
GlobalSymbolProviderRegistry().
AddSymbolProvider(newBazelDepsProvider())
}
// bazelDepsProvider is a provider of symbols for the
// johnynek/bazel-deps tool.
type bazelDepsProvider struct {
bazelDepsYAMLFiles collections.StringSlice
}
// newBazelDepsProvider constructs a new provider.
func newBazelDepsProvider() *bazelDepsProvider {
return &bazelDepsProvider{}
}
// Name implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) Name() string {
return "bazel-deps"
}
// RegisterFlags implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) RegisterFlags(fs *flag.FlagSet, cmd string, c *config.Config) {
fs.Var(&p.bazelDepsYAMLFiles, "bazel_deps_yaml_file", "path to bazel_deps.yaml")
}
// CheckFlags implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) CheckFlags(fs *flag.FlagSet, c *config.Config, scope resolver.Scope) error {
for _, filename := range p.bazelDepsYAMLFiles {
if err := p.loadFile(c.WorkDir, filename, scope); err != nil {
return err
}
}
return nil
}
func (p *bazelDepsProvider) loadFile(dir string, filename string, scope resolver.Scope) error {
return fmt.Errorf("Implement me; Supply symbols to the given scope!")
}
// CanProvide implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) CanProvide(dep *resolver.ImportLabel, knownRule func(from label.Label) (*rule.Rule, bool)) bool {
if dep.Label.Repo == "bazel_deps" {
return true
}
return false
}
// OnResolve implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) OnResolve() error {
return nil
}
// OnEnd implements part of the resolver.SymbolProvider interface.
func (p *bazelDepsProvider) OnEnd() error {
return nil
}
The CanProvide function is used to determine whether a provider is capable of providing a given dependency label. When rule deps are resolved, the existing deps list is cleared of those labels for which a provider can be found.
For example, given the rule:
scala_library(
name = "lib",
srcs = glob(["*.scala"]),
deps = [
"//src/main/scala:scala",
"@foo//:scala",
"@maven//:com_google_gson_gson",
],
)
The configured providers are checked to see which labels can be re-resolved. So, the intermediate state of the rule before deps resolution actually happens looks like:
scala_library(
name = "lib",
srcs = glob(["*.scala"]),
deps = [
- "//src/main/scala:scala", # can be resolved to a source rule - delete it!
"@foo//:scala", # don't know anything about @foo - leave it alone!
- "@maven//:com_google_gson_gson", # can be resolved by maven provider - delete it!
],
)
So, if the scala-gazelle extension is not confident that a label can be
re-resolved, it will leave the dependency alone, even without # keep
directives.
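A minimal sketch of that pruning behavior (illustrative only; pruneDeps is a hypothetical helper, not the extension's code): only labels that some enabled provider claims via CanProvide are dropped for re-resolution; everything else is kept.
import (
	"github.com/bazelbuild/bazel-gazelle/label"
	"github.com/bazelbuild/bazel-gazelle/rule"

	"github.com/stackb/scala-gazelle/pkg/resolver"
)
// pruneDeps keeps any existing dep that no enabled provider claims; claimed
// deps are removed so they can be re-resolved from the required imports.
func pruneDeps(deps []label.Label, providers []resolver.SymbolProvider, knownRule func(label.Label) (*rule.Rule, bool)) (kept []label.Label) {
	for _, dep := range deps {
		claimed := false
		for _, p := range providers {
			if p.CanProvide(&resolver.ImportLabel{Label: dep}, knownRule) {
				claimed = true
				break
			}
		}
		if !claimed {
			kept = append(kept, dep) // e.g. "@foo//:scala" above: unknown, so leave it alone
		}
	}
	return
}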
Issues can occur when more than one jar provides the same package name. This situation is known as a "split package". The io.grpc namespace is a classic example (see discussion). The class io.grpc.Context is in @maven//:io_grpc_grpc_context, but other classes like io.grpc.Status are in @maven//:io_grpc_grpc_core. Both jars advertise the package io.grpc.
To help avoid issues with split packages:
- Use the java provider to supply fine-grained deps for selected artifacts.
- Avoid wildcard imports that involve split packages.
When the symbol trie is populated from the enabled symbol providers, conflicts can arise if the same symbol is put more than once under the same name.
Rather than ignoring the duplicate, additional symbols are stored on the *resolver.Symbol.Conflicts slice. The Symbol struct is defined as follows:
// Symbol associates a name with the label that provides it, along with a type
// classifier that says what kind of symbol it is.
type Symbol struct {
// Type is the kind of symbol this is.
Type sppb.ImportType
// Name is the fully-qualified import name.
Name string
// Label is the bazel label where the symbol is provided from.
Label label.Label
// Provider is the name of the provider that supplied the symbol.
Provider string
// Conflicts is a list of symbols provided by another provider or label.
Conflicts []*Symbol
// Requires is a list of other symbols that are required by this one.
Requires []*Symbol
}
If an import resolves to a symbol that carries a conflict, a warning is emitted. Example:
Unresolved symbol conflict: CLASS "com.google.protobuf.Empty" has multiple providers!
- Maybe add one of the following to //common/akka/grpc:BUILD.bazel:
# gazelle:resolve scala scala com.google.protobuf.Empty @protobufapis//google/protobuf:empty_proto_scala_library:
# gazelle:resolve scala scala com.google.protobuf.Empty @maven//:com_google_protobuf_protobuf_java:
As the warning suggests, one way to suppress the warning is to add a gazelle:resolve
directive indicating which rule should be chosen.
Another way to resolve the conflict is to use a resolver.ConflictResolver
implementation, which has this signature:
// ConflictResolver implementations are capable of applying a conflict
// resolution strategy for conflicting resolved import symbols.
type ConflictResolver interface {
// ResolveConflict takes the context rule and imports, and the target symbol
// with conflicts to resolve.
ResolveConflict(universe Universe, r *rule.Rule, imports ImportMap, imp *Import, symbol *Symbol) (*Symbol, bool)
}
Another example:
Unresolved symbol conflict: PROTO_PACKAGE "examples.helloworld.greeter.proto" has multiple providers!
- Maybe remove a wildcard import (if one exists)
- Maybe add one of the following to @unity//examples/helloworld/greeter/server/scala:BUILD.bazel:
# gazelle:resolve scala scala examples.helloworld.greeter.proto //examples/helloworld/greeter/proto:examples_helloworld_greeter_proto_grpc_scala_library:
# gazelle:resolve scala scala examples.helloworld.greeter.proto //examples/helloworld/greeter/proto:examples_helloworld_greeter_proto_proto_scala_library:
In this case, the conflict occurred because the package examples.helloworld.greeter.proto was resolved via a wildcard import (import examples.helloworld.greeter.proto._). Because that package is provided by two rules (one proto-only, one grpc), we need to choose one.
One way to avoid this conflict is to remove the wildcard import and be explicit about which things are to be imported.
Another way is implemented by the scala_proto_package conflict resolver (see the sketch after this list):
- If the rule is using any grpc symbols, choose the examples_helloworld_greeter_proto_grpc_scala_library.
- If the rule is not using any grpc, take the proto one, since we don't want unnecessary grpc deps when they aren't needed.
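The choice can be sketched as follows (illustrative only, keyed off the label suffixes in the example above; ruleUsesGrpc is a hypothetical predicate supplied by the caller, not part of the extension's API):
import (
	"strings"

	"github.com/stackb/scala-gazelle/pkg/resolver"
)
// chooseProtoOrGrpc picks among a symbol and its conflicts: prefer the
// *_grpc_scala_library label when the rule uses grpc, otherwise the
// *_proto_scala_library label.
func chooseProtoOrGrpc(symbol *resolver.Symbol, ruleUsesGrpc bool) *resolver.Symbol {
	want := "_proto_scala_library"
	if ruleUsesGrpc {
		want = "_grpc_scala_library"
	}
	for _, c := range append([]*resolver.Symbol{symbol}, symbol.Conflicts...) {
		if strings.HasSuffix(c.Label.Name, want) {
			return c
		}
	}
	return symbol // no candidate matched; keep the original choice
}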
To use it, you need to register it with a flag and enable it with a directive:
gazelle(
name = "gazelle",
args = [
"-scala_conflict_resolver=scala_proto_package",
...
],
...
)
# gazelle:resolve_conflicts +scala_proto_package
The
+
sign is an intent modifier and is optional in the positive case.
To turn off this strategy in a sub-package:
# gazelle:resolve_conflicts -scala_proto_package
The predefined_label
conflict resolver prefers symbols that have no origin
label. For example, consider the following java_index
rule:
java_index(
name = "java_index",
out_json = "java_index.json",
out_proto = "java_index.pb",
platform_deps = [
"@bazel_tools//tools/jdk:platformclasspath",
"@maven//:org_scala_lang_scala_library",
],
visibility = ["//visibility:public"],
)
When the java provider reads this, it loads all the symbols from
@maven//:org_scala_lang_scala_library
and sets their label to label.NoLabel
,
which implies that this dependency @maven//:org_scala_lang_scala_library
is
not needed in rule deps
since the scala_library is already provided by the
toolchain / compiler.
When resolving the following conflict, it will choose the symbol without a label, thereby suppressing it in deps:
gazelle: conflicting symbols "scala.runtime": &resolver.Symbol{
Type: s"PACKAGE",
Name: "scala.runtime",
- Label: s"@maven//:org_scala_lang_scala_library",
+ Label: s"//:",
- Provider: "maven",
+ Provider: "java",
... // 1 ignored and 1 identical fields
}
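The underlying rule of thumb can be sketched like this (illustrative only, not the extension's code): among the conflicting symbols, prefer one whose label is empty (label.NoLabel), because that symbol requires no deps entry at all.
import (
	"github.com/bazelbuild/bazel-gazelle/label"

	"github.com/stackb/scala-gazelle/pkg/resolver"
)
// preferPredefined returns the first conflicting symbol that carries no label,
// meaning it is supplied by the platform/toolchain and needs no deps entry.
func preferPredefined(symbol *resolver.Symbol) (*resolver.Symbol, bool) {
	for _, c := range append([]*resolver.Symbol{symbol}, symbol.Conflicts...) {
		if c.Label == label.NoLabel {
			return c, true
		}
	}
	return nil, false
}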
To use it, you need to register it with a flag and enable it with a directive:
gazelle(
name = "gazelle",
args = [
"-scala_conflict_resolver=predefined_label",
...
],
...
)
# gazelle:resolve_conflicts predefined_label
You can implement your own conflict resolution strategies by implementing the resolver.ConflictResolver
interface and registering it with the global registry:
package custom
import "github.com/stackb/scala-gazelle/pkg/resolver"
func init() {
cr := &customConflictResolver{}
resolver.GlobalConflictResolverRegistry().PutConflictResolver(cr.Name(), cr)
}
type customConflictResolver struct {}
...
In some cases it is necessary to visit the resolved dependency list for a rule and apply custom logic to clean up labels. The resolver.DepsCleaner interface should be implemented and registered with a global registry, similar to how it is done with "Custom conflict resolvers".
To use a deps cleaner, you'll need to do the following:
- Implement the resolver.DepsCleaner interface (see code for details).
- Register your implementation in an init function via resolver.GlobalDepsCleanerRegistry().PutDepsCleaner(name, impl) using a unique name (e.g. a hypothetical proto_deps_cleaner).
- Make the deps cleaner available in your gazelle rule via the argument --scala_deps_cleaner=proto_deps_cleaner.
- Enable the deps cleaner in a BUILD file via the directive # gazelle:scala_deps_cleaner proto_deps_cleaner.
Parsing scala source files for a large repository is expensive. A cache can be
enabled via the -scala_gazelle_cache_file
flag. If present, the extension
will read and write to this file.
gazelle(
name = "gazelle",
args = [
"-scala_gazelle_cache_file=${BUILD_WORKING_DIRECTORY}/.scala-gazelle-cache.pb",
],
)
The cache stores a sha256 hash of each source file; it will use cached state if the hash matches the source file.
- Environment variables are expanded.
- To use a JSON cache (for example, to inspect it), change the file extension to .json.
- Bonus: the cache also records the total number of packages, which enables a nice progress bar.
Gazelle can be slow for large repositories. To get a better sense of what's going on, cpu and memory profiling can be enabled:
gazelle(
name = "gazelle",
args = [
"-cpuprofile_file=./gazelle.cprof",
"-memprofile_file=./gazelle.mprof",
],
)
Use bazel run @go_sdk//:bin/go -- tool pprof ./gazelle.cprof
to analyze it
(try the commands top10
or web
).
Use bazel run @go_sdk//:bin/go -- tool pprof ./gazelle.mprof to analyze the memory profile
(try the commands top10 or web).
This extension supports the following directives:
Instantiates a named rule provider configuration (enabled by default once instantiated):
# gazelle:scala_rule scala_library implementation @io_bazel_rules_scala//scala:scala.bzl%scala_library
To enable/disable the configuration in a subpackage:
# gazelle:scala_rule scala_library enabled false
# gazelle:scala_rule scala_library enabled true
This is a core gazelle directive; it is not implemented by this extension, but it applies here.
Use something like the following to override dependency resolution to a hard-coded label:
# gazelle:resolve scala scala.util @maven//:org_scala_lang_scala_library
Use this directive to co-resolve dependencies that, while not explicitly stated in the source file, are needed for compilation. Example:
# gazelle:resolve_with scala com.typesafe.scalalogging.LazyLogging org.slf4j.Logger
This is referred to as an "implicit" dependency internally.
These are included transitively.
The resolve_kind_rewrite_name
is required for the following scenario:
- You have a custom existing rule implemented as a macro, for example my_scala_app.
- The my_scala_app macro declares a "real" scala_library using a name like %{name}_lib.
In this case the extension would parse a my_scala_app rule at //src/main/scala/com/foo:scala; other rules that import symbols from this rule would resolve to //src/main/scala/com/foo:scala. However, there is no actual scala_library at :scala; it really should be //src/main/scala/com/foo:scala_lib.
This can be dealt with as follows:
# gazelle:resolve_kind_rewrite_name my_scala_app %{name}_lib
This tells the extension "if you find a rule with kind my_scala_app
, rewrite
the label name to name + "_lib"
, using the magic token %{name}
as a
placeholder."
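For illustration only (this is not the extension's code), the substitution amounts to a simple placeholder replacement:
import "strings"
// rewriteName applies the %{name} placeholder: for a my_scala_app rule named
// "scala" and the pattern "%{name}_lib", it yields "scala_lib", so
// //src/main/scala/com/foo:scala resolves as //src/main/scala/com/foo:scala_lib.
func rewriteName(pattern, name string) string {
	return strings.ReplaceAll(pattern, "%{name}", name)
}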
This directive can be used to resolve free names listed in a scala file against
the current file symbol scope. To inspect the names
of a file, take a look at the file parse cache. For example:
{
"label": "//common/utils/logging/scala",
"kind": "scala_library",
"files": [
{
"filename": "src/LogField.scala",
"imports": [
"com.typesafe.scalalogging.LazyLogging",
"net.logstash.logback.marker.MapEntriesAppendingMarker",
"net.logstash.logback.marker.ObjectAppendingMarker",
"scala.jdk.CollectionConverters._"
],
"packages": [
"common.utils.logging"
],
"objects": [
"common.utils.logging.LogField",
"common.utils.logging.LogFields"
],
"traits": [
"common.utils.logging.LogField"
],
"names": [
"LazyLogging",
"LogField",
"LogFields",
"MapEntriesAppendingMarker",
"ObjectAppendingMarker",
"String",
"apply",
"fieldName",
"fieldValue",
"fieldValue.toString",
"name",
],
"extends": {
"object trumid.common.utils.logging.LogField": {
"classes": [
"com.typesafe.scalalogging.LazyLogging"
]
}
}
}
],
"sha256": "3ee80930372ea846ebb48e55eb76d55fed89b6af5f05d08f98b38045eb0464d6",
"parseTimeMillis": "3"
},
In this case, if a dependency was missing from the deps
list, but would be
corrected by resolving ObjectAppendingMarker
(but not
MapEntriesAppendingMarker
, for example purposes), one could instruct the
resolver to try and resolve it selectively via:
# gazelle:resolve_file_symbol_name LogField.scala +ObjectAppendingMarker -MapEntriesAppendingMarker
The scala_debug
directive is a debugging aid that adds comments to the
generated rules detailing what the symbols are and how they resolved.
This adds a list of comments to the srcs
attribute detailing the required
imports and how they resolved. For example:
# gazelle:scala_debug imports
Generates:
scala_binary(
name = "app",
# import: ❌ AbstractServiceBase<ERROR> symbol not found (EXTENDS of foo.allocation.Main)
# import: ✅ akka.NotUsed<CLASS> @maven//:com_typesafe_akka_akka_actor_2_12<jarindex> (DIRECT of BusinessFlows.scala)
# import: ✅ java.time.format.DateTimeFormatter<CLASS> NO-LABEL<java> (DIRECT of RequestHandler.scala)
# import: ✅ scala.concurrent.ExecutionContext<PACKAGE> @maven//:org_scala_lang_scala_library<maven> (DIRECT of RequestHandler.scala)
srcs = glob(["src/main/**/*.scala"]),
main_class = "foo.allocation.Main",
)
This adds a list of comments to the srcs
attribute detailing the provided
exports and how they resolved. Example:
# gazelle:scala_debug exports
This adds a suffix comment to each resolved item in deps and exports showing how the dependency resolved.
# gazelle:scala_debug deps
If the rule has a main_class attribute, that name is added to the imports (type MAIN_CLASS).
The remainder of the rule imports are collected from the file imports of all .scala source files in the rule.
Once this initial set of imports is gathered, the transitive set of required symbols is collected from:
- extends clauses (type EXTENDS)
- imports matching a gazelle:resolve_with directive (type IMPLICIT)
The imports for a file are collected as follows:
The .scala file is parsed:
- Import statements are collected, including nested imports.
- A set of names is collected by traversing the body of the AST. Some of these names are function calls, some of them are types, etc.
Symbols named in import statements are added to the imports (type DIRECT).
A trie of the symbols in scope for the file is built from:
- the file package(s)
- wildcard imports
"Names" represent a variety of things in a scala file. It might be a class
instatiation (e.g new Foo()
), a static method call doSomething()
, and other
similar names.
In an earlier implementation of scala-gazelle, all names in the file were
tested against the file scope and matching symbols are added to the imports
list (of type RESOLVED_NAME
).
The drawback of that approach is that it was imprecise, potentially leading to
false positive import resolutions (and unnecessary and/or incorrect deps
list).
Now, name resolution is an opt-in feature using the gazelle directive gazelle:resolve_file_symbol_name
directive.
The resolution procedure works as follows (a minimal sketch follows the list):
- Is the import named in a gazelle:resolve override? If yes, stop ✅.
- Does the import satisfy a longest prefix match in the known import trie? If yes, stop ✅.
- Does the gazelle "rule index" and "cross-resolve" mechanism find a result for the import? If yes, stop ✅.
- No label was found. Mark it as symbol not found and move on ❌.
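A minimal sketch of that cascade, with each step as a hypothetical callback (this is not the extension's code; the callbacks stand in for the override, trie, and cross-resolve machinery):
import "github.com/bazelbuild/bazel-gazelle/label"
// lookupFn stands in for one resolution step (override, trie, cross-resolve).
type lookupFn func(imp string) (label.Label, bool)
// resolveImport tries each step in order and stops at the first hit;
// otherwise the import is marked "symbol not found".
func resolveImport(imp string, override, trie, crossResolve lookupFn) (label.Label, bool) {
	for _, step := range []lookupFn{override, trie, crossResolve} {
		if l, ok := step(imp); ok {
			return l, true
		}
	}
	return label.NoLabel, false
}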
For general help, please raise a github issue or ask on the bazel slack in the #gazelle channel.
If you need dedicated help integrating scala-gazelle into your repository or want additional features, please reach out to pcj@stack.build; assistance is available on a part-time contractual basis.