📝 Walkthrough

Adds remote filter management (download, compile, and local caching), expands the Room schema (v11→v12), replaces the in-process trie/bloom implementations with on-disk binaries loaded by the native tunnel engine (multi-file support), introduces a custom-filter API/manager with zip extraction, and migrates many utilities.
Sequence Diagram

```mermaid
sequenceDiagram
    participant VPN as AdBlockVpnService
    participant Repo as FilterListRepository
    participant API as Remote API
    participant DL as FilterDownloadManager
    participant FS as FileSystem
    participant Engine as Tunnel Engine
    VPN->>Repo: seedDefaultsIfNeeded()
    VPN->>Repo: fetchAndSyncRemoteFilterLists()
    activate Repo
    Repo->>API: GET filter_lists.json
    API-->>Repo: metadata + URLs
    Repo->>DL: downloadFilterList(filter, forceUpdate?)
    activate DL
    DL->>API: GET .bloom/.trie/.css
    API-->>DL: file bytes
    DL->>FS: write tmp -> rename (atomic)
    DL-->>Repo: DownloadedFilterPaths
    deactivate DL
    Repo->>FS: persist URL/metadata to DB
    Repo-->>VPN: done
    deactivate Repo
    VPN->>Repo: loadAllEnabledFilters()
    Repo->>FS: gather CSV of trie/bloom paths
    Repo-->>VPN: paths CSVs
    VPN->>Engine: setTries(adCsv, secCsv, adBloomCsv, secBloomCsv)
    activate Engine
    Engine->>FS: open multiple .trie/.bloom files (mmap/load)
    Engine-->>VPN: engine ready
    deactivate Engine
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a916febcc7
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```kotlin
if (filter.bloomUrl.isNullOrEmpty() || filter.trieUrl.isNullOrEmpty()) {
    Timber.d("Skipping ${filter.name}: no pre-compiled URLs")
    continue
```
Keep loading enabled custom filter lists
loadAllEnabledFilters() now skips any enabled list whose bloomUrl or trieUrl is empty, but user-added lists are still inserted with only url (their binary URL fields default to empty). In that common flow, the list stays enabled in UI yet is never loaded into the engine, so custom filters silently stop blocking domains after this change.
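One way to address this, sketched below as a minimal illustration: fall back to compiling the plain-text `url` when no pre-compiled binaries exist, instead of skipping the list. `FilterEntry`, `compileFromSource`, and `loadPrecompiled` are hypothetical names, not the app's real API.

```kotlin
// Hypothetical guard: custom lists carry only `url`, so compile it locally
// rather than dropping the list from the engine entirely.
data class FilterEntry(
    val name: String,
    val url: String,
    val bloomUrl: String?,
    val trieUrl: String?,
)

fun selectLoadPath(filter: FilterEntry): String =
    if (filter.bloomUrl.isNullOrEmpty() || filter.trieUrl.isNullOrEmpty()) {
        // Fallback path for user-added lists without binary URLs.
        "compileFromSource(${filter.url})"
    } else {
        "loadPrecompiled(${filter.trieUrl})"
    }
```

This keeps user-added lists functional while built-in lists still take the fast pre-compiled path.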
```kotlin
        Result.failure(e)
    }
}
suspend fun validateFilterUrl(url: String): Result<Boolean> = Result.success(true)
```
Restore real URL validation for added filter lists
validateFilterUrl() now always returns success, so FilterSetupViewModel.addFilterList() will accept arbitrary/invalid URLs and persist them as usable filters. Users only discover problems later when updates/load fail, which is a regression from the previous guard that rejected non-filter content up front.
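A minimal sketch of the up-front check this comment asks to restore, limited to syntactic validation (the previous guard reportedly also rejected non-filter *content*, which would require a network fetch and is omitted here). The function name is illustrative, not the repository's actual signature.

```kotlin
import java.net.URI

// Reject obviously invalid inputs before persisting them as filter lists.
fun isPlausibleFilterUrl(url: String): Boolean {
    val uri = try {
        URI(url)
    } catch (e: Exception) {
        return false // malformed URI syntax
    }
    if (uri.scheme !in listOf("http", "https")) return false // require web scheme
    if (uri.host.isNullOrBlank()) return false               // require a host
    return true
}
```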
```go
if i < len(adBlooms) {
    adBloom = adBlooms[i]
}
```
Preserve bloom-trie pairing when a bloom file fails
The lookup path assumes bloom and trie arrays are index-aligned, but SetTries() builds those slices independently and only appends successful loads. If one bloom file fails to load while its trie succeeds, later bloom filters shift left and may gate the wrong trie, causing false negatives (blocked domains being treated as clean) in both DNS and MITM checks.
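The "paired container" idea can be sketched as follows (in Kotlin for illustration; the engine itself is Go): each trie and its bloom are loaded as one unit, so a failed bloom load can never shift later blooms against the wrong trie. `loadTrie`/`loadBloom` are hypothetical loaders returning null on failure.

```kotlin
// One entry per CSV index; bloom may be absent, but the pairing is preserved.
data class FilterPair(val trie: String, val bloom: String?)

fun loadPairs(
    paths: List<Pair<String, String>>,          // (triePath, bloomPath) per index
    loadTrie: (String) -> String?,
    loadBloom: (String) -> String?,
): List<FilterPair> =
    paths.mapNotNull { (triePath, bloomPath) ->
        val trie = loadTrie(triePath) ?: return@mapNotNull null // no trie → skip pair
        FilterPair(trie, loadBloom(bloomPath)) // null bloom means "no pre-filter", not misalignment
    }
```

Lookup code then treats a null bloom as "always consult the trie", which fails safe instead of producing false negatives.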
Actionable comments posted: 11
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt`:
- Around line 103-124: The live engine is being fed raw selector files (via
file.readText()) instead of valid CSS generated by
FilterDownloadManager.getInjectableCss(File), so cosmetic rules never become
proper CSS; locate the code in GoTunnelAdapter where engine.setCosmeticCSS(...)
is invoked (currently passing file.readText()) and replace that call to use the
injectable CSS produced by FilterDownloadManager.getInjectableCss(file) (or the
instance method on FilterDownloadManager), ensure you handle the empty-string
fallback (don’t call setCosmeticCSS with raw selectors) and add the necessary
import/instance reference so the engine receives the CSSBuilder output.
- Around line 67-100: The downloadFile function currently writes the response
body regardless of HTTP status; after obtaining response = client.get(url) (in
downloadFile), check response.status (e.g., response.status.isSuccess or
status.value in 200..299) and if it is not successful, log an error including
the URL and status, close/discard the response body/channel, and return null
without creating/writing the tempFile; only proceed to call
response.bodyAsChannel() and stream to tempFile when the status indicates
success. Ensure the error path does not leave partial temp files or cache
entries (delete any tempFile if present) and use the same identifiers:
downloadFile, client.get, response.status, response.bodyAsChannel, tempFile,
destFile.
In `@app/src/main/java/app/pwhs/blockads/data/remote/models/FilterList.kt`:
- Around line 10-11: The FilterList data class default of isBuiltIn = true is
being overridden by the custom manifest parser; update the parser in
FilterListRepository (the block that calls extractBoolean("isBuiltIn")) to
respect the model default when the key is missing—either call a variant like
extractBoolean("isBuiltIn", default = true) or change extractBoolean to accept
and return a provided default when the key is absent, so FilterList.isBuiltIn is
true for manifests that omit the field.
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`:
- Around line 489-531: The current code builds "objects" by splitting the JSON
string and uses local parsers extractString/extractInt/extractBoolean over
"cleaned", which is fragile; replace this ad-hoc parsing with a proper JSON
deserializer (e.g., kotlinx.serialization, Moshi, or Gson) to parse the incoming
JSON into a typed DTO (e.g., FilterListRemote) and then map those DTO instances
to your app.pwhs.blockads.data.remote.models.FilterList; ensure you preserve the
id fallback (id ?: name.lowercase().replace(" ", "_")), copy all fields (name,
description, isEnabled, isBuiltIn, category, ruleCount, bloomUrl, trieUrl,
cssUrl, originalUrl), catch and log JSON parsing exceptions around the
deserialization so failures are visible, and remove the
objects/cleaned/extractString/extractInt/extractBoolean string-manipulation
logic.
- Around line 675-677: The methods getDomainPreview(FilterList, limit) and
validateFilterUrl(url) currently return hardcoded placeholders which bypass real
behavior; replace them with real implementations: implement getDomainPreview to
parse the provided FilterList (use its rules or source content) to extract
domain names up to the given limit, handle network or parsing errors by
returning an empty list or partial results and log/report exceptions, and
implement validateFilterUrl to perform proper URL validation (check scheme, host
presence, percent-encoding, and optionally a lightweight reachability or HEAD
request with timeout), returning Result.success(true) only for valid URLs and
Result.failure(...) with a descriptive exception for invalid inputs; update or
add unit tests for both getDomainPreview and validateFilterUrl to cover edge
cases (empty/invalid filters, unreachable URLs, weird schemes) and remove the
placeholder returns.
- Around line 314-320: The domain checks in checkDomainAndParents are currently
case-sensitive; normalize the input once by lowercasing the incoming domain
parameter at the start of checkDomainAndParents and use that normalized value
for all subsequent checker calls (including the parent-loop and the "*.$d"
wildcard checks) so the checker always receives lowercased strings and
mixed-case lookups won't miss matches.
- Around line 642-645: The method updateSingleFilter currently treats the
operation as successful if the initial result.isSuccess is true but ignores the
Result returned by loadAllEnabledFilters(), causing false success; modify
updateSingleFilter to capture the Result from loadAllEnabledFilters() (call it
reloadResult), check if reloadResult.isSuccess before returning
Result.success(filter.ruleCount), and if reloadResult.isFailure return
reloadResult (or Result.failure(reloadResult.exceptionOrNull() ?:
Exception("reload failed"))) after optionally rolling back or skipping the stats
update; also ensure filterListDao.updateStats(id = filter.id, ...) only runs
when both the initial operation and loadAllEnabledFilters() succeed so callers
receive an accurate Result.
- Around line 608-610: The early return when validLists is empty in
FilterListRepository.kt (inside the withContext block where validLists is
computed) prevents cosmetic_rules.css from being cleaned up when the user
selects only security lists; instead, ensure the cosmetic CSS cleanup runs even
if validLists.isEmpty — either move the existing cosmetic_rules.css removal
logic before the if (validLists.isEmpty()) return@withContext check or add an
explicit deletion/cleanup call for cosmetic_rules.css in the branch that handles
the empty validLists case so stale cosmetic rules are removed when only security
lists remain.
In `@app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt`:
- Around line 290-292: The remote sync is invoked before the boot path waits for
connectivity causing fetchAndSyncRemoteFilterLists() to fail and
loadAllEnabledFilters() to use stale/empty data; change the flow so that
filterRepo.fetchAndSyncRemoteFilterLists() is either gated behind the same
network wait (use networkAvailableFlow.first() before calling it) or add a
follow-up call to fetchAndSyncRemoteFilterLists() once
networkAvailableFlow.first() completes (e.g., after the boot connectivity resume
block around where networkAvailableFlow.first() is awaited), ensuring
loadAllEnabledFilters() runs only after a successful sync or retry.
In `@tunnel/engine.go`:
- Around line 145-176: The current shutdown/reload sequence unmaps mmapped
tries/blooms while lookups may access them (SetTries, Stop call Close on mmaps
and tunnel/trie.go:t.Close and tunnel/bloom.go:t.Close unmap buffers) causing
races with readers (IsDomainBlocked, handleDNSQuery) — fix by making the swap
atomic or adding an RWMutex: change the engine to use a single immutable
snapshot struct (e.g., TriesAndBlooms) and atomically replace the pointer when
reloading, or protect all accesses (readers in IsDomainBlocked/handleDNSQuery
and writers in SetTries/Stop and the close loops for
adTries/secTries/adBlooms/secBlooms) with an RWMutex (RLock for lookups, Lock
for replacing/closing) so Close() never unmaps while readers hold RLock.
- Around line 178-227: The code currently compresses loaded tries and bloom
filters independently (e.adTries/e.secTries vs e.adBlooms/e.secBlooms) causing
index misalignment when a file fails to load; update the
LoadMmapTrie/LoadBloomFilter loops (the blocks that append to e.adTries,
e.secTries, e.adBlooms, e.secBlooms) so that you preserve CSV order by either
(A) storing each trie+bloom as a single paired container (e.g., a struct or
tuple) and append that pair only once per CSV index, or (B) when one side fails
to load append a nil placeholder for that index to the corresponding slice
(append(nil) to e.adBlooms/e.secBlooms when LoadBloomFilter fails, and
append(nil) to e.adTries/e.secTries when LoadMmapTrie fails) so SetTries(),
lookup code, and the other referenced blocks (also at the other occurrences
noted) see aligned indices; update any lookup logic to handle nil placeholders
accordingly.
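The immutable-snapshot approach suggested for `tunnel/engine.go` can be sketched as follows (expressed in Kotlin purely to illustrate the shape; the real fix would use Go's `atomic.Pointer` or an `RWMutex`). Names are stand-ins for the engine's members.

```kotlin
import java.util.concurrent.atomic.AtomicReference

// All loaded filter state lives in one immutable snapshot; readers take a
// single atomic read and see a consistent view for the whole lookup.
class Snapshot(val adTries: List<String>, val secTries: List<String>)

class Engine {
    private val snapshot = AtomicReference(Snapshot(emptyList(), emptyList()))

    fun isBlocked(domain: String): Boolean {
        val s = snapshot.get() // one atomic read; never a half-replaced set
        return domain in s.adTries || domain in s.secTries
    }

    fun setTries(ad: List<String>, sec: List<String>) {
        snapshot.set(Snapshot(ad, sec)) // swap pointer atomically
        // NOTE: safely unmapping/closing the *old* snapshot still requires
        // ensuring no reader holds it (refcount or grace period) — elided here.
    }
}
```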
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3cd30953-bff5-4dd0-a449-1f92f8bfc258
⛔ Files ignored due to path filters (1)
app/libs/tunnel-sources.jar is excluded by !**/*.jar
📒 Files selected for processing (10)
- app/libs/tunnel.aar
- app/src/main/java/app/pwhs/blockads/data/AppDatabase.kt
- app/src/main/java/app/pwhs/blockads/data/entities/FilterList.kt
- app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt
- app/src/main/java/app/pwhs/blockads/data/remote/models/FilterList.kt
- app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt
- app/src/main/java/app/pwhs/blockads/di/AppModule.kt
- app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt
- app/src/main/java/app/pwhs/blockads/service/GoTunnelAdapter.kt
- tunnel/engine.go
```kotlin
private suspend fun downloadFile(url: String, destFile: File, forceUpdate: Boolean): String? {
    if (!forceUpdate && destFile.exists() && destFile.length() > 0) {
        Timber.d("File already exists: ${destFile.name}")
        return destFile.absolutePath
    }

    return try {
        Timber.d("Downloading from $url to ${destFile.name}")
        val response = client.get(url)
        val channel = response.bodyAsChannel()

        val tempFile = File(destFile.parent, "${destFile.name}.tmp")
        FileOutputStream(tempFile).use { output ->
            val buffer = ByteArray(8 * 1024)
            var bytesRead: Int
            while (channel.readAvailable(buffer).also { bytesRead = it } >= 0) {
                if (bytesRead > 0) {
                    output.write(buffer, 0, bytesRead)
                }
            }
        }

        if (tempFile.renameTo(destFile)) {
            Timber.d("Successfully downloaded to ${destFile.absolutePath}")
            destFile.absolutePath
        } else {
            Timber.e("Failed to rename temp file to ${destFile.name}")
            tempFile.delete()
            null
        }
    } catch (e: Exception) {
        Timber.e(e, "Failed to download $url")
        null
    }
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "== HttpClient configuration =="
sed -n '43,66p' app/src/main/java/app/pwhs/blockads/di/AppModule.kt
echo
echo "== downloadFile implementation =="
sed -n '67,100p' app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt
echo
echo "== Repo-wide response validation hooks =="
rg -n "expectSuccess|HttpResponseValidator|status\\.isSuccess|response\\.status" app/src/main/java
```

Repository: pass-with-high-score/blockads-android
Length of output: 2253
Validate HTTP response status before writing downloaded files to cache.
The Ktor HttpClient in AppModule.kt is not configured with `expectSuccess`, and downloadFile() does not validate `response.status` before writing to disk. This allows HTTP error responses (404, 500, etc.) to be written as corrupt `.bloom`/`.trie` files. The condition at line 68 then caches and reuses these files indefinitely on subsequent app startups.
Add a status check immediately after client.get(url):
Suggested fix
```diff
+import io.ktor.http.isSuccess
+
 return try {
     Timber.d("Downloading from $url to ${destFile.name}")
     val response = client.get(url)
+    if (!response.status.isSuccess()) {
+        Timber.w("Skipping ${destFile.name}: HTTP ${response.status}")
+        return null
+    }
     val channel = response.bodyAsChannel()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt`
around lines 67 - 100, The downloadFile function currently writes the response
body regardless of HTTP status; after obtaining response = client.get(url) (in
downloadFile), check response.status (e.g., response.status.isSuccess or
status.value in 200..299) and if it is not successful, log an error including
the URL and status, close/discard the response body/channel, and return null
without creating/writing the tempFile; only proceed to call
response.bodyAsChannel() and stream to tempFile when the status indicates
success. Ensure the error path does not leave partial temp files or cache
entries (delete any tempFile if present) and use the same identifiers:
downloadFile, client.get, response.status, response.bodyAsChannel, tempFile,
destFile.
```kotlin
/**
 * Reads a downloaded CSS file containing raw selectors, appends { display: none !important; }
 * and returns a single valid CSS string ready for injection.
 */
fun getInjectableCss(file: File): String {
    if (!file.exists() || file.length() == 0L) {
        return ""
    }

    val cssBuilder = StringBuilder()
    try {
        file.forEachLine { line ->
            val selector = line.trim()
            if (selector.isNotEmpty()) {
                cssBuilder.append(selector).append(" { display: none !important; }\n")
            }
        }
    } catch (e: Exception) {
        Timber.e(e, "Error reading CSS file ${file.absolutePath}")
        return ""
    }
    return cssBuilder.toString()
}
```
The selector-to-CSS conversion is not wired into the live engine path.
app/src/main/java/app/pwhs/blockads/service/GoTunnelAdapter.kt Line 267-Line 273 still does file.readText() and sends the raw file straight to engine.setCosmeticCSS(...). If these downloaded .css files really contain selector lists as documented here, cosmetic filtering never becomes valid CSS.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt`
around lines 103 - 124, The live engine is being fed raw selector files (via
file.readText()) instead of valid CSS generated by
FilterDownloadManager.getInjectableCss(File), so cosmetic rules never become
proper CSS; locate the code in GoTunnelAdapter where engine.setCosmeticCSS(...)
is invoked (currently passing file.readText()) and replace that call to use the
injectable CSS produced by FilterDownloadManager.getInjectableCss(file) (or the
instance method on FilterDownloadManager), ensure you handle the empty-string
fallback (don’t call setCosmeticCSS with raw selectors) and add the necessary
import/instance reference so the engine receives the CSSBuilder output.
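The wiring fix described above has roughly this shape (a sketch with stand-in names: `setCosmeticCSS` here is a plain lambda standing in for the engine call, and the selector-to-CSS conversion mirrors `getInjectableCss` rather than calling the real class):

```kotlin
// Convert raw selectors into valid CSS before handing them to the engine,
// instead of passing file.readText() through unchanged.
fun pushCosmeticCss(selectors: List<String>, setCosmeticCSS: (String) -> Unit) {
    val css = selectors
        .map { it.trim() }
        .filter { it.isNotEmpty() }
        .joinToString("\n") { "$it { display: none !important; }" }
    if (css.isNotEmpty()) setCosmeticCSS(css) // empty-string fallback: skip the call
}
```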
```kotlin
val isEnabled: Boolean = false,
val isBuiltIn: Boolean = true,
```
isBuiltIn = true is not honored by the current manifest parser.
The custom parser in app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt Line 486-Line 530 always passes extractBoolean("isBuiltIn"), and that helper returns false when the key is missing. So Line 11 looks like a safe default, but manifests that omit isBuiltIn will still deserialize as false.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/remote/models/FilterList.kt` around
lines 10 - 11, The FilterList data class default of isBuiltIn = true is being
overridden by the custom manifest parser; update the parser in
FilterListRepository (the block that calls extractBoolean("isBuiltIn")) to
respect the model default when the key is missing—either call a variant like
extractBoolean("isBuiltIn", default = true) or change extractBoolean to accept
and return a provided default when the key is absent, so FilterList.isBuiltIn is
true for manifests that omit the field.
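The defaulting variant described above could look like this (a sketch; the real helper is a local function inside the repository's parsing block):

```kotlin
// When the key is absent from the manifest, return the caller-supplied
// default instead of silently returning false.
fun extractBoolean(cleaned: String, key: String, default: Boolean = false): Boolean {
    val pattern = "\"$key\"\\s*:\\s*(true|false)".toRegex()
    val match = pattern.find(cleaned)?.groupValues?.get(1) ?: return default
    return match == "true"
}
```

Calling `extractBoolean(cleaned, "isBuiltIn", default = true)` then matches the data class default for manifests that omit the field.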
```kotlin
private inline fun checkDomainAndParents(domain: String, checker: (String) -> Boolean): Boolean {
    if (checker(domain)) return true
    var d = domain
    while (d.contains('.')) {
        d = d.substringAfter('.')
        if (checker(d)) return true
        // Check wildcard: *.remaining (e.g., *.example.com matches sub.example.com)
        if (checker("*.$d")) return true
```
Domain checks are case-sensitive at lookup time.
Line 314 through Line 320 compares raw input against lowercase sets; uppercase/mixed-case domains can miss matches. Normalize once at entry in checkDomainAndParents.
💡 Proposed fix
```diff
 private inline fun checkDomainAndParents(domain: String, checker: (String) -> Boolean): Boolean {
-    if (checker(domain)) return true
-    var d = domain
+    val normalized = domain.lowercase()
+    if (checker(normalized)) return true
+    var d = normalized
     while (d.contains('.')) {
         d = d.substringAfter('.')
         if (checker(d)) return true
         if (checker("*.$d")) return true
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`
around lines 314 - 320, The domain checks in checkDomainAndParents are currently
case-sensitive; normalize the input once by lowercasing the incoming domain
parameter at the start of checkDomainAndParents and use that normalized value
for all subsequent checker calls (including the parent-loop and the "*.$d"
wildcard checks) so the checker always receives lowercased strings and
mixed-case lookups won't miss matches.
Old code (removed side of the hunk — the previous in-process fingerprint/trie cache path):

```kotlin
// ── Build per-filter fingerprints ──
val currentFpMap = buildFingerprintMap(enabledLists)
val currentFingerprint = currentFpMap.entries
    .sortedBy { it.key }
    .joinToString(";") { "${it.key}:${it.value}" }

val savedFingerprint = try {
    if (fingerprintFile.exists()) fingerprintFile.readText() else ""
} catch (_: Exception) {
    ""
}

// ── Strategy 1: Cache HIT — nothing changed ──
if (currentFingerprint == savedFingerprint
    && adTrieFile.exists() && adTrieFile.length() > 0
) {
    adTrie = DomainTrie.loadFromMmap(adTrieFile)
    securityTrie = if (secTrieFile.exists() && secTrieFile.length() > 0) {
        DomainTrie.loadFromMmap(secTrieFile)
    } else null

    val elapsed = System.currentTimeMillis() - startTime
    val totalCount = (adTrie?.size ?: 0) + (securityTrie?.size ?: 0)
    _domainCountFlow.value = totalCount
    Timber.d("Trie cache HIT — loaded $totalCount domains via mmap in ${elapsed}ms")

    // Ensure cosmetic rules are generated even on a cache hit if the file is missing/empty
    val cssFile = File(context.filesDir, "$CACHE_DIR/cosmetic_rules.css")
    if (!cssFile.exists() || cssFile.length() == 0L) {
        Timber.d("Trie cache HIT, but cosmetic CSS is missing or 0 bytes. Recompiling...")
        compileCosmeticRules(enabledLists)
    }

    return@withContext Result.success(totalCount)
}

// ── Determine what changed ──
val savedFpMap = parseFingerprintMap(savedFingerprint)
val currentIds = currentFpMap.keys
val savedIds = savedFpMap.keys

val addedFilterIds = currentIds - savedIds
val removedFilterIds = savedIds - currentIds
val changedFilterIds = currentIds.intersect(savedIds)
    .filter { currentFpMap[it] != savedFpMap[it] }
    .toSet()

val isAddOnly = removedFilterIds.isEmpty() && changedFilterIds.isEmpty()
    && addedFilterIds.isNotEmpty()
    && adTrieFile.exists() && adTrieFile.length() > 0

if (isAddOnly) {
    // ── Strategy 2: Incremental ADD — only new filters ──
    Timber.d("Trie INCREMENTAL — ${addedFilterIds.size} new filter(s), loading existing + adding new")

    val addedFilters = enabledLists.filter { it.id in addedFilterIds }

    val adCount = incrementalAdd(
        addedFilters.filter { it.category != FilterList.CATEGORY_SECURITY },
        adTrieFile
    )
    val secCount = incrementalAdd(
        addedFilters.filter { it.category == FilterList.CATEGORY_SECURITY },
        secTrieFile
    )

    adTrie = if (adTrieFile.exists() && adTrieFile.length() > 0) {
        DomainTrie.loadFromMmap(adTrieFile)
    } else null
    securityTrie = if (secTrieFile.exists() && secTrieFile.length() > 0) {
        DomainTrie.loadFromMmap(secTrieFile)
    } else null

    saveFingerprintAndLog(
        fingerprintFile,
        currentFingerprint,
        startTime,
        "INCREMENTAL"
    )
    val totalCount = (adTrie?.size ?: 0) + (securityTrie?.size ?: 0)
    _domainCountFlow.value = totalCount
```

New code (added side of the hunk — the manual JSON manifest parsing this comment flags):

```kotlin
val objects = json.split("},").map {
    it.trim().removePrefix("[").removeSuffix("]").trim() + "}"
}

for (obj in objects) {
    val cleaned = obj.trim().removePrefix("{").removeSuffix("}").removeSuffix("},")
    if (cleaned.isBlank()) continue

    fun extractString(key: String): String? {
        val pattern = "\"$key\"\\s*:\\s*\"(.*?)\"".toRegex()
        return pattern.find(cleaned)?.groupValues?.get(1)
            ?.replace("\\u0026", "&")
    }

    fun extractInt(key: String): Int {
        val pattern = "\"$key\"\\s*:\\s*(\\d+)".toRegex()
        return pattern.find(cleaned)?.groupValues?.get(1)?.toIntOrNull() ?: 0
    }

    fun extractBoolean(key: String): Boolean {
        val pattern = "\"$key\"\\s*:\\s*(true|false)".toRegex()
        return pattern.find(cleaned)?.groupValues?.get(1) == "true"
    }

    val name = extractString("name") ?: continue
    val bloomUrl = extractString("bloomUrl") ?: continue
    val trieUrl = extractString("trieUrl") ?: continue

    results.add(
        app.pwhs.blockads.data.remote.models.FilterList(
            name = name,
            id = extractString("id") ?: name.lowercase().replace(" ", "_"),
            description = extractString("description"),
            isEnabled = extractBoolean("isEnabled"),
            isBuiltIn = extractBoolean("isBuiltIn"),
            category = extractString("category"),
            ruleCount = extractInt("ruleCount"),
            bloomUrl = bloomUrl,
            trieUrl = trieUrl,
            cssUrl = extractString("cssUrl"),
            originalUrl = extractString("originalUrl")
        )
    )
}
results
```
Fragile JSON parsing can silently break remote sync.
Line 489 through Line 531 parses JSON via string splitting and regex extraction. This is not JSON-safe (escaped quotes, commas/braces inside strings, formatting changes), so remote sync can drop/mangle entries and still continue with partial state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`
around lines 489 - 531, The current code builds "objects" by splitting the JSON
string and uses local parsers extractString/extractInt/extractBoolean over
"cleaned", which is fragile; replace this ad-hoc parsing with a proper JSON
deserializer (e.g., kotlinx.serialization, Moshi, or Gson) to parse the incoming
JSON into a typed DTO (e.g., FilterListRemote) and then map those DTO instances
to your app.pwhs.blockads.data.remote.models.FilterList; ensure you preserve the
id fallback (id ?: name.lowercase().replace(" ", "_")), copy all fields (name,
description, isEnabled, isBuiltIn, category, ruleCount, bloomUrl, trieUrl,
cssUrl, originalUrl), catch and log JSON parsing exceptions around the
deserialization so failures are visible, and remove the
objects/cleaned/extractString/extractInt/extractBoolean string-manipulation
logic.
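The recommended replacement could look roughly like this sketch, assuming a `kotlinx-serialization-json` dependency. `parseManifest` and `FilterListDto` are illustrative names; only the manifest field names are taken from the snippet above.

```kotlin
import kotlinx.serialization.json.*

// A typed intermediate for one manifest entry (subset of fields for brevity).
data class FilterListDto(
    val id: String,
    val name: String,
    val bloomUrl: String,
    val trieUrl: String,
    val isBuiltIn: Boolean,
)

fun parseManifest(json: String): List<FilterListDto> =
    Json.parseToJsonElement(json).jsonArray.mapNotNull { el ->
        val obj = el.jsonObject
        // Required fields: skip the entry if any is missing, as the old code did.
        val name = obj["name"]?.jsonPrimitive?.contentOrNull ?: return@mapNotNull null
        val bloomUrl = obj["bloomUrl"]?.jsonPrimitive?.contentOrNull ?: return@mapNotNull null
        val trieUrl = obj["trieUrl"]?.jsonPrimitive?.contentOrNull ?: return@mapNotNull null
        FilterListDto(
            id = obj["id"]?.jsonPrimitive?.contentOrNull
                ?: name.lowercase().replace(" ", "_"), // preserve the id fallback
            name = name,
            bloomUrl = bloomUrl,
            trieUrl = trieUrl,
            isBuiltIn = obj["isBuiltIn"]?.jsonPrimitive?.booleanOrNull ?: true, // model default
        )
    }
```

Unlike the regex approach, this handles escaped quotes, nested braces, and reformatted JSON, and a malformed document throws a visible parsing exception instead of silently dropping entries.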
```kotlin
if (result.isSuccess) {
    filterListDao.updateStats(id = filter.id, count = filter.ruleCount, timestamp = System.currentTimeMillis())
    loadAllEnabledFilters()
    Result.success(filter.ruleCount)
```
updateSingleFilter reports success even when reload fails.
Line 644 ignores the result of loadAllEnabledFilters(). If reload fails, this method still returns Result.success(...), leaving caller/UI in a false-success state.
💡 Proposed fix
```diff
 if (result.isSuccess) {
     filterListDao.updateStats(id = filter.id, count = filter.ruleCount, timestamp = System.currentTimeMillis())
-    loadAllEnabledFilters()
-    Result.success(filter.ruleCount)
+    val reloadResult = loadAllEnabledFilters()
+    if (reloadResult.isFailure) {
+        return@withContext Result.failure(
+            reloadResult.exceptionOrNull() ?: Exception("Failed to reload enabled filters")
+        )
+    }
+    Result.success(filter.ruleCount)
 } else {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```kotlin
if (result.isSuccess) {
    filterListDao.updateStats(id = filter.id, count = filter.ruleCount, timestamp = System.currentTimeMillis())
    val reloadResult = loadAllEnabledFilters()
    if (reloadResult.isFailure) {
        return@withContext Result.failure(
            reloadResult.exceptionOrNull() ?: Exception("Failed to reload enabled filters")
        )
    }
    Result.success(filter.ruleCount)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`
around lines 642 - 645, The method updateSingleFilter currently treats the
operation as successful if the initial result.isSuccess is true but ignores the
Result returned by loadAllEnabledFilters(), causing false success; modify
updateSingleFilter to capture the Result from loadAllEnabledFilters() (call it
reloadResult), check if reloadResult.isSuccess before returning
Result.success(filter.ruleCount), and if reloadResult.isFailure return
reloadResult (or Result.failure(reloadResult.exceptionOrNull() ?:
Exception("reload failed"))) after optionally rolling back or skipping the stats
update; also ensure filterListDao.updateStats(id = filter.id, ...) only runs
when both the initial operation and loadAllEnabledFilters() succeed so callers
receive an accurate Result.
```kotlin
suspend fun getDomainPreview(filter: FilterList, limit: Int = 100): List<String> = emptyList()

// …

        Result.failure(IllegalArgumentException("Not a valid filter list"))
    } catch (e: Exception) {
        Result.failure(e)
    }
}

suspend fun validateFilterUrl(url: String): Result<Boolean> = Result.success(true)
```
Placeholder implementations are shipping behavior regressions.
Line 675 and Line 677 currently hardcode emptyList() and Result.success(true). That bypasses domain preview and URL validation entirely, which can cause incorrect UX and invalid filter inputs to be accepted.
I can draft concrete implementations for both methods (including lightweight validation and edge-case handling) if you want me to open a follow-up patch.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`
around lines 675 - 677, The methods getDomainPreview(FilterList, limit) and
validateFilterUrl(url) currently return hardcoded placeholders which bypass real
behavior; replace them with real implementations: implement getDomainPreview to
parse the provided FilterList (use its rules or source content) to extract
domain names up to the given limit, handle network or parsing errors by
returning an empty list or partial results and log/report exceptions, and
implement validateFilterUrl to perform proper URL validation (check scheme, host
presence, percent-encoding, and optionally a lightweight reachability or HEAD
request with timeout), returning Result.success(true) only for valid URLs and
Result.failure(...) with a descriptive exception for invalid inputs; update or
add unit tests for both getDomainPreview and validateFilterUrl to cover edge
cases (empty/invalid filters, unreachable URLs, weird schemes) and remove the
placeholder returns.
```kotlin
filterRepo.seedDefaultsIfNeeded()
filterRepo.fetchAndSyncRemoteFilterLists()
val result = filterRepo.loadAllEnabledFilters()
```
Remote sync runs before the boot path waits for connectivity.
When the service starts from boot without network, `fetchAndSyncRemoteFilterLists()` fails here, `loadAllEnabledFilters()` proceeds with stale or empty binaries, and there is no retry after Line 374 resumes on connectivity. Please gate the sync behind the same network wait, or rerun it once `networkAvailableFlow.first()` completes.
Also applies to: 370-375
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt` around
lines 290 - 292, The remote sync is invoked before the boot path waits for
connectivity causing fetchAndSyncRemoteFilterLists() to fail and
loadAllEnabledFilters() to use stale/empty data; change the flow so that
filterRepo.fetchAndSyncRemoteFilterLists() is either gated behind the same
network wait (use networkAvailableFlow.first() before calling it) or add a
follow-up call to fetchAndSyncRemoteFilterLists() once
networkAvailableFlow.first() completes (e.g., after the boot connectivity resume
block around where networkAvailableFlow.first() is awaited), ensuring
loadAllEnabledFilters() runs only after a successful sync or retry.
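The ordering constraint the comment asks for is small but easy to get backwards; here is a minimal Go sketch of it, where the channel stands in for Kotlin's `networkAvailableFlow.first()` and every name (`syncAfterConnectivity`, the two callbacks) is illustrative rather than actual app code:

```go
package main

import "fmt"

// syncAfterConnectivity models the gate the review asks for: local
// filters load immediately (stale-but-usable), then the remote sync
// blocks until a network-available signal arrives.
func syncAfterConnectivity(networkUp <-chan struct{}, loadLocal, syncRemote func()) {
	loadLocal()  // always safe: uses cached binaries
	<-networkUp  // equivalent of networkAvailableFlow.first()
	syncRemote() // only now fetch fresh filter metadata
}

func main() {
	up := make(chan struct{}, 1)
	up <- struct{}{} // pretend connectivity is already available
	syncAfterConnectivity(up,
		func() { fmt.Println("loaded local filters") },
		func() { fmt.Println("synced remote filter lists") },
	)
}
```

The point of the sketch is that the local load never waits on the gate, so boot-time startup still succeeds offline; only the remote refresh is deferred.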
```diff
 e.mu.Lock()
 defer e.mu.Unlock()

 // Close old tries
-if e.adTrie != nil {
-	e.adTrie.Close()
-	e.adTrie = nil
+for _, t := range e.adTries {
+	if t != nil {
+		t.Close()
+	}
 }
-if e.secTrie != nil {
-	e.secTrie.Close()
-	e.secTrie = nil
-}
+e.adTries = nil
+
+for _, t := range e.secTries {
+	if t != nil {
+		t.Close()
+	}
+}
+e.secTries = nil

 // Close old bloom filters
-if e.adBloom != nil {
-	e.adBloom.Close()
-	e.adBloom = nil
+for _, bf := range e.adBlooms {
+	if bf != nil {
+		bf.Close()
+	}
 }
-if e.secBloom != nil {
-	e.secBloom.Close()
-	e.secBloom = nil
-}
+e.adBlooms = nil
+
+for _, bf := range e.secBlooms {
+	if bf != nil {
+		bf.Close()
+	}
+}
+e.secBlooms = nil
```
Do not unmap live tries/blooms while queries can still read them.
app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt Line 447-Line 450 now reloads tries while the engine is running, but SetTries() and Stop() immediately call Close() on the current mmaps. tunnel/trie.go Line 75-Line 85 and tunnel/bloom.go Line 150-Line 160 unmap those buffers, while IsDomainBlocked() and handleDNSQuery() keep traversing them without a shared lock. An in-flight lookup can therefore race with reload/stop and touch unmapped memory. Please swap an immutable snapshot atomically or guard both readers and writers with an RWMutex.
Also applies to: 405-431, 623-648, 734-755
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tunnel/engine.go` around lines 145 - 176, The current shutdown/reload
sequence unmaps mmapped tries/blooms while lookups may access them (SetTries,
Stop call Close on mmaps and tunnel/trie.go:t.Close and tunnel/bloom.go:t.Close
unmap buffers) causing races with readers (IsDomainBlocked, handleDNSQuery) —
fix by making the swap atomic or adding an RWMutex: change the engine to use a
single immutable snapshot struct (e.g., TriesAndBlooms) and atomically replace
the pointer when reloading, or protect all accesses (readers in
IsDomainBlocked/handleDNSQuery and writers in SetTries/Stop and the close loops
for adTries/secTries/adBlooms/secBlooms) with an RWMutex (RLock for lookups,
Lock for replacing/closing) so Close() never unmaps while readers hold RLock.
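The RWMutex variant of the fix can be sketched compactly. Everything below (the `trie` stand-in, `engine`, `isBlocked`, `setTries`) is illustrative, not the actual tunnel types; the real `Close` would unmap an mmapped buffer:

```go
package main

import (
	"fmt"
	"sync"
)

// trie stands in for the real mmapped type; Close() would unmap.
type trie struct{ closed bool }

func (t *trie) Close()               { t.closed = true }
func (t *trie) Contains(string) bool { return !t.closed }

type engine struct {
	mu      sync.RWMutex
	adTries []*trie
}

// isBlocked holds RLock for the whole traversal, so setTries can
// never unmap a trie out from under an in-flight lookup.
func (e *engine) isBlocked(domain string) bool {
	e.mu.RLock()
	defer e.mu.RUnlock()
	for _, t := range e.adTries {
		if t != nil && t.Contains(domain) {
			return true
		}
	}
	return false
}

// setTries closes the old tries only after acquiring the write lock,
// i.e. after every reader has finished its traversal.
func (e *engine) setTries(next []*trie) {
	e.mu.Lock()
	defer e.mu.Unlock()
	for _, t := range e.adTries {
		if t != nil {
			t.Close()
		}
	}
	e.adTries = next
}

func main() {
	e := &engine{adTries: []*trie{{}}}
	fmt.Println(e.isBlocked("ads.example.com"))
	e.setTries([]*trie{{}})
	fmt.Println(e.isBlocked("ads.example.com"))
}
```

The trade-off versus the atomic-snapshot approach: readers pay a (cheap, mostly uncontended) RLock per query, but `Close()` becomes trivially safe because the write lock guarantees no reader is mid-traversal when the unmap happens.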
```diff
 // Load ad tries
 for _, path := range strings.Split(adTriePathsCsv, ",") {
 	path = strings.TrimSpace(path)
 	if path == "" { continue }
 	t, err := LoadMmapTrie(path)
 	if err != nil {
-		logf("Failed to load Ad Trie: %v", err)
+		logf("Failed to load Ad Trie from %s: %v", path, err)
 	} else {
-		e.adTrie = t
-		logf("Loaded Ad Trie from Go native Mmap")
+		e.adTries = append(e.adTries, t)
+		logf("Loaded Ad Trie from Go native Mmap: %s", path)
 	}
 }

-// Load security trie
-if secTriePath != "" {
-	t, err := LoadMmapTrie(secTriePath)
+// Load security tries
+for _, path := range strings.Split(secTriePathsCsv, ",") {
+	path = strings.TrimSpace(path)
+	if path == "" { continue }
+	t, err := LoadMmapTrie(path)
 	if err != nil {
-		logf("Failed to load Security Trie: %v", err)
+		logf("Failed to load Security Trie from %s: %v", path, err)
 	} else {
-		e.secTrie = t
-		logf("Loaded Security Trie from Go native Mmap")
+		e.secTries = append(e.secTries, t)
+		logf("Loaded Security Trie from Go native Mmap: %s", path)
 	}
 }

 // Load ad bloom filter
-if adBloomPath != "" {
-	bf, err := LoadBloomFilter(adBloomPath)
+for _, path := range strings.Split(adBloomPathsCsv, ",") {
+	path = strings.TrimSpace(path)
+	if path == "" { continue }
+	bf, err := LoadBloomFilter(path)
 	if err != nil {
-		logf("Failed to load Ad Bloom Filter: %v", err)
+		logf("Failed to load Ad Bloom Filter from %s: %v", path, err)
 	} else {
-		e.adBloom = bf
-		logf("Loaded Ad Bloom Filter for fast pre-filtering")
+		e.adBlooms = append(e.adBlooms, bf)
+		logf("Loaded Ad Bloom Filter for fast pre-filtering: %s", path)
 	}
 }

 // Load security bloom filter
-if secBloomPath != "" {
-	bf, err := LoadBloomFilter(secBloomPath)
+for _, path := range strings.Split(secBloomPathsCsv, ",") {
+	path = strings.TrimSpace(path)
+	if path == "" { continue }
+	bf, err := LoadBloomFilter(path)
 	if err != nil {
-		logf("Failed to load Security Bloom Filter: %v", err)
+		logf("Failed to load Security Bloom Filter from %s: %v", path, err)
 	} else {
-		e.secBloom = bf
-		logf("Loaded Security Bloom Filter for fast pre-filtering")
+		e.secBlooms = append(e.secBlooms, bf)
+		logf("Loaded Security Bloom Filter for fast pre-filtering: %s", path)
 	}
 }
```
Preserve trie/bloom alignment when one file fails to load.
These slices are paired by index during lookup, but SetTries() compresses them independently by appending only successful loads. If bloom n fails and bloom n+1 succeeds, trie n+1 is now checked behind bloom n+1's predecessor slot, and a negative bloom result can skip the correct trie entirely. Please store trie/bloom pairs together or append nil placeholders to keep CSV order aligned.
Also applies to: 629-648, 734-755
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tunnel/engine.go` around lines 178 - 227, The code currently compresses
loaded tries and bloom filters independently (e.adTries/e.secTries vs
e.adBlooms/e.secBlooms) causing index misalignment when a file fails to load;
update the LoadMmapTrie/LoadBloomFilter loops (the blocks that append to
e.adTries, e.secTries, e.adBlooms, e.secBlooms) so that you preserve CSV order
by either (A) storing each trie+bloom as a single paired container (e.g., a
struct or tuple) and append that pair only once per CSV index, or (B) when one
side fails to load append a nil placeholder for that index to the corresponding
slice (append(nil) to e.adBlooms/e.secBlooms when LoadBloomFilter fails, and
append(nil) to e.adTries/e.secTries when LoadMmapTrie fails) so SetTries(),
lookup code, and the other referenced blocks (also at the other occurrences
noted) see aligned indices; update any lookup logic to handle nil placeholders
accordingly.
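One way to see the alignment fix: zip the two CSV lists positionally into pairs, so a failed load blanks a slot instead of shifting later indices. A minimal Go sketch, with names (`pair`, `buildPairs`) that are illustrative rather than the tunnel's actual types:

```go
package main

import "fmt"

// pair keeps a trie path and its bloom path at the same index, so a
// failed bloom load is recorded as empty without shifting the
// alignment the lookup code relies on.
type pair struct {
	triePath  string
	bloomPath string // "" means "no bloom: always consult the trie"
}

// buildPairs zips two CSV-derived lists by position; a real loader
// would blank one side of a pair on load failure rather than drop it.
func buildPairs(trieCsv, bloomCsv []string) []pair {
	pairs := make([]pair, 0, len(trieCsv))
	for i, tp := range trieCsv {
		bp := ""
		if i < len(bloomCsv) {
			bp = bloomCsv[i]
		}
		pairs = append(pairs, pair{triePath: tp, bloomPath: bp})
	}
	return pairs
}

func main() {
	pairs := buildPairs(
		[]string{"ads1.trie", "ads2.trie"},
		[]string{"ads1.bloom"}, // second bloom failed to load
	)
	for _, p := range pairs {
		fmt.Printf("%s <-> %q\n", p.triePath, p.bloomPath)
	}
}
```

With pairs, the lookup's "bloom says maybe, then check the trie" step can never consult a bloom from a different list, because the two handles travel together.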
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tunnel/engine.go (1)
734-761: ⚠️ Potential issue | 🔴 Critical
Data race: `handleDNSQuery` reads slices without synchronization.
Unlike `IsDomainBlocked` (lines 622-627), which copies the slice references under the mutex, `handleDNSQuery` directly iterates over `e.secTries`, `e.adTries`, etc. without holding the lock or copying under it. If `SetTries()` or `Stop()` runs concurrently, this is a data race on the slice headers themselves, not just use-after-free of the underlying mmapped data.
Apply the same copy-under-lock pattern used in `IsDomainBlocked`:
🔒 Proposed fix

```diff
 // Security domains
+e.mu.Lock()
+secBlooms := e.secBlooms
+secTries := e.secTries
+adBlooms := e.adBlooms
+adTries := e.adTries
+e.mu.Unlock()
+
-for i, secTrie := range e.secTries {
+for i, secTrie := range secTries {
 	if secTrie == nil { continue }
 	var secBloom *BloomFilter
-	if i < len(e.secBlooms) {
-		secBloom = e.secBlooms[i]
+	if i < len(secBlooms) {
+		secBloom = secBlooms[i]
 	}
 	if secBloom == nil || secBloom.MightContainDomainOrParent(domain) {
 		if secTrie.ContainsOrParent(domain) {
 			e.handleBlockedDomain(queryInfo, "security", appName, startTime)
 			return
 		}
 	}
 }

 // Ad domains
-for i, adTrie := range e.adTries {
+for i, adTrie := range adTries {
 	if adTrie == nil { continue }
 	var adBloom *BloomFilter
-	if i < len(e.adBlooms) {
-		adBloom = e.adBlooms[i]
+	if i < len(adBlooms) {
+		adBloom = adBlooms[i]
 	}
 	if adBloom == nil || adBloom.MightContainDomainOrParent(domain) {
 		if adTrie.ContainsOrParent(domain) {
 			e.handleBlockedDomain(queryInfo, "filter_list", appName, startTime)
 			return
 		}
 	}
 }
```

Note: This fix addresses the slice-header race. The underlying use-after-free concern (unmapping while reading) mentioned in the previous review still applies and requires the atomic-swap or RWMutex approach.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tunnel/engine.go` around lines 734 - 761, handleDNSQuery currently iterates e.secTries, e.adTries (and their corresponding e.secBlooms/e.adBlooms) directly and races with SetTries/Stop; mirror the IsDomainBlocked fix by taking the mutex, copying the slice headers into local variables (e.g. localSecTries, localSecBlooms, localAdTries, localAdBlooms), then release the lock and iterate the locals; ensure you reference the same mutex used by SetTries/Stop so the copy is done under lock before any concurrent mutation.
♻️ Duplicate comments (5)
app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt (5)
314-320: ⚠️ Potential issue | 🟡 Minor
Normalize domain input before parent/wildcard checks.
Mixed-case input can miss matches because the lookup sets are lowercased, but `domain` is used as-is in the checker path.
💡 Proposed fix

```diff
 private inline fun checkDomainAndParents(domain: String, checker: (String) -> Boolean): Boolean {
-    if (checker(domain)) return true
-    var d = domain
+    val normalized = domain.lowercase()
+    if (checker(normalized)) return true
+    var d = normalized
     while (d.contains('.')) {
         d = d.substringAfter('.')
         if (checker(d)) return true
         if (checker("*.$d")) return true
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt` around lines 314 - 320, In checkDomainAndParents normalize the incoming domain (and any derived parent d) to a canonical lowercase form before running checker so mixed-case input doesn't miss lowercased lookup keys; update the function to lowercase/trim the initial domain, then use that normalized variable in the initial checker(domain) call and in the loop when computing d and the wildcard "*.$d" checks (keep using the same checker parameter).
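For illustration only, the same normalize-then-walk logic as standalone Go (the real code is the Kotlin helper above; `checker` stands in for membership in the lowercased lookup sets):

```go
package main

import (
	"fmt"
	"strings"
)

// checkDomainAndParents lowercases once, then walks parent labels,
// testing both the bare parent and its "*." wildcard form, mirroring
// the Kotlin helper flagged in the review.
func checkDomainAndParents(domain string, checker func(string) bool) bool {
	d := strings.ToLower(strings.TrimSpace(domain))
	if checker(d) {
		return true
	}
	for strings.Contains(d, ".") {
		d = d[strings.Index(d, ".")+1:]
		if checker(d) || checker("*."+d) {
			return true
		}
	}
	return false
}

func main() {
	blocked := map[string]bool{"*.doubleclick.net": true}
	checker := func(s string) bool { return blocked[s] }
	fmt.Println(checkDomainAndParents("Ads.DoubleClick.NET", checker))
}
```

The one-time lowercase at the top is the whole fix: every subsequent parent and wildcard probe inherits the normalized form, so no case-sensitive path survives.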
642-645: ⚠️ Potential issue | 🟠 Major
Do not return success if the enabled-filter reload fails.
The `loadAllEnabledFilters()` result is ignored, so this can report success even when the runtime filter state was not reloaded.
💡 Proposed fix

```diff
 if (result.isSuccess) {
-    filterListDao.updateStats(id = filter.id, count = filter.ruleCount, timestamp = System.currentTimeMillis())
-    loadAllEnabledFilters()
-    Result.success(filter.ruleCount)
+    val reloadResult = loadAllEnabledFilters()
+    if (reloadResult.isFailure) {
+        return@withContext Result.failure(
+            reloadResult.exceptionOrNull() ?: Exception("Failed to reload enabled filters")
+        )
+    }
+    filterListDao.updateStats(id = filter.id, count = filter.ruleCount, timestamp = System.currentTimeMillis())
+    Result.success(filter.ruleCount)
 } else {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt` around lines 642 - 645, The code currently treats the operation as successful when result.isSuccess even though loadAllEnabledFilters() may fail; modify the success path so that after filterListDao.updateStats(...) you call loadAllEnabledFilters(), capture its Result, and only return Result.success(filter.ruleCount) if that Result is also success; if loadAllEnabledFilters() returns failure propagate or return that failure (or combine errors) instead of unconditionally returning Result.success; update the block that currently returns Result.success(filter.ruleCount) to check the Result from loadAllEnabledFilters() and handle failure accordingly.
608-610: ⚠️ Potential issue | 🟠 Major
Ensure stale cosmetic CSS is deleted when only security lists are enabled.
The early return skips cleanup, so an old `cosmetic_rules.css` can remain active.
💡 Proposed fix

```diff
 val validLists = enabledLists.filter { it.category != FilterList.CATEGORY_SECURITY }
-if (validLists.isEmpty()) return@withContext
+if (validLists.isEmpty()) {
+    File(context.filesDir, "$CACHE_DIR/cosmetic_rules.css").delete()
+    return@withContext
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt` around lines 608 - 610, The early return after computing validLists prevents cleanup of stale cosmetic_rules.css when only security lists are enabled; instead, check if validLists.isEmpty() and if so delete the cosmetic CSS (the existing cleanup logic that removes "cosmetic_rules.css") before returning. Update the block around enabledLists / validLists (references: enabledLists, validLists, FilterList.CATEGORY_SECURITY) to perform the cosmetic_rules.css deletion when validLists.isEmpty() (or when all enabledLists have category == CATEGORY_SECURITY) and then return.
486-535: ⚠️ Potential issue | 🟠 Major
Replace ad-hoc JSON parsing with a real JSON decoder.
Splitting by `"},` and regex extraction is not JSON-safe (escaped quotes/braces, formatting changes), so sync can silently drop or mangle entries.
💡 Proposed fix

```diff
+import org.json.JSONArray
+
 private fun parseRemoteFilterJson(json: String): List<app.pwhs.blockads.data.remote.models.FilterList> {
     return try {
-        val results = mutableListOf<app.pwhs.blockads.data.remote.models.FilterList>()
-        val objects = json.split("},").map {
-            it.trim().removePrefix("[").removeSuffix("]").trim() + "}"
-        }
-
-        for (obj in objects) {
-            val cleaned = obj.trim().removePrefix("{").removeSuffix("}").removeSuffix("},")
-            if (cleaned.isBlank()) continue
-
-            fun extractString(key: String): String? {
-                val pattern = "\"$key\"\\s*:\\s*\"(.*?)\"".toRegex()
-                return pattern.find(cleaned)?.groupValues?.get(1)
-                    ?.replace("\\u0026", "&")
-            }
-            fun extractInt(key: String): Int {
-                val pattern = "\"$key\"\\s*:\\s*(\\d+)".toRegex()
-                return pattern.find(cleaned)?.groupValues?.get(1)?.toIntOrNull() ?: 0
-            }
-            fun extractBoolean(key: String): Boolean {
-                val pattern = "\"$key\"\\s*:\\s*(true|false)".toRegex()
-                return pattern.find(cleaned)?.groupValues?.get(1) == "true"
-            }
-
-            val name = extractString("name") ?: continue
-            val bloomUrl = extractString("bloomUrl") ?: continue
-            val trieUrl = extractString("trieUrl") ?: continue
-
-            results.add(
-                app.pwhs.blockads.data.remote.models.FilterList(
-                    name = name,
-                    id = extractString("id") ?: name.lowercase().replace(" ", "_"),
-                    description = extractString("description"),
-                    isEnabled = extractBoolean("isEnabled"),
-                    isBuiltIn = extractBoolean("isBuiltIn"),
-                    category = extractString("category"),
-                    ruleCount = extractInt("ruleCount"),
-                    bloomUrl = bloomUrl,
-                    trieUrl = trieUrl,
-                    cssUrl = extractString("cssUrl"),
-                    originalUrl = extractString("originalUrl")
-                )
-            )
-        }
-        results
+        val arr = JSONArray(json)
+        buildList {
+            for (i in 0 until arr.length()) {
+                val o = arr.optJSONObject(i) ?: continue
+                val name = o.optString("name").takeIf { it.isNotBlank() } ?: continue
+                val bloomUrl = o.optString("bloomUrl").takeIf { it.isNotBlank() } ?: continue
+                val trieUrl = o.optString("trieUrl").takeIf { it.isNotBlank() } ?: continue
+                add(
+                    app.pwhs.blockads.data.remote.models.FilterList(
+                        name = name,
+                        id = o.optString("id").ifBlank { name.lowercase().replace(" ", "_") },
+                        description = o.optString("description").ifBlank { null },
+                        isEnabled = o.optBoolean("isEnabled"),
+                        isBuiltIn = o.optBoolean("isBuiltIn"),
+                        category = o.optString("category").ifBlank { null },
+                        ruleCount = o.optInt("ruleCount", 0),
+                        bloomUrl = bloomUrl,
+                        trieUrl = trieUrl,
+                        cssUrl = o.optString("cssUrl").ifBlank { null },
+                        originalUrl = o.optString("originalUrl").ifBlank { null }
+                    )
+                )
+            }
+        }
     } catch (e: Exception) {
         Timber.e(e, "Failed to parse remote filter JSON")
         emptyList()
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt` around lines 486 - 535, The current parseRemoteFilterJson function must be replaced with proper JSON decoding: stop manual splitting/regex and use a JSON library (kotlinx.serialization, Moshi or Gson) to deserialize the input string directly into List<app.pwhs.blockads.data.remote.models.FilterList>; ensure the FilterList data class has the correct `@Serializable/`@Json annotation mappings for fields like name, id, description, isEnabled, isBuiltIn, category, ruleCount, bloomUrl, trieUrl, cssUrl, originalUrl, and preserve the existing fallback behavior for id (generate from name if missing) and error handling (log with Timber.e and return emptyList on failure). Locate parseRemoteFilterJson and update its implementation to call the JSON decoder and map/validate entries instead of using extractString/extractInt/extractBoolean helpers.
674-676: ⚠️ Potential issue | 🟠 Major
Placeholder stubs ship behavioral regressions.
`getDomainPreview()` always returns an empty list and `validateFilterUrl()` always returns success, which bypasses core UX and validation paths.
I can draft concrete implementations (with edge-case handling and tests) in a follow-up patch if you want.
🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt` around lines 674 - 676, The two placeholder stubs in FilterListRepository—getDomainPreview(filter: FilterList, limit: Int = 100) and validateFilterUrl(url: String)—must be replaced with real implementations: implement getDomainPreview to load the filter content (from filter.source or the repository's storage), parse rules to extract domains (apply the existing rule-parsing utility or regex used elsewhere), deduplicate and respect the limit, and return the list or an empty list on recoverable errors; implement validateFilterUrl to perform real validation (e.g., check URL format, perform a lightweight HTTP HEAD/GET with timeout or use existing network client, verify content-type/status and optionally basic rule parsing to ensure it looks like a filter) and return Result.success(true) only when validations pass or Result.failure(exception) with descriptive errors on failure, ensuring exceptions are caught and converted to Result to avoid bypassing UX and validation paths in FilterListRepository.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tunnel/engine.go`:
- Around line 734-761: handleDNSQuery currently iterates e.secTries, e.adTries
(and their corresponding e.secBlooms/e.adBlooms) directly and races with
SetTries/Stop; mirror the IsDomainBlocked fix by taking the mutex, copying the
slice headers into local variables (e.g. localSecTries, localSecBlooms,
localAdTries, localAdBlooms), then release the lock and iterate the locals;
ensure you reference the same mutex used by SetTries/Stop so the copy is done
under lock before any concurrent mutation.
---
Duplicate comments:
In `@app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt`:
- Around line 314-320: In checkDomainAndParents normalize the incoming domain
(and any derived parent d) to a canonical lowercase form before running checker
so mixed-case input doesn't miss lowercased lookup keys; update the function to
lowercase/trim the initial domain, then use that normalized variable in the
initial checker(domain) call and in the loop when computing d and the wildcard
"*.$d" checks (keep using the same checker parameter).
- Around line 642-645: The code currently treats the operation as successful
when result.isSuccess even though loadAllEnabledFilters() may fail; modify the
success path so that after filterListDao.updateStats(...) you call
loadAllEnabledFilters(), capture its Result, and only return
Result.success(filter.ruleCount) if that Result is also success; if
loadAllEnabledFilters() returns failure propagate or return that failure (or
combine errors) instead of unconditionally returning Result.success; update the
block that currently returns Result.success(filter.ruleCount) to check the
Result from loadAllEnabledFilters() and handle failure accordingly.
- Around line 608-610: The early return after computing validLists prevents
cleanup of stale cosmetic_rules.css when only security lists are enabled;
instead, check if validLists.isEmpty() and if so delete the cosmetic CSS (the
existing cleanup logic that removes "cosmetic_rules.css") before returning.
Update the block around enabledLists / validLists (references: enabledLists,
validLists, FilterList.CATEGORY_SECURITY) to perform the cosmetic_rules.css
deletion when validLists.isEmpty() (or when all enabledLists have category ==
CATEGORY_SECURITY) and then return.
- Around line 486-535: The current parseRemoteFilterJson function must be
replaced with proper JSON decoding: stop manual splitting/regex and use a JSON
library (kotlinx.serialization, Moshi or Gson) to deserialize the input string
directly into List<app.pwhs.blockads.data.remote.models.FilterList>; ensure the
FilterList data class has the correct `@Serializable/`@Json annotation mappings
for fields like name, id, description, isEnabled, isBuiltIn, category,
ruleCount, bloomUrl, trieUrl, cssUrl, originalUrl, and preserve the existing
fallback behavior for id (generate from name if missing) and error handling (log
with Timber.e and return emptyList on failure). Locate parseRemoteFilterJson and
update its implementation to call the JSON decoder and map/validate entries
instead of using extractString/extractInt/extractBoolean helpers.
- Around line 674-676: The two placeholder stubs in
FilterListRepository—getDomainPreview(filter: FilterList, limit: Int = 100) and
validateFilterUrl(url: String)—must be replaced with real implementations:
implement getDomainPreview to load the filter content (from filter.source or the
repository's storage), parse rules to extract domains (apply the existing
rule-parsing utility or regex used elsewhere), deduplicate and respect the
limit, and return the list or an empty list on recoverable errors; implement
validateFilterUrl to perform real validation (e.g., check URL format, perform a
lightweight HTTP HEAD/GET with timeout or use existing network client, verify
content-type/status and optionally basic rule parsing to ensure it looks like a
filter) and return Result.success(true) only when validations pass or
Result.failure(exception) with descriptive errors on failure, ensuring
exceptions are caught and converted to Result to avoid bypassing UX and
validation paths in FilterListRepository.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 993e5637-a753-4410-aa5f-20936cc02b1f
⛔ Files ignored due to path filters (1)
`app/libs/tunnel-sources.jar` is excluded by `!**/*.jar`
📒 Files selected for processing (8)
- app/libs/tunnel.aar
- app/src/main/java/app/pwhs/blockads/data/repository/CosmeticRuleParser.kt
- app/src/main/java/app/pwhs/blockads/data/repository/DomainTrie.kt
- app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/component/DomainDetailBottomSheet.kt
- app/src/test/java/app/pwhs/blockads/data/repository/CosmeticRuleParserTest.kt
- app/src/test/java/app/pwhs/blockads/util/WireGuardConfigParserTest.kt
- tunnel/engine.go
💤 Files with no reviewable changes (4)
- app/src/main/java/app/pwhs/blockads/data/repository/CosmeticRuleParser.kt
- app/src/test/java/app/pwhs/blockads/data/repository/CosmeticRuleParserTest.kt
- app/src/test/java/app/pwhs/blockads/util/WireGuardConfigParserTest.kt
- app/src/main/java/app/pwhs/blockads/data/repository/DomainTrie.kt
♻️ Duplicate comments (1)
app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt (1)
291-291: ⚠️ Potential issue | 🟠 Major
Boot startup still syncs remote filters before connectivity is guaranteed.
Line 291 runs the remote sync before the boot connectivity gate at Lines 370-375. Because `fetchAndSyncRemoteFilterLists()` swallows exceptions in `app/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.kt:423-480`, boot can continue with stale filter metadata and no post-connectivity retry.
Proposed fix (gate sync before load during boot-without-network)

```diff
 filterRepo.loadWhitelist()
 filterRepo.loadCustomRules()
 filterRepo.seedDefaultsIfNeeded()
+if (startedFromBoot && networkMonitor?.isNetworkAvailable() == false) {
+    connectingPhase = getString(R.string.vpn_phase_waiting_network)
+    updateNotification()
+    Timber.d("Waiting for network before syncing remote filter lists...")
+    networkAvailableFlow.first()
+}
 filterRepo.fetchAndSyncRemoteFilterLists()
 val result = filterRepo.loadAllEnabledFilters()
 Timber.d("Filters loaded: ${result.getOrDefault(0)} domains")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt` at line 291, The startup currently calls filterRepo.fetchAndSyncRemoteFilterLists() before the boot-time connectivity gate in AdBlockVpnService, allowing swallowed exceptions in FilterListRepository.fetchAndSyncRemoteFilterLists() to leave stale filters and no retry; change startup to first load local filters (keep existing local-load call) and defer or schedule fetchAndSyncRemoteFilterLists() until after the boot connectivity check completes (the connectivity gate block in AdBlockVpnService), and/or add a retry trigger there so failures from FilterListRepository.fetchAndSyncRemoteFilterLists() result in a post-connectivity retry rather than being silently ignored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt`:
- Line 291: The startup currently calls
filterRepo.fetchAndSyncRemoteFilterLists() before the boot-time connectivity
gate in AdBlockVpnService, allowing swallowed exceptions in
FilterListRepository.fetchAndSyncRemoteFilterLists() to leave stale filters and
no retry; change startup to first load local filters (keep existing local-load
call) and defer or schedule fetchAndSyncRemoteFilterLists() until after the boot
connectivity check completes (the connectivity gate block in AdBlockVpnService),
and/or add a retry trigger there so failures from
FilterListRepository.fetchAndSyncRemoteFilterLists() result in a
post-connectivity retry rather than being silently ignored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eed59e7d-d6c6-4930-a7fa-6dae34c2ce2c
📒 Files selected for processing (27)
- app/src/main/java/app/pwhs/blockads/MainActivity.kt
- app/src/main/java/app/pwhs/blockads/service/AdBlockVpnService.kt
- app/src/main/java/app/pwhs/blockads/service/GoTunnelAdapter.kt
- app/src/main/java/app/pwhs/blockads/ui/appearance/AppearanceViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/customrules/CustomRulesViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/filter/component/FilterItem.kt
- app/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/home/HomeScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/home/component/StatsChart.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/LogViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/component/DomainDetailBottomSheet.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/component/LogEntryItem.kt
- app/src/main/java/app/pwhs/blockads/ui/onboarding/OnboardingScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/profile/component/ProfileItem.kt
- app/src/main/java/app/pwhs/blockads/ui/profile/component/ScheduleItem.kt
- app/src/main/java/app/pwhs/blockads/ui/settings/SettingsViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/statistics/StatisticsScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/wireguard/WireGuardImportViewModel.kt
- app/src/main/java/app/pwhs/blockads/utils/AppConstants.kt
- app/src/main/java/app/pwhs/blockads/utils/AppNameResolver.kt
- app/src/main/java/app/pwhs/blockads/utils/BatteryMonitor.kt
- app/src/main/java/app/pwhs/blockads/utils/CustomRuleParser.kt
- app/src/main/java/app/pwhs/blockads/utils/FormatUtil.kt
- app/src/main/java/app/pwhs/blockads/utils/LocaleHelper.kt
- app/src/main/java/app/pwhs/blockads/utils/WireGuardConfigParser.kt
- app/src/main/java/app/pwhs/blockads/widget/AdBlockWidgetProvider.kt
- app/src/main/java/app/pwhs/blockads/worker/DailySummaryWorker.kt
✅ Files skipped from review due to trivial changes (18)
- app/src/main/java/app/pwhs/blockads/ui/profile/component/ScheduleItem.kt
- app/src/main/java/app/pwhs/blockads/worker/DailySummaryWorker.kt
- app/src/main/java/app/pwhs/blockads/widget/AdBlockWidgetProvider.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/LogViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/wireguard/WireGuardImportViewModel.kt
- app/src/main/java/app/pwhs/blockads/utils/CustomRuleParser.kt
- app/src/main/java/app/pwhs/blockads/ui/customrules/CustomRulesViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/onboarding/OnboardingScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/appearance/AppearanceViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/settings/SettingsViewModel.kt
- app/src/main/java/app/pwhs/blockads/ui/home/component/StatsChart.kt
- app/src/main/java/app/pwhs/blockads/utils/WireGuardConfigParser.kt
- app/src/main/java/app/pwhs/blockads/MainActivity.kt
- app/src/main/java/app/pwhs/blockads/ui/filter/component/FilterItem.kt
- app/src/main/java/app/pwhs/blockads/ui/statistics/StatisticsScreen.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/component/DomainDetailBottomSheet.kt
- app/src/main/java/app/pwhs/blockads/ui/logs/component/LogEntryItem.kt
- app/src/main/java/app/pwhs/blockads/ui/home/HomeScreen.kt
🚧 Files skipped from review as they are similar to previous changes (1)
- app/src/main/java/app/pwhs/blockads/service/GoTunnelAdapter.kt
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.kt (1)
77-100: ⚠️ Potential issue | 🟠 Major — The custom filter name is currently dropped.
This flow still asks the user for `name`, but only `trimmedUrl` is sent to `CustomFilterManager`. Right now the label survives only in the toast, not in the stored filter. Either thread `name` through the manager and persist it, or remove the field from the dialog.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.kt` around lines 77 - 100, The addFilterList flow currently drops the provided name because only trimmedUrl is passed to customFilterManager.addCustomFilter; update the code and persistence so the name is stored: change the API on CustomFilterManager.addCustomFilter to accept (name: String, url: String) and propagate that through its implementation and the entity/DAO layer so the CustomFilter record persists the label, then update this ViewModel call (addFilterList) to pass both name and trimmedUrl and adjust any other callers; alternatively, if labels are not desired, remove the name parameter from the dialog and all related callers to keep code consistent.
🧹 Nitpick comments (2)
app/src/main/java/app/pwhs/blockads/data/repository/CustomFilterManager.kt (2)
106-148: Consider atomic DB+file operations to prevent orphan records. If the file copy (lines 116-130) fails after `filterListDao.insert()` (line 106), the database contains an orphan entry with empty URLs. While `loadAllEnabledFilters()` skips entries with empty `bloomUrl`/`trieUrl`, this leaves stale DB records that may confuse users viewing their filter list. Consider wrapping the insert and file operations in a transaction with rollback on failure, or deleting the inserted record in the catch block.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/CustomFilterManager.kt` around lines 106 - 148, The current flow inserts via filterListDao.insert(filterEntity) then copies files, so a copy failure leaves an orphan DB row; change to an atomic approach: either perform the file copy first to temporary files and then run a DB transaction that inserts the entity and updates bloomUrl/trieUrl/cssUrl (use filterListDao.insert and filterListDao.update inside a transaction), or wrap the existing insert+file copy in a try/catch and on any exception delete the inserted row (filterListDao.deleteById/appropriate DAO delete) and rethrow/log; ensure extractDir.deleteRecursively() still runs in finally and that Result.success(updatedEntity) is only returned after all operations complete.
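The compensating-delete variant the prompt describes can be sketched in plain Kotlin. `FakeFilterListDao`, `FilterEntity`, and `addFilter` below are illustrative stand-ins for the Room DAO and manager code, not the app's actual types:

```kotlin
import java.io.File

// Hypothetical stand-ins for the Room DAO and entity; names are illustrative only.
data class FilterEntity(val id: String, var trieUrl: String = "")

class FakeFilterListDao {
    val rows = mutableMapOf<String, FilterEntity>()
    fun insert(e: FilterEntity) { rows[e.id] = e }
    fun deleteById(id: String) { rows.remove(id) }
}

/**
 * Insert-then-copy with a compensating delete: if the file copy fails after
 * the DB insert, the inserted row is removed so no orphan record survives.
 */
fun addFilter(dao: FakeFilterListDao, entity: FilterEntity, src: File, dest: File): Result<FilterEntity> {
    dao.insert(entity)
    return try {
        src.copyTo(dest, overwrite = true)  // throws if src is missing
        entity.trieUrl = dest.absolutePath
        Result.success(entity)
    } catch (e: Exception) {
        dao.deleteById(entity.id)           // roll back the orphan row
        Result.failure(e)
    }
}
```

A true Room `@Transaction` cannot span filesystem operations, so a compensating delete (or copy-then-insert ordering) is the usual pattern here.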
251-268: Regex-based JSON parsing is fragile but acceptable for this scope. The `parseInfoJson` function uses regex extraction, which can break on escaped quotes or complex JSON structures. However, since `info.json` is generated by your own backend API with a predictable format, and there's a fallback path (lines 80-87), this is acceptable. Consider migrating to `kotlinx.serialization` or `Gson` if the JSON format becomes more complex.
Verify each finding against the current code and only fix it if needed. In `@app/src/main/java/app/pwhs/blockads/data/repository/CustomFilterManager.kt` around lines 251 - 268, The regex-based parsing in parseInfoJson (and its helpers extractString/extractInt) is fragile; replace it with a proper JSON deserialization using a library (kotlinx.serialization or Gson) to parse into the FilterInfo data class: implement a JSON decoder that reads name, url, ruleCount, and updatedAt and returns FilterInfo with the same default fallbacks, and remove the regex helpers so parsing is robust to escaped quotes and formatting changes.
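To see why the regex approach is fragile, here is a minimal extractor in the style of an `extractString` helper (the pattern is an assumption for illustration, not the app's actual regex), mis-parsing a value that contains a JSON-escaped quote:

```kotlin
// Illustrative only: a naive key/value regex in the extractString style.
fun extractString(json: String, key: String): String? =
    Regex("\"$key\"\\s*:\\s*\"([^\"]*)\"").find(json)?.groupValues?.get(1)

fun main() {
    val ok = """{"name":"EasyList","url":"https://example.com/list.txt"}"""
    println(extractString(ok, "name"))  // works for simple values

    // An escaped quote inside the value truncates the match at the backslash:
    val tricky = """{"name":"My \"custom\" list"}"""
    println(extractString(tricky, "name"))  // truncated at the escaped quote
}
```

A real JSON parser handles the escape sequence; the character-class regex cannot, which is exactly the failure mode the review flags.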
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/main/java/app/pwhs/blockads/data/remote/api/CustomFilterApi.kt`:
- Around line 45-52: The current CustomFilterApi code constructs JSON by string
interpolation (requestBody = """{"url":"$filterUrl"}""") and parses responses
with regex, which breaks on quotes/escapes and silently defaults missing fields;
replace the manual JSON handling in CustomFilterApi: when sending use
JSONObject().put("url", filterUrl).toString() instead of requestBody and set
that as the POST body for client.post to BUILD_ENDPOINT; when receiving parse
with JSONObject(responseText) and read fields using optString/optInt/optLong
(e.g., for ruleCount and fileSize) and add explicit validation checks that throw
a descriptive exception if required fields are absent or invalid (do not fall
back to silent defaults). Ensure you update all places that build the request
and parse the response (the client.post block and the response parsing around
responseText, ruleCount, fileSize) to use org.json.JSONObject.
In `@app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt`:
- Around line 40-56: Stage the .bloom and .trie downloads to temporary files
first and only rename/move them into their final names together once both
succeed: use downloadFile to write to temp paths (e.g., bloomFileTemp,
trieFileTemp), verify both temp files exist, then atomically move/rename them to
bloomFile and trieFile (using atomic Files.move/rename) so the pair is promoted
together before returning Result.success(DownloadedFilterPaths(...)); if either
download fails, delete any temp files and return Result.failure. Apply the same
staging-and-atomic-promotion logic for the other symmetric download block in
this file (the other downloadFile pair around the same code paths).
In `@app/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailScreen.kt`:
- Around line 339-376: Replace hardcoded UI text and the hardcoded green color
in FilterDetailScreen composables with resource-backed and theme-backed values:
add string resources for "Blocked Requests", "Test a Domain" and any other new
literals (the Text(...) calls in FilterDetailScreen.kt) and replace the inline
string literals with stringResource(R.string.<name>), and replace the hardcoded
success/allowed color with your theme color/token (e.g.,
MaterialTheme.colorScheme.<successToken> or your app's success color token) so
the copy is localizable and the allowed-state color follows the Material theme;
update references in both the block around the "Blocked
Requests"/formatCount(blockedCount) Text and the other section noted (around the
"Test a Domain" UI and lines ~395-425).
In
`@app/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailViewModel.kt`:
- Around line 60-69: The testDomain function can leave _isTestingDomain true if
filterRepo.checkDomainInFilter throws; wrap the suspension call inside a
try-finally inside the viewModelScope.launch in testDomain so that
_isTestingDomain.value is set to false in the finally block; keep setting
_testDomainResult from filterRepo.checkDomainInFilter inside the try and
optionally catch exceptions to map to a failure result before the finally,
referencing testDomain, viewModelScope.launch, _isTestingDomain,
_testDomainResult, and filterRepo.checkDomainInFilter.
In `@app/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.kt`:
- Around line 88-91: The loading flags are not reset if an exception is thrown;
wrap the async operations that set _isAddingCustomFilter and _isUpdatingFilter
in try/finally blocks so the flags are always set back to false. Specifically,
in the code around _isAddingCustomFilter with
customFilterManager.addCustomFilter(trimmedUrl) ensure you set
_isAddingCustomFilter.value = true before the call and move
_isAddingCustomFilter.value = false into a finally block, and apply the same
pattern inside updateAllFilters() for _isUpdatingFilter so any thrown exception
still clears the busy state.
In `@app/src/main/java/app/pwhs/blockads/utils/ZipUtils.kt`:
- Around line 41-42: The zip-slip guard using startsWith(canonicalDest) is too
permissive; update the extraction check (around the code setting
destDir.mkdirs() and val canonicalDest = destDir.canonicalPath) to ensure the
entry's canonical path is either exactly equal to canonicalDest or starts with
canonicalDest + File.separator (use File.separatorChar) before writing files;
apply the same stricter check in the second extraction block referenced at lines
71-75 so entryFile.canonicalPath cannot escape the destination directory.
---
Outside diff comments:
In `@app/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.kt`:
- Around line 77-100: The addFilterList flow currently drops the provided name
because only trimmedUrl is passed to customFilterManager.addCustomFilter; update
the code and persistence so the name is stored: change the API on
CustomFilterManager.addCustomFilter to accept (name: String, url: String) and
propagate that through its implementation and the entity/DAO layer so the
CustomFilter record persists the label, then update this ViewModel call
(addFilterList) to pass both name and trimmedUrl and adjust any other callers;
alternatively, if labels are not desired, remove the name parameter from the
dialog and all related callers to keep code consistent.
---
Nitpick comments:
In `@app/src/main/java/app/pwhs/blockads/data/repository/CustomFilterManager.kt`:
- Around line 106-148: The current flow inserts via
filterListDao.insert(filterEntity) then copies files, so a copy failure leaves
an orphan DB row; change to an atomic approach: either perform the file copy
first to temporary files and then run a DB transaction that inserts the entity
and updates bloomUrl/trieUrl/cssUrl (use filterListDao.insert and
filterListDao.update inside a transaction), or wrap the existing insert+file
copy in a try/catch and on any exception delete the inserted row
(filterListDao.deleteById/appropriate DAO delete) and rethrow/log; ensure
extractDir.deleteRecursively() still runs in finally and that
Result.success(updatedEntity) is only returned after all operations complete.
- Around line 251-268: The regex-based parsing in parseInfoJson (and its helpers
extractString/extractInt) is fragile; replace it with a proper JSON
deserialization using a library (kotlinx.serialization or Gson) to parse into
the FilterInfo data class: implement a JSON decoder that reads name, url,
ruleCount, and updatedAt and returns FilterInfo with the same default fallbacks,
and remove the regex helpers so parsing is robust to escaped quotes and
formatting changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55503ace-4a74-4aba-837d-c3ea1530a0ac
📒 Files selected for processing (14)
app/libs/tunnel.aarapp/src/main/java/app/pwhs/blockads/data/dao/FilterListDao.ktapp/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.ktapp/src/main/java/app/pwhs/blockads/data/remote/api/CustomFilterApi.ktapp/src/main/java/app/pwhs/blockads/data/remote/models/FilterList.ktapp/src/main/java/app/pwhs/blockads/data/repository/CustomFilterManager.ktapp/src/main/java/app/pwhs/blockads/data/repository/FilterListRepository.ktapp/src/main/java/app/pwhs/blockads/di/AppModule.ktapp/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupScreen.ktapp/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.ktapp/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailScreen.ktapp/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailViewModel.ktapp/src/main/java/app/pwhs/blockads/utils/ZipUtils.kttunnel/engine.go
✅ Files skipped from review due to trivial changes (1)
- app/src/main/java/app/pwhs/blockads/data/remote/models/FilterList.kt
```kotlin
val endpoint = if (force) "$BUILD_ENDPOINT?force=true" else BUILD_ENDPOINT
val requestBody = """{"url":"$filterUrl"}"""

Timber.d("Calling build API: $endpoint with url=$filterUrl")

val response = client.post(endpoint) {
    contentType(ContentType.Application.Json)
    setBody(requestBody)
```
Use a JSON library instead of manual string interpolation and regex parsing.
Interpolating filterUrl into a JSON string causes syntax errors when the URL contains quotes or backslashes. The regex parser fails to correctly extract values that contain escaped content (e.g., \" inside response fields), and the silent defaults of 0 for missing ruleCount and fileSize mask actual data failures instead of failing fast.
Replace the manual string construction and parsing with org.json.JSONObject:
- Use `JSONObject().put("url", filterUrl).toString()` for requests (properly escapes all characters)
- Use `JSONObject(responseText)` for parsing, with `optString()`, `optInt()`, `optLong()` methods, and add explicit validation to throw exceptions when required fields are missing or invalid
Example fix
```diff
+import org.json.JSONObject
@@
-        val requestBody = """{"url":"$filterUrl"}"""
+        val requestBody = JSONObject()
+            .put("url", filterUrl)
+            .toString()
@@
-        val status = extractString(responseText, "status")
+        val json = JSONObject(responseText)
+        val status = json.optString("status")
         if (status != "success") {
             throw CustomFilterException("Build failed with status: $status")
         }
-        val downloadUrl = extractString(responseText, "downloadUrl")
-            ?: throw CustomFilterException("Missing downloadUrl in response")
-        val ruleCount = extractInt(responseText, "ruleCount")
-        val fileSize = extractLong(responseText, "fileSize")
+        val downloadUrl = json.optString("downloadUrl")
+            .takeIf { it.isNotBlank() }
+            ?: throw CustomFilterException("Missing downloadUrl in response")
+        val ruleCount = json.optInt("ruleCount", -1)
+            .takeIf { it >= 0 }
+            ?: throw CustomFilterException("Missing ruleCount in response")
+        val fileSize = json.optLong("fileSize", -1L)
+            .takeIf { it >= 0L }
+            ?: throw CustomFilterException("Missing fileSize in response")
```

Applies to lines 45-52, 64-77, and 87-101.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/remote/api/CustomFilterApi.kt`
around lines 45 - 52, The current CustomFilterApi code constructs JSON by string
interpolation (requestBody = """{"url":"$filterUrl"}""") and parses responses
with regex, which breaks on quotes/escapes and silently defaults missing fields;
replace the manual JSON handling in CustomFilterApi: when sending use
JSONObject().put("url", filterUrl).toString() instead of requestBody and set
that as the POST body for client.post to BUILD_ENDPOINT; when receiving parse
with JSONObject(responseText) and read fields using optString/optInt/optLong
(e.g., for ruleCount and fileSize) and add explicit validation checks that throw
a descriptive exception if required fields are absent or invalid (do not fall
back to silent defaults). Ensure you update all places that build the request
and parse the response (the client.post block and the response parsing around
responseText, ruleCount, fileSize) to use org.json.JSONObject.
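The escaping problem is easy to reproduce in plain Kotlin. The `escapeJson` helper below is a simplified stand-in for the escaping `org.json.JSONObject` performs (it handles only quotes and backslashes), used to contrast naive interpolation with a properly escaped body:

```kotlin
// Naive interpolation in the style of the flagged code: breaks on quotes.
fun naiveBody(url: String) = """{"url":"$url"}"""

// Simplified stand-in for a JSON library's string escaping.
fun escapeJson(s: String) = buildString {
    for (c in s) when (c) {
        '"' -> append("\\\"")
        '\\' -> append("\\\\")
        else -> append(c)
    }
}

fun safeBody(url: String) = """{"url":"${escapeJson(url)}"}"""

fun main() {
    val url = """https://example.com/list?q="ads""""
    println(naiveBody(url))  // quote in the URL terminates the JSON string early: invalid body
    println(safeBody(url))   // quotes escaped: parseable JSON
}
```

In the real fix the library does this (plus Unicode and control-character escapes), which is why hand-rolled interpolation should be avoided entirely.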
```kotlin
val bloomFile = File(filterDir, "${filter.id}.bloom")
val trieFile = File(filterDir, "${filter.id}.trie")
val cssFile = File(filterDir, "${filter.id}.css")

val bloomPath = if (!filter.bloomUrl.isNullOrEmpty()) downloadFile(filter.bloomUrl!!, bloomFile, forceUpdate) else null
val triePath = if (!filter.trieUrl.isNullOrEmpty()) downloadFile(filter.trieUrl!!, trieFile, forceUpdate) else null

var cssPath: String? = null
if (!filter.cssUrl.isNullOrEmpty()) {
    cssPath = downloadFile(filter.cssUrl, cssFile, forceUpdate)
}

if (bloomPath != null && triePath != null) {
    Result.success(DownloadedFilterPaths(bloomPath, triePath, cssPath))
} else {
    Result.failure(Exception("Failed to download core filter files (.bloom or .trie) for ${filter.id}"))
}
```
Download the core pair atomically.
`.bloom` and `.trie` are refreshed independently into their final `remote_filters/{id}.*` paths. If one succeeds and the other fails, this method returns failure but leaves a mixed-version pair on disk that later `forceUpdate = false` loads can reuse. Stage both files first, then promote them together only when the full pair is present.
Also applies to: 78-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/data/remote/FilterDownloadManager.kt`
around lines 40 - 56, Stage the .bloom and .trie downloads to temporary files
first and only rename/move them into their final names together once both
succeed: use downloadFile to write to temp paths (e.g., bloomFileTemp,
trieFileTemp), verify both temp files exist, then atomically move/rename them to
bloomFile and trieFile (using atomic Files.move/rename) so the pair is promoted
together before returning Result.success(DownloadedFilterPaths(...)); if either
download fails, delete any temp files and return Result.failure. Apply the same
staging-and-atomic-promotion logic for the other symmetric download block in
this file (the other downloadFile pair around the same code paths).
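The staging-and-promotion step the prompt describes can be sketched with `java.nio.file.Files.move`. `promotePair` is a hypothetical helper, not the app's API:

```kotlin
import java.io.File
import java.nio.file.Files
import java.nio.file.StandardCopyOption

// Hypothetical helper: download both halves of the pair into *.tmp siblings
// first, then promote them together so remote_filters/ never holds a
// mixed-version .bloom/.trie pair.
fun promotePair(bloomTmp: File, trieTmp: File, bloomFinal: File, trieFinal: File): Boolean {
    if (!bloomTmp.exists() || !trieTmp.exists()) {
        // One download failed: discard the partial stage, keep the old pair intact.
        bloomTmp.delete()
        trieTmp.delete()
        return false
    }
    // Each rename is atomic on the same filesystem; doing them only after both
    // downloads succeeded means readers see the old pair or the new pair, never one of each.
    Files.move(bloomTmp.toPath(), bloomFinal.toPath(), StandardCopyOption.ATOMIC_MOVE)
    Files.move(trieTmp.toPath(), trieFinal.toPath(), StandardCopyOption.ATOMIC_MOVE)
    return true
}
```

The temp files should live in the same directory as the finals so the rename stays a same-filesystem atomic move.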
| "Filter Statistics", | ||
| style = MaterialTheme.typography.titleMedium, | ||
| fontWeight = FontWeight.Bold | ||
| ) | ||
| } | ||
|
|
||
| if (isLoadingDomains) { | ||
| item { | ||
| Box( | ||
| item { | ||
| Card( | ||
| colors = CardDefaults.cardColors( | ||
| containerColor = MaterialTheme.colorScheme.surface | ||
| ), | ||
| shape = RoundedCornerShape(12.dp), | ||
| modifier = Modifier.fillMaxWidth() | ||
| ) { | ||
| Column( | ||
| modifier = Modifier | ||
| .fillMaxWidth() | ||
| .height(100.dp), | ||
| contentAlignment = Alignment.Center | ||
| .padding(16.dp), | ||
| verticalArrangement = Arrangement.spacedBy(8.dp) | ||
| ) { | ||
| CircularProgressIndicator( | ||
| color = MaterialTheme.colorScheme.primary, | ||
| modifier = Modifier.size(32.dp), | ||
| strokeWidth = 2.dp | ||
| Text( | ||
| text = "Blocked Requests", | ||
| style = MaterialTheme.typography.labelMedium, | ||
| color = TextSecondary | ||
| ) | ||
| } | ||
| } | ||
| } else if (domainPreview.isEmpty()) { | ||
| item { | ||
| Card( | ||
| colors = CardDefaults.cardColors( | ||
| containerColor = MaterialTheme.colorScheme.surface | ||
| ), | ||
| shape = RoundedCornerShape(12.dp), | ||
| modifier = Modifier.fillMaxWidth() | ||
| ) { | ||
| Text( | ||
| stringResource(R.string.filter_detail_no_domains), | ||
| style = MaterialTheme.typography.bodyMedium, | ||
| color = TextSecondary, | ||
| modifier = Modifier.padding(16.dp) | ||
| text = formatCount(blockedCount), | ||
| style = MaterialTheme.typography.headlineMedium, | ||
| fontWeight = FontWeight.Bold, | ||
| color = MaterialTheme.colorScheme.primary | ||
| ) | ||
| } | ||
| } | ||
| } else { | ||
| item { | ||
| Card( | ||
| colors = CardDefaults.cardColors( | ||
| containerColor = MaterialTheme.colorScheme.surface | ||
| ), | ||
| shape = RoundedCornerShape(12.dp), | ||
| } | ||
|
|
||
| // Test a Domain section | ||
| item { | ||
| Text( | ||
| "Test a Domain", |
Move the new copy and success color into resources/theme.
These literals will not localize, and the hardcoded green can drift from the current Material theme. Please back the new text with stringResource(...) and use a theme color/token for the allowed state.
Also applies to: 395-425
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailScreen.kt`
around lines 339 - 376, Replace hardcoded UI text and the hardcoded green color
in FilterDetailScreen composables with resource-backed and theme-backed values:
add string resources for "Blocked Requests", "Test a Domain" and any other new
literals (the Text(...) calls in FilterDetailScreen.kt) and replace the inline
string literals with stringResource(R.string.<name>), and replace the hardcoded
success/allowed color with your theme color/token (e.g.,
MaterialTheme.colorScheme.<successToken> or your app's success color token) so
the copy is localizable and the allowed-state color follows the Material theme;
update references in both the block around the "Blocked
Requests"/formatCount(blockedCount) Text and the other section noted (around the
"Test a Domain" UI and lines ~395-425).
```diff
 fun testDomain() {
     val domain = _testDomainQuery.value.trim()
     if (domain.isBlank()) return

     viewModelScope.launch {
-        _isLoadingDomains.value = true
-        val f = filterListDao.getById(filterId)
-        if (f != null) {
-            _domainPreview.value = filterRepo.getDomainPreview(f, 200)
-        }
-        _isLoadingDomains.value = false
+        _isTestingDomain.value = true
+        _testDomainResult.value = filterRepo.checkDomainInFilter(filterId, domain)
+        _isTestingDomain.value = false
     }
 }
```
Missing exception handling leaves loading state stuck.
If filterRepo.checkDomainInFilter(filterId, domain) throws an exception, _isTestingDomain.value = false is never executed, leaving the UI in a permanent loading state.
🛡️ Proposed fix using try-finally
```diff
 fun testDomain() {
     val domain = _testDomainQuery.value.trim()
     if (domain.isBlank()) return

     viewModelScope.launch {
         _isTestingDomain.value = true
-        _testDomainResult.value = filterRepo.checkDomainInFilter(filterId, domain)
-        _isTestingDomain.value = false
+        try {
+            _testDomainResult.value = filterRepo.checkDomainInFilter(filterId, domain)
+        } catch (e: Exception) {
+            _testDomainResult.value = null
+        } finally {
+            _isTestingDomain.value = false
+        }
     }
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```kotlin
fun testDomain() {
    val domain = _testDomainQuery.value.trim()
    if (domain.isBlank()) return

    viewModelScope.launch {
        _isTestingDomain.value = true
        try {
            _testDomainResult.value = filterRepo.checkDomainInFilter(filterId, domain)
        } catch (e: Exception) {
            _testDomainResult.value = null
        } finally {
            _isTestingDomain.value = false
        }
    }
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@app/src/main/java/app/pwhs/blockads/ui/filter/detail/FilterDetailViewModel.kt`
around lines 60 - 69, The testDomain function can leave _isTestingDomain true if
filterRepo.checkDomainInFilter throws; wrap the suspension call inside a
try-finally inside the viewModelScope.launch in testDomain so that
_isTestingDomain.value is set to false in the finally block; keep setting
_testDomainResult from filterRepo.checkDomainInFilter inside the try and
optionally catch exceptions to map to a failure result before the finally,
referencing testDomain, viewModelScope.launch, _isTestingDomain,
_testDomainResult, and filterRepo.checkDomainInFilter.
```kotlin
// Use backend compiler API to build optimized binary files
_isAddingCustomFilter.value = true
val result = customFilterManager.addCustomFilter(trimmedUrl)
_isAddingCustomFilter.value = false
```
Reset the loading flags in finally.
If any repository/manager call throws before the happy path completes, the dialog or update button stays permanently busy because these flags never flip back.
Suggested fix
```diff
-        _isAddingCustomFilter.value = true
-        val result = customFilterManager.addCustomFilter(trimmedUrl)
-        _isAddingCustomFilter.value = false
+        _isAddingCustomFilter.value = true
+        val result = try {
+            customFilterManager.addCustomFilter(trimmedUrl)
+        } finally {
+            _isAddingCustomFilter.value = false
+        }
```

Apply the same try/finally pattern to `_isUpdatingFilter` in `updateAllFilters()`.
Also applies to: 121-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/ui/filter/FilterSetupViewModel.kt` around
lines 88 - 91, The loading flags are not reset if an exception is thrown; wrap
the async operations that set _isAddingCustomFilter and _isUpdatingFilter in
try/finally blocks so the flags are always set back to false. Specifically, in
the code around _isAddingCustomFilter with
customFilterManager.addCustomFilter(trimmedUrl) ensure you set
_isAddingCustomFilter.value = true before the call and move
_isAddingCustomFilter.value = false into a finally block, and apply the same
pattern inside updateAllFilters() for _isUpdatingFilter so any thrown exception
still clears the busy state.
```kotlin
destDir.mkdirs()
val canonicalDest = destDir.canonicalPath
```
Tighten the zip-slip guard.
`startsWith(canonicalDest)` is bypassable when a sibling path shares the same prefix, e.g. `/files/out_evil/...` vs `/files/out`. Require exact equality or the destination path plus `File.separator` before extracting.
Suggested fix
```diff
     destDir.mkdirs()
     val canonicalDest = destDir.canonicalPath
+    val canonicalDestPrefix = canonicalDest + File.separator
@@
-    if (!entryFile.canonicalPath.startsWith(canonicalDest)) {
+    val canonicalEntry = entryFile.canonicalPath
+    if (
+        canonicalEntry != canonicalDest &&
+        !canonicalEntry.startsWith(canonicalDestPrefix)
+    ) {
         throw ZipExtractionException(
             "Zip-slip detected: ${entry.name}"
         )
     }
```

Also applies to: 71-75
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/main/java/app/pwhs/blockads/utils/ZipUtils.kt` around lines 41 - 42,
The zip-slip guard using startsWith(canonicalDest) is too permissive; update the
extraction check (around the code setting destDir.mkdirs() and val canonicalDest
= destDir.canonicalPath) to ensure the entry's canonical path is either exactly
equal to canonicalDest or starts with canonicalDest + File.separator (use
File.separatorChar) before writing files; apply the same stricter check in the
second extraction block referenced at lines 71-75 so entryFile.canonicalPath
cannot escape the destination directory.
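The bypass is easy to reproduce in isolation. The two check functions below are illustrative, with the strict variant matching the suggested fix:

```kotlin
import java.io.File

// Loose guard as currently written: any path sharing the prefix passes.
fun looseCheck(entryPath: String, destPath: String) = entryPath.startsWith(destPath)

// Strict guard: exact match, or the destination followed by a path separator.
fun strictCheck(entryPath: String, destPath: String) =
    entryPath == destPath || entryPath.startsWith(destPath + File.separator)

fun main() {
    val dest = "/data/data/app/files/out"
    val evil = "/data/data/app/files/out_evil/payload.trie"  // sibling dir, same prefix
    println(looseCheck(evil, dest))   // passes the loose guard: escape
    println(strictCheck(evil, dest))  // rejected by the strict guard
    println(strictCheck("$dest/filters/1.trie", dest))  // legitimate child still allowed
}
```

Both paths must come from `canonicalPath` first, so `..` segments and symlinks are resolved before the prefix comparison.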
Summary by CodeRabbit
New Features
Bug Fixes
Chores