
Conversation


@vikahaze vikahaze commented Dec 8, 2025

Summary

This PR adds solutions and explanations for 8 LeetCode problems following the standardized template format.

Problems Added

  1. 27. Remove Element (Easy) - Two pointers approach to remove elements in-place
  2. 156. Binary Tree Upside Down (Medium) - Recursive tree transformation
  3. 157. Read N Characters Given Read4 (Easy) - File reading API implementation
  4. 158. Read N Characters Given read4 II - Call Multiple Times (Hard) - Stateful file reading with buffer management
  5. 159. Longest Substring with At Most Two Distinct Characters (Medium) - Sliding window technique
  6. 161. One Edit Distance (Medium) - String comparison with edit distance check
  7. 163. Missing Ranges (Easy) - Array range gap detection
  8. 170. Two Sum III - Data structure design (Easy) - Hash map-based two sum data structure

Changes

  • Created 7 new solution files (problem 27 already had a solution)
  • Created/updated 8 explanation files following the new template format
  • All explanations include:
    • Strategy section with constraints, complexity analysis, and decomposition
    • Step-by-step walkthrough with trace tables
    • Proper markdown formatting without LaTeX

Normalization

  • Updated data/book-sets.json to include the new problems in the "All" set
  • Regenerated solution books
  • Normalized JSON files

Summary by CodeRabbit

  • Documentation
    • Added extensive problem guides with strategies, complexity notes, step-by-step walkthroughs and example traces.
  • New Features
    • Added multiple runnable solution implementations covering trees, strings, sliding-window, file-read patterns, missing ranges and a two-sum data structure.
  • Refactor
    • Converted a module-level function into a class method interface.
  • Chores
    • Updated set-generation and JSON output behavior, added premium-aware filtering and formatting changes.



@sourcery-ai sourcery-ai bot left a comment


Sorry @vikahaze, your pull request is larger than the review limit of 150000 diff characters


coderabbitai bot commented Dec 8, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds many new explanation markdowns and Python solution files, expands the book documentation with numerous LeetCode problem entries, converts one solution function into a class method, and applies several script/JSON serialization updates including premium-aware filtering in a normalization script.

Changes

Cohort / File(s) Summary
Explanations (new)
explanations/156/en.md, explanations/157/en.md, explanations/158/en.md, explanations/159/en.md, explanations/161/en.md, explanations/163/en.md, explanations/170/en.md
Added detailed explanation markdowns covering problem restatements, constraints, complexity, approaches, decomposition, and walkthroughs.
Explanation (updated)
explanations/27/en.md
Expanded and reformatted existing explanation (renamed header, added subsections, trace table); content reorganized but algorithmic logic unchanged.
Solutions (new)
solutions/156/01.py, solutions/157/01.py, solutions/158/01.py, solutions/159/01.py, solutions/161/01.py, solutions/163/01.py, solutions/170/01.py
Added Python solution implementations for multiple problems (upside-down binary tree, read4 reader basic and buffered variants, longest substring ≤2 distinct chars, one-edit-distance, missing ranges, TwoSum DS).
Solution (modified)
solutions/27/01.py
Converted module-level removeElement(nums, val) into Solution.removeElement(self, nums, val) (moved into a Solution class); logic preserved but public signature changed.
Book content
books/All.md
Massive additions of problem sections: metadata, explanations, example traces, and Python snippets; some duplication/reorganization present.
Data / JSON
data/book-sets.json, package.json
Updated data/book-sets.json (All set membership changed; minor formatting change). Added package.json with a JSON-formatting script (format:json).
Scripts (behavior/formatting changes)
scripts/normalize_book_sets.py, scripts/normalize_json.py, scripts/sort_book_sets_by_difficulty.py, scripts/update_problems_from_csv.py
normalize_book_sets: added premium-aware filtering/reporting and compact JSON write; normalize_json: removed custom formatter and now writes json.dump (delegates formatting to Prettier); sort_book_sets_by_difficulty/update_problems_from_csv: changed JSON serialization to omit indent and use ensure_ascii=False and added comments about Prettier.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Areas needing extra attention:
    • solutions/27/01.py — public API changed from module function to class method; update callers/tests.
    • scripts/normalize_book_sets.py — verify premium filtering, set-difference logic, logging, and JSON write behavior.
    • books/All.md — large content additions may include duplicates, broken links, or formatting issues.
    • scripts/normalize_json.py, scripts/update_problems_from_csv.py, scripts/sort_book_sets_by_difficulty.py — confirm JSON output expectations and Prettier integration.
    • New solution files — validate edge cases, type hints, and return semantics.

Possibly related PRs

"I'm a rabbit in the code-field bright,
Hopping through diffs by lantern-light,
Docs bloomed like clover, solutions in rows,
Scripts trimmed and tidy where the cool wind blows,
Nose twitch, commit done — I hop off in flight."

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 21.43%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check (✅ Passed): The PR title accurately summarizes the main change: adding solutions and explanations for eight specific LeetCode problems (27, 156, 157, 158, 159, 161, 163, 170).

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4ae676f and 7c61689.

📒 Files selected for processing (6)
  • data/book-sets.json (1 hunks)
  • package.json (1 hunks)
  • scripts/normalize_book_sets.py (4 hunks)
  • scripts/normalize_json.py (2 hunks)
  • scripts/sort_book_sets_by_difficulty.py (1 hunks)
  • scripts/update_problems_from_csv.py (1 hunks)



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (6)
solutions/163/01.py (1)

4-15: LGTM! Consider unpacking syntax for style consistency.

The algorithm correctly identifies missing ranges by checking gaps between numbers. The approach of appending upper + 1 to handle the final range is elegant.

For slightly more Pythonic code, consider using unpacking:

-        for num in nums + [upper + 1]:
+        for num in [*nums, upper + 1]:

This is a minor style preference mentioned by the linter and doesn't affect functionality.
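For context, a minimal runnable sketch of this gap-checking approach follows. The method name and the [start, end] return format are taken from the current LeetCode signature and are assumptions here, not a copy of the reviewed file.

```python
from typing import List

class Solution:
    def findMissingRanges(self, nums: List[int], lower: int, upper: int) -> List[List[int]]:
        result = []
        prev = lower - 1
        # The sentinel upper + 1 closes out any gap after the last element.
        for num in [*nums, upper + 1]:
            if num - prev >= 2:                   # a gap of at least one missing value
                result.append([prev + 1, num - 1])
            prev = num
        return result
```

For example, findMissingRanges([0, 1, 3, 50, 75], 0, 99) yields [[2, 2], [4, 49], [51, 74], [76, 99]].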

solutions/158/01.py (1)

7-24: LGTM! Correct buffering approach for multiple calls.

The solution properly handles multiple read calls by:

  • Maintaining an internal queue to buffer leftover characters
  • Consuming buffered characters before calling read4
  • Adding new characters from read4 to the buffer
  • Stopping at EOF or when n characters are read

The static analysis warning about read4 is a false positive—it's provided by the LeetCode platform.

For slightly better performance, consider using collections.deque instead of a list for the queue, since pop(0) on a list is O(n). However, given the queue holds at most 4 elements, the current implementation is acceptable:

+from collections import deque
+
 class Solution:
     def __init__(self):
-        self.queue = []
+        self.queue = deque()
     
     def read(self, buf: List[str], n: int) -> int:
         idx = 0
         
         while idx < n:
             if self.queue:
-                buf[idx] = self.queue.pop(0)
+                buf[idx] = self.queue.popleft()
                 idx += 1
             else:
                 buf4 = [''] * 4
                 count = read4(buf4)
                 if count == 0:
                     break
                 self.queue.extend(buf4[:count])
         
         return idx
books/All.md (4)

4303-4321: Remove stray __init__ stub; keep solution block minimal.

The def __init__ placeholder before the actual solution is commented-out and unrelated to the algorithm. It distracts readers and may confuse tooling that extracts code. Please remove it and keep only the solution function.

Apply this edit to streamline the snippet:

-```python
-def __init__(self, val=0, left=None, right=None):
-#         self.val = val
-#         self.left = left
-#         self.right = right
-
-class Solution:
+```python
+class Solution:
     def upsideDownBinaryTree(self, root: Optional[TreeNode]) -> Optional[TreeNode]:
         if not root or not root.left:
             return root
         
         new_root = self.upsideDownBinaryTree(root.left)
         
         root.left.left = root.right
         root.left.right = root
         root.left = None
         root.right = None
         
         return new_root

6892-6914: Add missing typing import or drop hints; also remove unrelated __init__ stub.

This snippet uses Optional[ListNode] but doesn’t import Optional. Either import Optional or remove the type hints to keep the example simple. Also, the def __init__ stub is unrelated.

Minimal, self-contained edit:

-```python
-def __init__(self, val=0, next=None):
-#         self.val = val
-#         self.next = next
-class Solution:
+```python
+from typing import Optional
+
+class Solution:
     def oddEvenList(self, head: Optional[ListNode]) -> Optional[ListNode]:
         if not head or not head.next:
             return head
         
         odd = head
         even = head.next
         even_head = even
         
         while even and even.next:
             odd.next = even.next
             odd = odd.next
             even.next = odd.next
             even = even.next
         
         odd.next = even_head
         return head

8075-8090: Clean up stubs and ensure typing context for TreeNode/Optional.

Both “Search in a BST” (lines 8075–8090) and “Insert into a BST” (lines 8180–8199) contain the same commented __init__ stub and use Optional without a local import. For consistency across the book:

  • Remove the def __init__ placeholder.
  • Add from typing import Optional at the top of each code block or remove the type hints.

Example for 700:

-```python
-def __init__(self, val=0, left=None, right=None):
-#         self.val = val
-#         self.left = left
-#         self.right = right
-class Solution:
+```python
+from typing import Optional
+
+class Solution:
     def searchBST(self, root: Optional[TreeNode], val: int) -> Optional[TreeNode]:
         if not root:
             return None
         
         if root.val == val:
             return root
         elif root.val > val:
             return self.searchBST(root.left, val)
         else:
             return self.searchBST(root.right, val)

Also applies to: 8180-8199


4240-4244: Resolve markdownlint issues: duplicate headings and bare URLs.

Multiple new sections repeat “## Explanation” immediately after an “### Explanation” (MD024) and use bare URLs (MD034). This hurts document navigation and link rendering.

  • Keep a single “### Strategy” and a single “### Steps” under one “### Explanation”.
  • Convert all plain URLs to [Problem link](...).
  • Consider adding a pre-commit hook to run markdownlint.

Example hook (optional):

+#!/bin/sh
+FILES=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.md$')
+[ -z "$FILES" ] && exit 0
+npx markdownlint-cli2 $FILES || exit 1

Also applies to: 4420-4424, 6826-6830, 7187-7191, 7757-7761, 8013-8017, 8544-8548, 8659-8663, 8990-8994, 9570-9574, 9854-9858, 9990-9994

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 66c5a52 and 9d945bf.

📒 Files selected for processing (16)
  • books/All.md (12 hunks)
  • explanations/156/en.md (1 hunks)
  • explanations/157/en.md (1 hunks)
  • explanations/158/en.md (1 hunks)
  • explanations/159/en.md (1 hunks)
  • explanations/161/en.md (1 hunks)
  • explanations/163/en.md (1 hunks)
  • explanations/170/en.md (1 hunks)
  • explanations/27/en.md (1 hunks)
  • solutions/156/01.py (1 hunks)
  • solutions/157/01.py (1 hunks)
  • solutions/158/01.py (1 hunks)
  • solutions/159/01.py (1 hunks)
  • solutions/161/01.py (1 hunks)
  • solutions/163/01.py (1 hunks)
  • solutions/170/01.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
solutions/158/01.py (1)
solutions/157/01.py (2)
  • Solution (6-20)
  • read (7-20)
solutions/170/01.py (1)
solutions/1/01.js (1)
  • complement (4-4)
🪛 LanguageTool
explanations/161/en.md

[style] ~14-~14: This phrasing could be wordy, so try replacing it with something more concise.
Context: ...We first check if the length difference is more than 1 (impossible to be one edit). Then we ...

(MORE_THAN_EXCEEDS)


[style] ~24-~24: This phrasing could be wordy, so try replacing it with something more concise.
Context: ...sition:** 1. Check if length difference is more than 1 - if so, return False 2. Ensure s is ...

(MORE_THAN_EXCEEDS)

🪛 markdownlint-cli2 (0.18.1)
books/All.md

4240-4240: Bare URL used

(MD034, no-bare-urls)


4244-4244: Multiple headings with the same content

(MD024, no-duplicate-heading)


4418-4418: Bare URL used

(MD034, no-bare-urls)


4422-4422: Multiple headings with the same content

(MD024, no-duplicate-heading)


6824-6824: Bare URL used

(MD034, no-bare-urls)


6828-6828: Multiple headings with the same content

(MD024, no-duplicate-heading)


7185-7185: Bare URL used

(MD034, no-bare-urls)


7189-7189: Multiple headings with the same content

(MD024, no-duplicate-heading)


7755-7755: Bare URL used

(MD034, no-bare-urls)


7759-7759: Multiple headings with the same content

(MD024, no-duplicate-heading)


8011-8011: Bare URL used

(MD034, no-bare-urls)


8015-8015: Multiple headings with the same content

(MD024, no-duplicate-heading)


8093-8093: Bare URL used

(MD034, no-bare-urls)


8202-8202: Bare URL used

(MD034, no-bare-urls)


8206-8206: Multiple headings with the same content

(MD024, no-duplicate-heading)


8542-8542: Bare URL used

(MD034, no-bare-urls)


8546-8546: Multiple headings with the same content

(MD024, no-duplicate-heading)


8612-8612: Reference links and images should use a label that is defined
Missing link or image reference definition: "0"

(MD052, reference-links-images)


8613-8613: Reference links and images should use a label that is defined
Missing link or image reference definition: "0"

(MD052, reference-links-images)


8657-8657: Bare URL used

(MD034, no-bare-urls)


8661-8661: Multiple headings with the same content

(MD024, no-duplicate-heading)


8988-8988: Bare URL used

(MD034, no-bare-urls)


8992-8992: Multiple headings with the same content

(MD024, no-duplicate-heading)


9568-9568: Bare URL used

(MD034, no-bare-urls)


9572-9572: Multiple headings with the same content

(MD024, no-duplicate-heading)


9852-9852: Bare URL used

(MD034, no-bare-urls)


9856-9856: Multiple headings with the same content

(MD024, no-duplicate-heading)


9988-9988: Bare URL used

(MD034, no-bare-urls)


9992-9992: Multiple headings with the same content

(MD024, no-duplicate-heading)

🪛 Ruff (0.14.7)
solutions/156/01.py

11-11: Undefined name TreeNode

(F821)


11-11: Undefined name TreeNode

(F821)

solutions/163/01.py

8-8: Consider [*nums, upper + 1] instead of concatenation

Replace with [*nums, upper + 1]

(RUF005)

solutions/158/01.py

19-19: Undefined name read4

(F821)

solutions/157/01.py

12-12: Undefined name read4

(F821)

🔇 Additional comments (18)
explanations/161/en.md (1)

1-62: LGTM! Clear and comprehensive explanation.

The documentation effectively covers the one-edit-distance problem with accurate complexity analysis, clear decomposition, and helpful trace walkthroughs. The examples demonstrate both true and false cases well.

explanations/27/en.md (1)

1-54: LGTM! Enhanced documentation structure.

The updates significantly improve the explanation with better organization, detailed complexity analysis, and a helpful trace table. The distinction between brute force and optimized approaches adds educational value.

solutions/157/01.py (1)

7-20: LGTM! Clean implementation of the read API.

The solution correctly handles all edge cases:

  • Stops when reaching n characters
  • Handles EOF (when read4 returns 0)
  • Properly limits copying using min(count, n - total)
  • Efficient slice assignment for copying

The static analysis warning about read4 being undefined is a false positive—it's provided by the LeetCode platform as indicated by the comment on lines 3-4.
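For reference, a minimal sketch of the chunked-read pattern described above. The read4 stub is a hypothetical stand-in for the platform-provided API, so the snippet runs on its own; it is not the reviewed file verbatim.

```python
from typing import List

# Hypothetical stand-in for the LeetCode-provided read4 API: copies up to 4
# characters of a fake file into buf4 and returns how many were copied.
_file = list("abcdefgh")
def read4(buf4: List[str]) -> int:
    count = 0
    while count < 4 and _file:
        buf4[count] = _file.pop(0)
        count += 1
    return count

class Solution:
    def read(self, buf: List[str], n: int) -> int:
        total = 0
        while total < n:
            buf4 = [''] * 4
            count = read4(buf4)             # grab the next chunk of up to 4 chars
            if count == 0:                  # EOF reached
                break
            count = min(count, n - total)   # never copy past the requested n
            buf[total:total + count] = buf4[:count]
            total += count
        return total
```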

solutions/159/01.py (1)

4-22: LGTM! Correct sliding window implementation.

The solution properly implements the sliding window technique:

  • Expands the window by adding characters and updating counts
  • Shrinks the window when exceeding two distinct characters
  • Correctly removes zero-count entries to maintain accurate distinct count
  • Tracks the maximum valid window size

The algorithm handles all edge cases correctly and achieves O(n) time complexity.
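A minimal sketch of this sliding-window technique, assuming the usual LeetCode method name; it illustrates the reviewed behavior rather than reproducing the file verbatim.

```python
from collections import defaultdict

class Solution:
    def lengthOfLongestSubstringTwoDistinct(self, s: str) -> int:
        counts = defaultdict(int)   # character -> frequency inside the window
        left = 0
        best = 0
        for right, ch in enumerate(s):
            counts[ch] += 1                       # expand the window to the right
            while len(counts) > 2:                # shrink until <= 2 distinct chars
                counts[s[left]] -= 1
                if counts[s[left]] == 0:
                    del counts[s[left]]           # drop zero-count entries
                left += 1
            best = max(best, right - left + 1)    # track the best valid window
        return best
```

For example, lengthOfLongestSubstringTwoDistinct("eceba") returns 3 for the window "ece".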

explanations/159/en.md (1)

1-58: LGTM! Thorough explanation with detailed trace.

The documentation effectively explains the sliding window approach with:

  • Accurate complexity analysis (O(n) time, O(1) space for at most 2 distinct chars)
  • Clear decomposition of the algorithm
  • Detailed trace table showing multiple shrink steps
  • Good comparison between brute force and optimized approaches

The trace walkthrough with sub-steps (3a, 3b, 4a) is particularly helpful for understanding the window shrinking behavior.

explanations/170/en.md (1)

1-63: LGTM! Clear explanation of the Two Sum data structure.

The documentation effectively covers the data structure design with:

  • Accurate complexity analysis (O(1) for add, O(n) for find)
  • Clear explanation of the hash map frequency approach
  • Proper handling of the edge case where complement equals the number
  • Detailed trace showing both successful and unsuccessful find operations
  • Good comparison between brute force and optimized approaches

The step-by-step walkthrough clearly demonstrates how the hash map enables efficient lookups and handles the special case of duplicate numbers.

solutions/156/01.py (2)

1-9: LGTM! Standard LeetCode pattern.

The commented TreeNode definition and undefined type warning from Ruff are expected in LeetCode solutions—the platform provides TreeNode at runtime.


10-22: LGTM! Correct recursive transformation.

The implementation correctly handles the upside-down transformation: base case guards against empty/leaf nodes, recursion finds the new root, and pointer reassignments during backtracking establish the transformed structure.

solutions/170/01.py (3)

1-5: LGTM! Clean initialization.

Using defaultdict(int) is appropriate for tracking element frequencies.


7-8: LGTM! Correct add implementation.

The method efficiently tracks element frequencies in O(1) time.


10-16: LGTM! Correct two-sum logic.

The implementation correctly handles both distinct-number pairs and same-number pairs (requiring count > 1). The logic is sound and matches the problem requirements.

explanations/163/en.md (1)

1-57: LGTM! Clear and well-structured explanation.

The documentation follows the template with comprehensive strategy breakdown, complexity analysis, and an effective trace table for understanding the algorithm.

solutions/161/01.py (3)

1-10: LGTM! Efficient guard and normalization.

The length-difference guard (lines 5-6) and swap normalization (lines 8-10) correctly handle impossible cases and simplify the subsequent logic.


12-17: LGTM! Correct mismatch handling.

The logic correctly distinguishes between replacement (same length) and insertion/deletion (different length) by comparing the appropriate substrings.


19-19: LGTM! Correct no-mismatch handling.

When no mismatch is found within s's range, the function correctly returns True only if t has exactly one additional character, ensuring exactly one edit distance.
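Taken together, the pieces reviewed above correspond to a solution shaped roughly like this sketch (method name per the usual LeetCode signature; an illustration, not the exact file):

```python
class Solution:
    def isOneEditDistance(self, s: str, t: str) -> bool:
        # Guard: lengths differing by more than 1 can never be one edit apart.
        if abs(len(s) - len(t)) > 1:
            return False
        # Normalize so that s is the shorter (or equal-length) string.
        if len(s) > len(t):
            s, t = t, s
        for i in range(len(s)):
            if s[i] != t[i]:
                if len(s) == len(t):
                    # Same length: the suffixes must match after one replacement.
                    return s[i + 1:] == t[i + 1:]
                # Different length: skip one char in t (one insertion/deletion).
                return s[i:] == t[i + 1:]
        # No mismatch found: exactly one edit only if t has one extra character.
        return len(t) == len(s) + 1
```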

explanations/158/en.md (1)

1-63: LGTM! Clear explanation of stateful buffering.

The documentation effectively explains the key difference from problem 157: maintaining a queue to preserve leftover characters between multiple calls. The trace table clearly demonstrates the buffering behavior.

explanations/157/en.md (1)

1-54: LGTM! Clear explanation of batched reading.

The documentation effectively explains the strategy of reading in chunks of 4 and handling partial reads. The trace table clearly shows how the buffer is filled across iterations.

explanations/156/en.md (1)

1-56: LGTM! Excellent explanation of recursive transformation.

The documentation clearly explains the upside-down transformation with effective use of the trace table to show both the recursive descent and backtracking pointer reassignments. Well aligned with the code implementation.

Comment on lines +4488 to +4501
```python
def __init__(self):
        self.num_counts = defaultdict(int)
    
    def add(self, number: int) -> None:
        self.num_counts[number] += 1
    
    def find(self, value: int) -> bool:
        for num in self.num_counts:
            complement = value - num
            if complement in self.num_counts:
                if complement != num or self.num_counts[num] > 1:
                    return True
        return False
```

⚠️ Potential issue | 🟠 Major

Missing import for defaultdict (code won’t run as-is).

defaultdict is used but never imported. Add the import to make the snippet self-contained.

Apply this fix:

-```python
-def __init__(self):
-        self.num_counts = defaultdict(int)
+```python
+from collections import defaultdict
+
+def __init__(self):
+        self.num_counts = defaultdict(int)
     
     def add(self, number: int) -> None:
         self.num_counts[number] += 1
     
     def find(self, value: int) -> bool:
         for num in self.num_counts:
             complement = value - num
             if complement in self.num_counts:
                 if complement != num or self.num_counts[num] > 1:
                     return True
         return False
🤖 Prompt for AI Agents
In books/All.md around lines 4488 to 4501 the snippet uses defaultdict but never
imports it; add "from collections import defaultdict" at the top of the snippet
(before __init__) so the class can instantiate defaultdict(int) and the code
runs as-is.

Comment on lines +8202 to +8217
https://leetcode.com/problems/leaf-similar-trees/

### Explanation

## Explanation

### Strategy (The "Why")

**1.1 Constraints & Complexity:**

* **Input Size:** Each tree can have up to 200 nodes.
* **Time Complexity:** O(n + m) - We traverse both trees once to collect leaves, where n and m are the number of nodes in each tree.
* **Space Complexity:** O(n + m) - We store leaf sequences for both trees, plus O(h) for recursion stack.
* **Edge Case:** If both trees are empty, they are leaf-similar (both have empty leaf sequences).

**1.2 High-level approach:**

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if markdownlint-cli2 is available, install if needed
if ! command -v npx &> /dev/null; then
  echo "npx not available, attempting to install Node.js packages..."
  npm install -g markdownlint-cli2 2>&1 | head -20
fi

# Run markdownlint on the file
npx markdownlint-cli2 books/All.md 2>&1 | head -100

Repository: romankurnovskii/leetcode-apps

Length of output: 9644


🏁 Script executed:

#!/bin/bash
# Also inspect the actual file content around lines 8202-8217 to confirm the issues
sed -n '8195,8225p' books/All.md | cat -n

Repository: romankurnovskii/leetcode-apps

Length of output: 1387


🏁 Script executed:

#!/bin/bash
# Search for the pattern of duplicate headings throughout the file to assess scope
rg '### Explanation\s+## Explanation' books/All.md --multiline -A 2 -B 2 | head -50

Repository: romankurnovskii/leetcode-apps

Length of output: 840


Replace bare URL with Markdown link and deduplicate headings.

The bare URL (line 8202) should be converted to a Markdown link, and the duplicate "### Explanation" / "## Explanation" headings (lines 8204-8206) should be consolidated to a single "### Explanation". This pattern—bare URLs followed by duplicated "Explanation" headings—appears throughout the file and should be standardized consistently.

Example fix for this section:

-## 872. Leaf-Similar Trees [Easy]
-https://leetcode.com/problems/leaf-similar-trees/
+## 872. Leaf-Similar Trees [Easy]
+[Problem link](https://leetcode.com/problems/leaf-similar-trees/)
-
-### Explanation
-
-## Explanation
+### Explanation

Markdownlint identifies 2502 formatting errors in All.md, including bare URLs (MD034), duplicate headings (MD024), and structural inconsistencies. A full lint pass should be run to address all formatting issues systematically.

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

8202-8202: Bare URL used

(MD034, no-bare-urls)


8206-8206: Multiple headings with the same content

(MD024, no-duplicate-heading)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
scripts/normalize_book_sets.py (2)

121-135: Accumulate premium problems instead of overwriting premium_set

Inside the loop, premium_set is reassigned for each object with a premium field, so only the last such object contributes. If book-sets.json ever contains multiple premium lists, earlier ones would be silently ignored. Prefer accumulating:

-    premium_set = set()
+    premium_set = set()
@@
-        elif "premium" in obj:
-            # Get premium problems list
-            premium_set = set(obj.get("premium", []))
+        elif "premium" in obj:
+            # Accumulate premium problems from any object
+            premium_set |= set(obj.get("premium", []))

This is backward‑compatible even if there is only one premium section today.


137-169: All-TODO premium filtering logic is good; consider updating the high-level docs

The updated All-TODO handling correctly:

  • strips out premium problems via removed_premium, and
  • separately removes problems that already have both solution and explanation.

That matches the intent of keeping a non‑premium, unsolved backlog. However, the module docstring at the top still only mentions removing problems that have both solution and explanation; it doesn’t mention premium-based removals. Consider updating that docstring (and optionally the CLI description) to mention premium exclusion so future readers aren’t surprised by this behavior.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d945bf and f7d88d7.

📒 Files selected for processing (2)
  • scripts/normalize_book_sets.py (3 hunks)
  • solutions/27/01.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
scripts/normalize_book_sets.py

159-159: f-string without any placeholders

Remove extraneous f prefix

(F541)

🔇 Additional comments (2)
solutions/27/01.py (1)

4-15: LGTM! Clean refactoring with correct two-pointer implementation.

The refactoring to a class-based interface follows LeetCode conventions perfectly. The two-pointer algorithm is correctly implemented:

  • The slow pointer tracks where to place the next non-val element
  • The fast pointer scans through the array
  • In-place modification with O(n) time and O(1) space complexity

The logic handles all edge cases correctly (empty array, all elements equal to val, no matching elements).
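For reference, a class-based two-pointer implementation along the lines described above might look like this (a sketch, not necessarily the exact file contents):

```python
from typing import List

class Solution:
    def removeElement(self, nums: List[int], val: int) -> int:
        slow = 0                          # next write position for kept elements
        for fast in range(len(nums)):     # fast pointer scans every element
            if nums[fast] != val:
                nums[slow] = nums[fast]   # keep the non-val element in place
                slow += 1
        return slow                       # length of the filtered prefix
```

Called as Solution().removeElement([3, 2, 2, 3], 3), this returns 2, with the first two slots of the list holding the kept values.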

scripts/normalize_book_sets.py (1)

187-197: Premium exclusion from “All” set looks correct

The list comprehension for problems_with_both now correctly:

  • restricts to problems that have both solution and explanation, and
  • excludes any problem numbers present in premium_set.

This matches the intended behavior of keeping premium problems out of the “All” set while preserving the existing constraints. No changes needed here.
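Purely for illustration, a comprehension with the described behavior could look like the following; problems_with_both and premium_set are names from the review, while the sample data is a hypothetical stand-in for what the script actually derives from the repository.

```python
# Illustrative stand-ins only; the real script builds these from the repo contents.
premium_set = {2, 5}
has_solution = {1, 2, 3, 4}
has_explanation = {1, 2, 4, 5}
all_problem_numbers = [1, 2, 3, 4, 5]

problems_with_both = [
    num
    for num in all_problem_numbers
    if num in has_solution            # the problem has a solution...
    and num in has_explanation        # ...and an explanation...
    and num not in premium_set        # ...and is not in the premium list
]
print(problems_with_both)  # [1, 4]
```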


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
scripts/normalize_book_sets.py (1)

281-301: Remove unnecessary f-string on the “[All] Updating problem list” log

Ruff correctly flags an f-string without placeholders at Line 287:

print(f"\n[All] Updating problem list:")

It should be a plain string:

-            print(f"\n[All] Updating problem list:")
+            print("\n[All] Updating problem list:")

This clears F541 and matches how other static messages are printed.

🧹 Nitpick comments (3)
scripts/normalize_book_sets.py (3)

20-99: Consider wiring compact_json_arrays into the JSON write path

The compact_json_arrays helper looks solid for compacting numeric arrays, but it is currently unused. As written, future runs of this script will fall back to plain json.dump, potentially losing your desired formatting.

If you want this normalization to be stable and automatic, consider updating the write block to use the helper:

-            try:
-                with open(book_sets_file, "w", encoding="utf-8") as f:
-                    json.dump(data, f, indent=2, ensure_ascii=False)
+            try:
+                with open(book_sets_file, "w", encoding="utf-8") as f:
+                    json_str = json.dumps(data, indent=2, ensure_ascii=False)
+                    f.write(compact_json_arrays(json_str))

This keeps book-sets.json consistently normalized whenever the script runs.


204-216: premium_set only reflects the last object with a "premium" key

In the discovery loop (around Line 214), premium_set is reassigned each time an object with a "premium" field is seen:

elif "premium" in obj:
    premium_set = set(obj.get("premium", []))

If book-sets.json ever contains multiple such objects, earlier premium lists will be silently overwritten. If the intent is to treat all of them as premium, consider accumulating instead:

-        elif "premium" in obj:
-            # Get premium problems list
-            premium_set = set(obj.get("premium", []))
+        elif "premium" in obj:
+            # Accumulate premium problems from all premium-aware sets
+            premium_set |= set(obj.get("premium", []))

If you are certain there will always be exactly one such object, this is fine as-is; otherwise, the union form is safer.


384-387: Writing JSON ignores compact_json_arrays formatting

The final write block still uses json.dump, which will produce the default “one number per line” arrays and ignore your new compaction logic. If the goal is to keep book-sets.json normalized with compact numeric arrays, please update this block as suggested in the earlier comment so the helper is actually applied.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f7d88d7 and 4ae676f.

📒 Files selected for processing (1)
  • scripts/normalize_book_sets.py (5 hunks)
🧰 Additional context used
🪛 Ruff (0.14.7)
scripts/normalize_book_sets.py

287-287: f-string without any placeholders

Remove extraneous f prefix

(F541)

@romankurnovskii romankurnovskii merged commit 1a07892 into main Dec 8, 2025
1 of 3 checks passed
@romankurnovskii romankurnovskii deleted the problems-27-156-157-158-159-161-163-170 branch December 8, 2025 10:55