Merged
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,20 @@

All notable changes to Library Manager will be documented in this file.

## [0.9.0-beta.130] - 2026-02-18

### Fixed

- **Issue #160: Rate-limited batches no longer trigger false exhaustion** - Layer 4 processing
now distinguishes between "genuinely unidentifiable books" and "AI providers temporarily
unavailable." Rate-limited batches return a distinct signal (`-1`) and are not counted toward
the 3-strike exhaustion rule. When circuit breakers are open on AI providers, the worker waits
for recovery instead of marking books as "all processing layers exhausted." Previously, books
that were perfectly identifiable could be permanently marked as failed if providers were
rate-limited during processing.
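The contract this fix introduces can be sketched as follows. This is an illustrative stand-in, not the real implementation: the return shape `(processed, fixed)` and the `-1` sentinel mirror `process_queue` as shown in the diff, but the simplified bodies, the `RATE_LIMITED` constant, and the strike-counting helper are assumptions for demonstration.

```python
# Illustrative sketch of the -1 sentinel contract (not the real implementation).

RATE_LIMITED = -1  # hypothetical named constant for the rate-limited sentinel

def process_queue_stub(rate_limit_hit, queue_empty):
    """Mimics process_queue's return shape: (processed, fixed)."""
    if rate_limit_hit:
        return RATE_LIMITED, 0   # temporary condition, distinct from "empty"
    if queue_empty:
        return 0, 0              # genuinely nothing to process
    return 5, 3                  # e.g. 5 books processed, 3 fixed

def worker_step(processed, strikes):
    """Only genuine empty batches advance the 3-strike exhaustion counter."""
    if processed == RATE_LIMITED:
        return strikes           # wait for cooldown; no strike
    if processed == 0:
        return strikes + 1       # real empty batch: one strike toward exhaustion
    return 0                     # progress resets the counter
```

The key point is that a rate-limited batch leaves the strike counter untouched, so transient provider throttling can never accumulate into a false "all processing layers exhausted" verdict.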

---

## [0.9.0-beta.129] - 2026-02-18

### Changed
7 changes: 6 additions & 1 deletion README.md
@@ -4,7 +4,7 @@

**Smart Audiobook Library Organizer with Multi-Source Metadata & AI Verification**

[![Version](https://img.shields.io/badge/version-0.9.0--beta.129-blue.svg)](CHANGELOG.md)
[![Version](https://img.shields.io/badge/version-0.9.0--beta.130-blue.svg)](CHANGELOG.md)
[![Docker](https://img.shields.io/badge/docker-ghcr.io-blue.svg)](https://ghcr.io/deucebucket/library-manager)
[![License](https://img.shields.io/badge/license-AGPL--3.0-blue.svg)](LICENSE)

@@ -16,6 +16,11 @@

## Recent Changes (stable)

> **beta.130** - **Fix: Rate-Limited Batches No Longer Trigger False Exhaustion** (Issue #160)
> - **Rate-limited batches skipped** - When AI providers are rate-limited, batches are no longer counted toward the 3-strike "all processing layers exhausted" rule
> - **Circuit breaker awareness** - Layer 4 now waits for providers to recover instead of permanently marking identifiable books as failed
> - **Distinct signal for rate limiting** - `process_queue` returns `-1` (rate-limited) vs `0` (genuinely empty) so the worker can react correctly
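The circuit-breaker awareness described above can be sketched like this. The function name `is_circuit_open` and the provider names (`gemini`, `bookdb`) come from the diff; the in-memory state store, `trip_circuit` helper, and cooldown value here are assumptions for illustration only.

```python
import time

# Simplified breaker state: provider -> timestamp until which it is "open".
# The real implementation's storage and failure thresholds are not shown in this PR.
_breaker_open_until = {}

def trip_circuit(provider, cooldown_seconds=60):
    """Open the breaker for a provider, e.g. after repeated failures."""
    _breaker_open_until[provider] = time.time() + cooldown_seconds

def is_circuit_open(provider):
    """True while the provider is still cooling down."""
    return time.time() < _breaker_open_until.get(provider, 0)

def should_count_empty_batch(ai_provider):
    """Mirrors the worker's decision: an empty batch only counts toward
    exhaustion when no relevant provider is circuit-broken."""
    providers = [ai_provider] + (['bookdb'] if ai_provider != 'bookdb' else [])
    return not any(is_circuit_open(p) for p in providers)
```

With this shape, an empty result while a breaker is open is treated as "providers unavailable" rather than "book unidentifiable", which is the behavioral change the worker.py hunk below implements.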

> **beta.129** - **UI: Feedback Widget Moved to Nav Bar** (Issue #159)
> - **Bug icon in nav bar** - Feedback/bug report button moved from floating bottom-right circle to a consistent bug icon in the top navigation bar
> - **No more overlapping buttons** - Eliminates confusing dual floating buttons on the dashboard page
2 changes: 1 addition & 1 deletion app.py
@@ -11,7 +11,7 @@
- Multi-provider AI (Gemini, OpenRouter, Ollama)
"""

APP_VERSION = "0.9.0-beta.129"
APP_VERSION = "0.9.0-beta.130"
GITHUB_REPO = "deucebucket/library-manager" # Your GitHub repo

# Versioning Guide:
@@ -730,7 +730,7 @@
try:
with open(ERROR_REPORTS_PATH, 'r') as f:
reports = json.load(f)
except:

⚠ Check failure (GitHub Actions / lint): app.py:733:13: E722 Do not use bare `except`
reports = []

# Add new report (keep last 100 reports to avoid file bloat)
@@ -754,7 +754,7 @@
try:
with open(ERROR_REPORTS_PATH, 'r') as f:
return json.load(f)
except:

⚠ Check failure (GitHub Actions / lint): app.py:757:9: E722 Do not use bare `except`
return []
return []

@@ -1709,7 +1709,7 @@
continue
result = call_gemini(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with gemini")

⚠ Check failure (GitHub Actions / lint): app.py:1712:33: F541 f-string without any placeholders
return result

elif provider == 'openrouter':
@@ -1718,13 +1718,13 @@
continue
result = call_openrouter(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with openrouter")

⚠ Check failure (GitHub Actions / lint): app.py:1721:33: F541 f-string without any placeholders
return result

elif provider == 'ollama':
result = call_ollama(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with ollama")

⚠ Check failure (GitHub Actions / lint): app.py:1727:33: F541 f-string without any placeholders
return result

else:
@@ -1826,7 +1826,7 @@
return result
elif result and result.get('transcript'):
# Got transcript but no match - still useful, return for potential AI fallback
logger.info(f"[AUDIO CHAIN] BookDB returned transcript only")

⚠ Check failure (GitHub Actions / lint): app.py:1829:37: F541 f-string without any placeholders
return result
elif result is None and attempt < max_retries - 1:
# Connection might be down, wait and retry
@@ -2158,11 +2158,11 @@
device = "cuda"
# int8 works on all CUDA devices including GTX 1080 (compute 6.1)
# float16 only works on newer GPUs (compute 7.0+)
logger.info(f"[WHISPER] Using CUDA GPU acceleration (10x faster)")

⚠ Check failure (GitHub Actions / lint): app.py:2161:29: F541 f-string without any placeholders
else:
logger.info(f"[WHISPER] Using CPU (no CUDA GPU detected)")

⚠ Check failure (GitHub Actions / lint): app.py:2163:29: F541 f-string without any placeholders
except ImportError:
logger.info(f"[WHISPER] Using CPU (ctranslate2 not available)")

⚠ Check failure (GitHub Actions / lint): app.py:2165:25: F541 f-string without any placeholders

_whisper_model = WhisperModel(model_name, device=device, compute_type=compute_type)
_whisper_model_name = model_name
@@ -2369,7 +2369,7 @@
if sample_path and os.path.exists(sample_path):
try:
os.unlink(sample_path)
except:

⚠ Check failure (GitHub Actions / lint): app.py:2372:13: E722 Do not use bare `except`
pass

return result
2 changes: 1 addition & 1 deletion library_manager/pipeline/layer_ai_queue.py
@@ -100,7 +100,7 @@ def process_queue(
allowed, calls_made, max_calls = check_rate_limit(config)
if not allowed:
logger.warning(f"Rate limit reached: {calls_made}/{max_calls} calls. Waiting...")
return 0, 0
return -1, 0 # Signal rate-limited (distinct from 0,0 = nothing to process)

# Check if AI verification is enabled (before opening connection)
if not config.get('enable_ai_verification', True):
26 changes: 26 additions & 0 deletions library_manager/worker.py
@@ -391,7 +391,33 @@ def process_all_queue(
# At this point, we're trusting folder names as a last resort
processed, fixed = process_queue(config, verification_layer=4)

# Issue #160: processed == -1 means rate-limited, NOT "nothing to process"
# Don't count rate-limited batches toward the 3-strike exhaustion rule
if processed == -1:
logger.info("Batch skipped due to rate limiting - not counting toward exhaustion")
_processing_status["current"] = "Rate limited, waiting for cooldown..."
_processing_status["last_activity"] = "Waiting for rate limit cooldown"
_processing_status["last_activity_time"] = time.time()
time.sleep(30)
continue

if processed == 0:
# Check if AI providers are circuit-broken before counting as empty
# If providers are unavailable, this isn't a real "empty" result
ai_provider = config.get('ai_provider', 'gemini')
providers_to_check = [ai_provider]
if ai_provider != 'bookdb':
providers_to_check.append('bookdb')
any_circuit_open = any(is_circuit_open(p) for p in providers_to_check)

if any_circuit_open:
logger.info(f"AI providers circuit-broken ({', '.join(p for p in providers_to_check if is_circuit_open(p))}) - waiting for recovery, not counting toward exhaustion")
_processing_status["current"] = "AI provider cooling down, waiting..."
_processing_status["last_activity"] = "Waiting for circuit breaker recovery"
_processing_status["last_activity_time"] = time.time()
time.sleep(30)
continue

conn = get_db()
c = conn.cursor()
c.execute('SELECT COUNT(*) as count FROM queue')
Expand Down