Merged
5 changes: 4 additions & 1 deletion CHANGELOG.md
@@ -2,7 +2,7 @@

All notable changes to Library Manager will be documented in this file.

## [0.9.0-beta.130] - 2026-02-18
## [0.9.0-beta.131] - 2026-02-18

### Fixed

@@ -13,6 +13,9 @@ All notable changes to Library Manager will be documented in this file.
for recovery instead of marking books as "all processing layers exhausted." Previously, books
that were perfectly identifiable could be permanently marked as failed if providers were
rate-limited during processing.
- **Issue #160: Guard sentinel in manual process endpoint** - The `-1` rate-limit sentinel from
`process_queue` was leaking into the `/api/process` endpoint response, showing `processed: -1`
in the UI. Now guarded with `max(0, l2_processed)`. (Caught by vibe-check review.)

---

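For context on the entry above, a minimal sketch of the Issue #160 fix, assuming a Flask-style JSON handler; the handler shape and response keys are illustrative, and only `process_queue`, `l2_processed`, and the `max(0, ...)` guard come from this PR:

```python
# Sketch only: how the -1 rate-limit sentinel could leak into the
# /api/process response, and the guard this release adds.
def api_process(config, limit):
    l2_processed, l2_fixed = process_queue(config, limit)  # (-1, 0) when rate-limited
    return {
        "processed": max(0, l2_processed),  # without the guard, the UI showed "processed: -1"
        "fixed": l2_fixed,
    }
```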
2 changes: 1 addition & 1 deletion README.md
@@ -4,7 +4,7 @@

**Smart Audiobook Library Organizer with Multi-Source Metadata & AI Verification**

[![Version](https://img.shields.io/badge/version-0.9.0--beta.130-blue.svg)](CHANGELOG.md)
[![Version](https://img.shields.io/badge/version-0.9.0--beta.131-blue.svg)](CHANGELOG.md)
[![Docker](https://img.shields.io/badge/docker-ghcr.io-blue.svg)](https://ghcr.io/deucebucket/library-manager)
[![License](https://img.shields.io/badge/license-AGPL--3.0-blue.svg)](LICENSE)

5 changes: 3 additions & 2 deletions app.py
@@ -11,7 +11,7 @@
- Multi-provider AI (Gemini, OpenRouter, Ollama)
"""

APP_VERSION = "0.9.0-beta.130"
APP_VERSION = "0.9.0-beta.131"
GITHUB_REPO = "deucebucket/library-manager" # Your GitHub repo

# Versioning Guide:
@@ -730,7 +730,7 @@
try:
with open(ERROR_REPORTS_PATH, 'r') as f:
reports = json.load(f)
except:
[GitHub Actions / lint] ruff: app.py:733:13: E722 Do not use bare `except`
reports = []

# Add new report (keep last 100 reports to avoid file bloat)
@@ -754,7 +754,7 @@
try:
with open(ERROR_REPORTS_PATH, 'r') as f:
return json.load(f)
except:
[GitHub Actions / lint] ruff: app.py:757:9: E722 Do not use bare `except`
return []
return []

@@ -1709,7 +1709,7 @@
continue
result = call_gemini(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with gemini")

Check failure on line 1712 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:1712:33: F541 f-string without any placeholders
return result

elif provider == 'openrouter':
@@ -1718,13 +1718,13 @@
continue
result = call_openrouter(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with openrouter")

Check failure on line 1721 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:1721:33: F541 f-string without any placeholders
return result

elif provider == 'ollama':
result = call_ollama(prompt, merged_config)
if result:
logger.info(f"[PROVIDER CHAIN] Success with ollama")

Check failure on line 1727 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:1727:33: F541 f-string without any placeholders
return result

else:
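The two hunks above sit in the middle of a provider fallback chain. A condensed sketch of that pattern, assuming each `call_*` helper takes `(prompt, merged_config)` and returns a result or `None` as the visible code suggests; the dispatch-table shape is an illustration, not the file's actual structure:

```python
import logging

logger = logging.getLogger(__name__)

# Sketch: try each configured provider in order, return the first non-empty result.
PROVIDER_CALLS = {
    'gemini': call_gemini,          # assumed signature, per the diff:
    'openrouter': call_openrouter,  # (prompt, merged_config) -> result | None
    'ollama': call_ollama,
}

def call_provider_chain(prompt, merged_config, order=('gemini', 'openrouter', 'ollama')):
    for name in order:
        result = PROVIDER_CALLS[name](prompt, merged_config)
        if result:
            # Lazy %-formatting also avoids the F541 findings flagged above.
            logger.info("[PROVIDER CHAIN] Success with %s", name)
            return result
    return None  # every provider failed; caller handles the fallback
```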
@@ -1826,7 +1826,7 @@
return result
elif result and result.get('transcript'):
# Got transcript but no match - still useful, return for potential AI fallback
logger.info(f"[AUDIO CHAIN] BookDB returned transcript only")

Check failure on line 1829 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:1829:37: F541 f-string without any placeholders
return result
elif result is None and attempt < max_retries - 1:
# Connection might be down, wait and retry
@@ -2158,11 +2158,11 @@
device = "cuda"
# int8 works on all CUDA devices including GTX 1080 (compute 6.1)
# float16 only works on newer GPUs (compute 7.0+)
logger.info(f"[WHISPER] Using CUDA GPU acceleration (10x faster)")

Check failure on line 2161 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:2161:29: F541 f-string without any placeholders
else:
logger.info(f"[WHISPER] Using CPU (no CUDA GPU detected)")

Check failure on line 2163 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:2163:29: F541 f-string without any placeholders
except ImportError:
logger.info(f"[WHISPER] Using CPU (ctranslate2 not available)")

Check failure on line 2165 in app.py

View workflow job for this annotation

GitHub Actions / lint

ruff (F541)

app.py:2165:25: F541 f-string without any placeholders

_whisper_model = WhisperModel(model_name, device=device, compute_type=compute_type)
_whisper_model_name = model_name
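A self-contained sketch of the device selection in the hunk above, assuming the faster-whisper package; the `ctranslate2.get_cuda_device_count()` probe is an assumption about how CUDA is detected, implied by the `except ImportError` branch:

```python
import logging

from faster_whisper import WhisperModel

logger = logging.getLogger(__name__)

def load_whisper(model_name: str) -> WhisperModel:
    # int8 works on all CUDA devices including GTX 1080 (compute 6.1);
    # float16 only works on newer GPUs (compute 7.0+), so int8 is the safe default.
    device, compute_type = "cpu", "int8"
    try:
        import ctranslate2
        if ctranslate2.get_cuda_device_count() > 0:
            device = "cuda"
            logger.info("[WHISPER] Using CUDA GPU acceleration (10x faster)")
        else:
            logger.info("[WHISPER] Using CPU (no CUDA GPU detected)")
    except ImportError:
        logger.info("[WHISPER] Using CPU (ctranslate2 not available)")
    return WhisperModel(model_name, device=device, compute_type=compute_type)
```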
@@ -2369,7 +2369,7 @@
if sample_path and os.path.exists(sample_path):
try:
os.unlink(sample_path)
except:
[GitHub Actions / lint] ruff: app.py:2372:13: E722 Do not use bare `except`
pass

return result
@@ -7779,7 +7779,8 @@
# Layer 2: AI verification for items that passed through Layer 1
if config.get('enable_ai_verification', True):
l2_processed, l2_fixed = process_queue(config, limit)
total_processed += l2_processed
# Issue #160: process_queue returns -1 when rate-limited
total_processed += max(0, l2_processed)
total_fixed += l2_fixed

# Layer 3: Audio analysis (if enabled)
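Every lint annotation in this diff is one of two ruff findings: E722 (bare `except`) and F541 (f-string without placeholders). A sketch of the idiomatic fixes; the exception types caught here are assumptions about what these blocks can actually raise, not a confirmed list:

```python
import contextlib
import json
import logging
import os

logger = logging.getLogger(__name__)
ERROR_REPORTS_PATH = "error_reports.json"  # placeholder; the real path is defined in app.py

# E722 fix: name the failures that reading/parsing the report file can raise.
try:
    with open(ERROR_REPORTS_PATH, 'r') as f:
        reports = json.load(f)
except (OSError, json.JSONDecodeError):
    reports = []

# E722 fix for best-effort cleanup: suppress only the expected OS error.
with contextlib.suppress(OSError):
    os.unlink("sample.wav")  # placeholder for sample_path

# F541 fix: drop the f-prefix when a message has no placeholders,
# or parameterize it with lazy %-formatting.
logger.info("[AUDIO CHAIN] BookDB returned transcript only")
logger.info("[WHISPER] Using CPU (ctranslate2 not available)")
```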
3 changes: 2 additions & 1 deletion library_manager/pipeline/layer_ai_queue.py
@@ -88,7 +88,8 @@ def process_queue(
verification_layer: Which layer's items to process (2=AI, 4=folder fallback)

Returns:
Tuple of (processed_count, fixed_count)
Tuple of (processed_count, fixed_count). Returns (-1, 0) when rate-limited
(distinct from (0, 0) which means nothing to process).

NOTE: This function uses a 3-phase approach to avoid holding DB locks during
external AI API calls (which can take 5-30+ seconds):
Expand Down