feat: implement SSRF protection for web page extraction #97
guangtouwangba wants to merge 1 commit into master from
Conversation
This change introduces a `safe_fetch_url` utility function that prevents Server-Side Request Forgery (SSRF) vulnerabilities. It handles redirects manually and validates each URL against private IP ranges before fetching content, ensuring the application cannot be coerced into accessing internal network resources.

Security Improvements:
- Added `safe_fetch_url` in `utils.py` using `httpx`.
- Configured `httpx` with `verify=True` (strict SSL verification).
- Replaced `trafilatura.fetch_url` with `safe_fetch_url` in `WebPageExtractor`.
- Added a User-Agent header to avoid being blocked by WAFs.
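The private-range validation itself is not part of this diff, so here is a minimal sketch of what a helper like `validate_url` could look like, assuming it resolves the hostname and checks the result with the standard `ipaddress` module. Only the function name, the `(is_valid, error)` return shape, and the use of `socket.gethostbyname` appear in the PR; everything else here is illustrative.

```python
# Illustrative sketch only -- the actual validate_url in utils.py is not shown in this PR.
import ipaddress
import socket
from typing import Optional, Tuple
from urllib.parse import urlparse


def validate_url(url: str) -> Tuple[bool, Optional[str]]:
    """Return (is_valid, error), rejecting URLs whose host resolves to a private address."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False, f"Unsupported scheme: {parsed.scheme!r}"
    if not parsed.hostname:
        return False, "Missing hostname"
    try:
        # Blocking DNS lookup; see the review note below about moving this off the event loop.
        resolved = socket.gethostbyname(parsed.hostname)
    except socket.gaierror as exc:
        return False, f"DNS resolution failed: {exc}"
    addr = ipaddress.ip_address(resolved)
    if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
        return False, f"Blocked private/internal address: {resolved}"
    return True, None
```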
Summary of Changes

Hello @guangtouwangba, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the application's security posture by introducing robust Server-Side Request Forgery (SSRF) protection for web page extraction. The core change is a new utility function that validates URLs, including during redirects, to prevent unauthorized access to internal network resources, safeguarding the system from a critical class of vulnerabilities.
Code Review
This pull request introduces a safe_fetch_url function to provide robust SSRF protection when fetching content from URLs, which is a great security enhancement. The implementation correctly handles redirects manually and validates each URL in the chain against private IP ranges.
My review focuses on improving the robustness and performance of the new safe_fetch_url function. I've identified a potential Denial of Service vulnerability due to memory handling of large responses and a performance issue related to a blocking call in the async context. The suggested changes address these by introducing response streaming and running blocking code in a thread executor.
Overall, this is a solid contribution to the security of the application. Addressing the feedback will make it even more resilient.
try:
    # 2. Fetch headers only first (optional, but good for checking redirects efficiently)
    # For simplicity in handling 301/302/200, we just do a GET but without following redirects automatically.
    # Note: 'stream=True' could be used to inspect headers before downloading body.
    response = await client.get(current_url, headers=headers, timeout=settings.url_extraction_timeout)

    # 3. Handle Redirects Manually
    if response.is_redirect:
        location = response.headers.get("Location")
        if not location:
            break

        # Resolve relative URLs
        next_url = str(httpx.URL(current_url).join(location))

        if next_url in visited_urls:
            logger.warning(f"[SSRF] Circular redirect detected: {next_url}")
            return None

        visited_urls.add(next_url)
        current_url = next_url
        logger.info(f"[SSRF] Redirecting to: {current_url}")
        continue

    # 4. Return content if successful
    if response.status_code == 200:
        # Enforce content length limit
        if len(response.content) > settings.url_content_max_length:
            logger.warning(f"[SSRF] Content too large: {len(response.content)} > {settings.url_content_max_length}")
            return truncate_content(response.text, settings.url_content_max_length)

        return response.text

    # 5. Handle error status
    logger.warning(f"[SSRF] Failed to fetch {current_url}, status: {response.status_code}")
    return None

except Exception as e:
    logger.error(f"[SSRF] Error fetching {current_url}: {e}")
    return None
The current implementation reads the entire response body into memory before checking its size. This creates a Denial of Service (DoS) vulnerability, as a malicious server could provide a multi-gigabyte response, causing your application to run out of memory. To fix this, you should process the response as a stream. This involves using client.stream() and reading the response in chunks to enforce the size limit safely, while still being able to handle redirects correctly.
try:
    async with client.stream("GET", current_url, headers=headers, timeout=settings.url_extraction_timeout) as response:
        # 3. Handle Redirects Manually
        if response.is_redirect:
            location = response.headers.get("Location")
            if not location:
                break

            # Resolve relative URLs
            next_url = str(httpx.URL(current_url).join(location))

            if next_url in visited_urls:
                logger.warning(f"[SSRF] Circular redirect detected: {next_url}")
                return None

            visited_urls.add(next_url)
            current_url = next_url
            logger.info(f"[SSRF] Redirecting to: {current_url}")
            continue

        # 4. Return content if successful
        if response.status_code == 200:
            # Stream response to prevent memory exhaustion from large files.
            content_parts = []
            total_size = 0
            limit = settings.url_content_max_length
            content_too_large = False

            async for text_chunk in response.aiter_text():
                total_size += len(text_chunk)
                content_parts.append(text_chunk)
                if total_size > limit:
                    content_too_large = True
                    break

            await response.aclose()
            full_content_text = "".join(content_parts)

            if content_too_large:
                logger.warning(f"[SSRF] Content too large: > {limit}")
                return truncate_content(full_content_text, limit)

            return full_content_text

        # 5. Handle error status
        logger.warning(f"[SSRF] Failed to fetch {current_url}, status: {response.status_code}")
        return None
except Exception as e:
    logger.error(f"[SSRF] Error fetching {current_url}: {e}")
    return None

# between this check and the subsequent fetch. Fixing this fully requires
# advanced socket control which is complex. This check catches the majority
# of accidental or simple SSRF attempts (including redirect-based ones).
is_valid, error = validate_url(current_url)
The `validate_url` function performs a DNS lookup (`socket.gethostbyname`), which is a blocking I/O operation. In an async context, this can block the entire event loop, severely impacting application performance under load. As noted in your comments, this should be run in a separate thread to avoid blocking. You can use `asyncio.to_thread` for this (available in Python 3.9+).

Note: This will require adding `import asyncio` at the top of the file.
Suggested change:
- is_valid, error = validate_url(current_url)
+ is_valid, error = await asyncio.to_thread(validate_url, current_url)
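For illustration, a minimal, self-contained example of the `asyncio.to_thread` pattern this suggestion relies on; the resolver function and hostname below are hypothetical and not part of the PR.

```python
# Standalone demo: offload a blocking DNS lookup so it does not stall the event loop.
import asyncio
import socket


def blocking_resolve(host: str) -> str:
    # socket.gethostbyname blocks the calling thread until DNS resolution completes.
    return socket.gethostbyname(host)


async def main() -> None:
    # asyncio.to_thread (Python 3.9+) runs the blocking call in the default thread pool,
    # so other coroutines keep running while the lookup is in flight.
    ip = await asyncio.to_thread(blocking_resolve, "example.com")
    print(ip)


asyncio.run(main())
```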