feat: implement SSRF protection for web page extraction #97

Open
guangtouwangba wants to merge 1 commit into master from sentinel-ssrf-fix-514789103092286147

Conversation

@guangtouwangba
Owner

This change introduces a `safe_fetch_url` utility function that prevents Server-Side Request Forgery (SSRF) vulnerabilities.
It manually handles redirects and validates each URL against private IP ranges before fetching content (a sketch of this check follows the list below).
This ensures that the application cannot be coerced into accessing internal network resources.

Security Improvements:
- Added `safe_fetch_url` in `utils.py` using `httpx`.
- Configured `httpx` with `verify=True` (strict SSL verification).
- Replaced `trafilatura.fetch_url` with `safe_fetch_url` in `WebPageExtractor`.
- Added User-Agent header to avoid being blocked by WAFs.
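
For reviewers, here is a minimal sketch of what the private-range check described above could look like. The `validate_url` name and `(is_valid, error)` return shape are taken from the diff; the body below is an assumption, not the actual implementation.

    import ipaddress
    import socket
    from typing import Optional, Tuple
    from urllib.parse import urlparse

    def validate_url(url: str) -> Tuple[bool, Optional[str]]:
        """Reject URLs whose host resolves to a private/reserved address."""
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https"):
            return False, f"Unsupported scheme: {parsed.scheme!r}"
        if not parsed.hostname:
            return False, "Missing hostname"
        try:
            # Blocking DNS lookup; see the review note further down about
            # moving this off the event loop in async code.
            resolved = socket.gethostbyname(parsed.hostname)
        except socket.gaierror as exc:
            return False, f"DNS resolution failed: {exc}"
        ip = ipaddress.ip_address(resolved)
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False, f"Blocked private/reserved address: {ip}"
        return True, None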
@gemini-code-assist

Summary of Changes

Hello @guangtouwangba, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly strengthens the application's security posture by adding Server-Side Request Forgery (SSRF) protection to web page extraction. The core change is a new utility function that validates every URL, including each redirect hop, against private IP ranges before fetching, so the application cannot be coerced into reaching internal network resources.

Highlights

  • SSRF Protection Implementation: A new safe_fetch_url utility function has been introduced to prevent Server-Side Request Forgery (SSRF) vulnerabilities during web page extraction.
  • Secure URL Fetching Logic: The safe_fetch_url function manually handles redirects, performing URL validation against private IP ranges at each redirection step to ensure no internal resources are accessed (see the loop sketch after this list).
  • Integration with WebPageExtractor: The WebPageExtractor now utilizes the safe_fetch_url function, replacing the previous trafilatura.fetch_url call, to ensure all web content fetching is secure.
  • Enhanced HTTP Client Configuration: The implementation uses httpx with strict SSL verification (verify=True) and includes a User-Agent header to improve reliability and security of requests.
  • Content Length Enforcement: Fetched content is now subject to a configurable maximum length, with truncation applied if the content exceeds this limit.
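
To orient readers before the diff excerpt below, here is a minimal, self-contained sketch of the enclosing fetch loop these highlights describe. Settings, logging, and the content-length handling are elided, and the exact structure is inferred from the excerpt rather than copied from the PR.

    from typing import Optional

    import httpx

    async def safe_fetch_url(url: str, max_redirects: int = 5) -> Optional[str]:
        headers = {"User-Agent": "Mozilla/5.0 (compatible; SafeFetcher/1.0)"}  # assumed value
        current_url = url
        visited_urls = {url}
        # verify=True enforces strict SSL; follow_redirects=False so every
        # hop can be validated before it is fetched.
        async with httpx.AsyncClient(verify=True, follow_redirects=False) as client:
            for _ in range(max_redirects):
                is_valid, error = validate_url(current_url)  # private-range check per hop
                if not is_valid:
                    return None
                response = await client.get(current_url, headers=headers)
                if response.is_redirect:
                    location = response.headers.get("Location")
                    if not location:
                        return None
                    next_url = str(httpx.URL(current_url).join(location))
                    if next_url in visited_urls:
                        return None  # circular redirect
                    visited_urls.add(next_url)
                    current_url = next_url
                    continue
                return response.text if response.status_code == 200 else None
        return None  # redirect limit exceeded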

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a safe_fetch_url function to provide robust SSRF protection when fetching content from URLs, which is a great security enhancement. The implementation correctly handles redirects manually and validates each URL in the chain against private IP ranges.

My review focuses on improving the robustness and performance of the new safe_fetch_url function. I've identified a potential Denial of Service vulnerability due to memory handling of large responses and a performance issue related to a blocking call in the async context. The suggested changes address these by introducing response streaming and running blocking code in a thread executor.

Overall, this is a solid contribution to the security of the application. Addressing the feedback will make it even more resilient.

Comment on lines +167 to +206
            try:
                # 2. Fetch headers only first (optional, but good for checking redirects efficiently)
                # For simplicity in handling 301/302/200, we just do a GET but without following redirects automatically.
                # Note: 'stream=True' could be used to inspect headers before downloading body.
                response = await client.get(current_url, headers=headers, timeout=settings.url_extraction_timeout)

                # 3. Handle Redirects Manually
                if response.is_redirect:
                    location = response.headers.get("Location")
                    if not location:
                        break

                    # Resolve relative URLs
                    next_url = str(httpx.URL(current_url).join(location))

                    if next_url in visited_urls:
                        logger.warning(f"[SSRF] Circular redirect detected: {next_url}")
                        return None

                    visited_urls.add(next_url)
                    current_url = next_url
                    logger.info(f"[SSRF] Redirecting to: {current_url}")
                    continue

                # 4. Return content if successful
                if response.status_code == 200:
                    # Enforce content length limit
                    if len(response.content) > settings.url_content_max_length:
                        logger.warning(f"[SSRF] Content too large: {len(response.content)} > {settings.url_content_max_length}")
                        return truncate_content(response.text, settings.url_content_max_length)

                    return response.text

                # 5. Handle error status
                logger.warning(f"[SSRF] Failed to fetch {current_url}, status: {response.status_code}")
                return None

            except Exception as e:
                logger.error(f"[SSRF] Error fetching {current_url}: {e}")
                return None


high

The current implementation reads the entire response body into memory before checking its size. This creates a Denial of Service (DoS) vulnerability, as a malicious server could provide a multi-gigabyte response, causing your application to run out of memory. To fix this, you should process the response as a stream. This involves using client.stream() and reading the response in chunks to enforce the size limit safely, while still being able to handle redirects correctly.

            try:
                async with client.stream("GET", current_url, headers=headers, timeout=settings.url_extraction_timeout) as response:
                    # 3. Handle Redirects Manually
                    if response.is_redirect:
                        location = response.headers.get("Location")
                        if not location:
                            break

                        # Resolve relative URLs
                        next_url = str(httpx.URL(current_url).join(location))

                        if next_url in visited_urls:
                            logger.warning(f"[SSRF] Circular redirect detected: {next_url}")
                            return None

                        visited_urls.add(next_url)
                        current_url = next_url
                        logger.info(f"[SSRF] Redirecting to: {current_url}")
                        continue

                    # 4. Return content if successful
                    if response.status_code == 200:
                        # Stream response to prevent memory exhaustion from large files.
                        content_parts = []
                        total_size = 0
                        limit = settings.url_content_max_length
                        content_too_large = False

                        async for text_chunk in response.aiter_text():
                            total_size += len(text_chunk)
                            content_parts.append(text_chunk)
                            if total_size > limit:
                                content_too_large = True
                                break
                        
                        await response.aclose()
                        full_content_text = "".join(content_parts)

                        if content_too_large:
                            logger.warning(f"[SSRF] Content too large: > {limit}")
                            return truncate_content(full_content_text, limit)

                        return full_content_text

                    # 5. Handle error status
                    logger.warning(f"[SSRF] Failed to fetch {current_url}, status: {response.status_code}")
                    return None
            except Exception as e:
                logger.error(f"[SSRF] Error fetching {current_url}: {e}")
                return None
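
For context, truncate_content is referenced in both versions above but its definition is not part of this excerpt; a minimal assumed implementation might look like the following (the suffix marker is purely illustrative):

    def truncate_content(content: str, max_length: int) -> str:
        """Clip content to max_length characters (illustrative helper)."""
        if len(content) <= max_length:
            return content
        return content[:max_length] + "... [truncated]"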

# between this check and the subsequent fetch. Fixing this fully requires
# advanced socket control which is complex. This check catches the majority
# of accidental or simple SSRF attempts (including redirect-based ones).
is_valid, error = validate_url(current_url)


medium

The validate_url function performs a DNS lookup (socket.gethostbyname), which is a blocking I/O operation. In an async context, this can block the entire event loop, severely impacting application performance under load. As noted in your comments, this should be run in a separate thread to avoid blocking. You can use asyncio.to_thread for this (available in Python 3.9+).

Note: This will require adding import asyncio at the top of the file.

Suggested change
is_valid, error = validate_url(current_url)
is_valid, error = await asyncio.to_thread(validate_url, current_url)
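
For illustration, the pattern in context might look like this; check_url is a hypothetical wrapper, not part of the diff:

    import asyncio
    import logging

    logger = logging.getLogger(__name__)

    async def check_url(url: str) -> bool:
        # validate_url does blocking socket I/O, so run it in a worker
        # thread (Python 3.9+); the event loop stays free meanwhile.
        is_valid, error = await asyncio.to_thread(validate_url, url)
        if not is_valid:
            logger.warning("[SSRF] Blocked %s: %s", url, error)
        return is_valid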
