
Transfer-Encoding: chunked causing denial of service #107

Open
dallyger opened this issue Oct 8, 2024 · 15 comments

Comments

@dallyger
Contributor

dallyger commented Oct 8, 2024

Summary:

A request with the header "Transfer-Encoding: chunked" causes Caddy to
spawn a php_fastcgi process that hangs forever.

After upgrading to Shopware 6.6 (using shopware/docker), our shop would
time out every couple of hours/days due to bot traffic until manually
restarted. Our old setup with nginx did not have that issue.

Temporary solution

I've added the following entry to our Varnish config to block such requests:

sub vcl_recv {
    if (req.http.Transfer-Encoding == "chunked") {
        return(synth(411, "Length Required"));
    }

    # ...
}

Another option would be to set request_terminate_timeout = 120s in
php.conf, so hanging requests are automatically terminated after two minutes.
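
As a sketch, the timeout option is a single FPM pool directive; the exact file it belongs in (php.conf here, often www.conf elsewhere) depends on the image, so treat the location as an assumption:

```ini
; Hypothetical FPM pool fragment (e.g. php.conf in the image).
; Terminate any single request after two minutes, so workers stuck
; reading a never-ending chunked body are reclaimed instead of
; exhausting pm.max_children.
request_terminate_timeout = 120s
```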

How to reproduce:

Set up a Shopware instance:

composer create-project shopware/production:6.6.6.1 .
composer req shopware/docker

Start a Docker container from an image built from that project by adding
the following to compose.yaml:

services:
  web:
    build:
      context: .
      dockerfile: docker/Dockerfile
    depends_on:
      - database
    env_file:
      - .env
    environment:
      DATABASE_URL: mysql://${MYSQL_USER:-shopware}:${MYSQL_PASSWORD:-!ChangeMe!}@database/${MYSQL_DATABASE:-shopware}
    ports:
      - 8000:8000
    volumes:
      - .:/var/www/html

And finally, send a broken request using curl:

curl --insecure -X POST http://127.0.0.1:8000/ \
    --header "Content-Type: application/json" \
    --header "Transfer-Encoding: chunked" \
    --http1.1 \
    --data '{}' -vvv

The request hangs and no body is returned:

Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Type: application/json
> Transfer-Encoding: chunked
>

Run the following command to observe an additional socket that stays open forever:

docker compose exec web netstat -alWx

Repeat this 5 times (the default value of pm.max_children) and the shop
becomes unresponsive: any further requests hang forever too, even if
they're not chunked. Or, if behind a reverse proxy, a 502 timeout occurs.
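
The hang comes from the backend waiting for a chunked body that never terminates. A minimal Python sketch of chunked decoding illustrates the mechanism (read_chunked is an illustrative helper, not code from Caddy or PHP-FPM):

```python
import io

def read_chunked(stream):
    """Decode an HTTP/1.1 chunked body from a byte stream.

    Returns (body, complete): complete is False when the stream ends
    before the terminating zero-size chunk ("0\\r\\n\\r\\n") arrives.
    """
    body = b""
    while True:
        size_line = stream.readline()
        if not size_line:
            # Stream ended without the terminator. On a BytesIO this
            # returns immediately; on a real socket the server would
            # block here, waiting for bytes that never come.
            return body, False
        size = int(size_line.strip(), 16)
        if size == 0:
            stream.readline()  # consume the final CRLF
            return body, True
        body += stream.read(size)
        stream.readline()  # consume the CRLF after the chunk data

# A well-formed chunked body: one 2-byte chunk plus the terminator.
print(read_chunked(io.BytesIO(b"2\r\n{}\r\n0\r\n\r\n")))  # (b'{}', True)

# The same body without the terminator, as sent by a misbehaving
# client that keeps the connection open.
print(read_chunked(io.BytesIO(b"2\r\n{}\r\n")))  # (b'{}', False)
```

With a socket instead of a BytesIO, the second case is exactly the never-returning read that ties up an FPM worker.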

See also

Some related issues I've found. However, I could not get the solutions
mentioned there to work yet; the request_buffers 4k setting did not work for me.

@shyim
Member

shyim commented Oct 8, 2024

Do you know if that also happens with Nginx?
Setting a request_terminate_timeout generally makes sense, I think.

@dallyger
Contributor Author

dallyger commented Oct 8, 2024

@shyim Previously we've been using nginx on our Shopware 6.4 setup without any issues. Here is the config we've used: nginx.conf

@shyim
Member

shyim commented Oct 11, 2024

php_fastcgi localhost:9000 {
    request_buffers 4k
    response_buffers 4k
}

would solve it, right? 🤔 I don't know tbh if that is too low or too high 🙈

@dallyger
Contributor Author

I found that too, but could not get that to work.

If I understood it correctly, it would fail on requests larger than 4k.
But that may be okay, because currently it always hangs anyway.
At least it would be an improvement, even if it's not a full fix?

Nginx would solve that by buffering the request to a file, which Caddy does not support. That's what someone wrote on some issue in those threads; I think it was an issue I found over at Nextcloud, but I can't remember where I saw it anymore.

@shyim
Member

shyim commented Oct 11, 2024

I'll try some configurations on Monday and also add an nginx variant. I think it's better maintained than Caddy 😅

@shyim
Member

shyim commented Oct 15, 2024

Buffers are not working, and you also cannot forbid Transfer-Encoding requests in Caddy: caddyserver/caddy#6628

I would suggest you switch to the nginx image for now: ghcr.io/shopware/docker-base:8.3-nginx. It's built in the same way as the Caddy one.

I will add this to the README.
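
If you use the prebuilt base image directly, the switch is a small change to the compose.yaml shown earlier (a sketch; it assumes you can replace the local docker/Dockerfile build with the published image, which may not hold for setups that extend the Dockerfile):

```yaml
services:
  web:
    # instead of: build: { context: ., dockerfile: docker/Dockerfile }
    image: ghcr.io/shopware/docker-base:8.3-nginx
```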

@shyim shyim pinned this issue Oct 15, 2024
@mholt

mholt commented Oct 15, 2024

@shyim

I think it's better maintained than caddy 😅

caddyserver/caddy#6629

@shyim
Member

shyim commented Oct 16, 2024

@shyim

I think it's better maintained than caddy 😅

caddyserver/caddy#6629

caddyserver/caddy#5420 (comment)

@panakour
Contributor

panakour commented Oct 17, 2024

@shyim why don’t you consider dropping Caddy entirely and using the FrankenPHP variant, which doesn’t require FPM and is built on top of Caddy?

@WeidiDeng

@dallyger Can you modify the Dockerfile to test the patch here to see if it's fixed?

FROM caddy:2.8.4-builder AS builder

RUN xcaddy build \
    fastcgi-fix

FROM <your caddy dockerfile>

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

You can also deny requests with chunked encoding directly now.

@dallyger
Contributor Author

@WeidiDeng can confirm, denial of service is no longer possible after applying that fix.

However, the malicious requests will get a 502 response now.
For my use cases this is fine, but maybe not for everyone?
Is your code snippet all I have to do to apply the patch? Or am I missing some parts?

$ curl --insecure -X POST http://127.0.0.1:8000/ --header "Content-Type: application/json" --header "Transfer-Encoding: chunked" --http1.1 --data '{}' -vvv
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 127.0.0.1:8000...
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> POST / HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: curl/7.81.0
> Accept: */*
> Content-Type: application/json
> Transfer-Encoding: chunked
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 502 Bad Gateway
< Server: Caddy
< Date: Thu, 17 Oct 2024 09:53:18 GMT
< Content-Length: 0
<
* Connection #0 to host 127.0.0.1 left intact

@WeidiDeng

Is this the Caddyfile in use?

Can you enable debug logging? Add

{
    debug
}

to the top of the Caddyfile and post the resulting output when the 502 is encountered.

@dallyger
Contributor Author

Those are all the logs I get when sending a single request via curl.

web-1  | {"level":"debug","ts":1729164961.601515,"logger":"http.handlers.rewrite","msg":"rewrote request","request":{"remote_ip":"172.20.0.1","remote_port":"48440","client_ip":"172.20.0.1","proto":"HTTP/1.1","method":"POST","host":"127.0.0.1:8000","uri":"/index.php","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"],"Content-Type":["application/json"]}},"method":"POST","uri":"/index.php"}
web-1  | {"level":"debug","ts":1729164961.6016207,"logger":"http.handlers.reverse_proxy","msg":"selected upstream","dial":"/tmp/php-fpm.sock","total_upstreams":1}
web-1  | {"level":"debug","ts":1729164961.6017983,"logger":"http.reverse_proxy.transport.fastcgi","msg":"roundtrip","dial":"/tmp/php-fpm.sock","env":{"GATEWAY_INTERFACE":"CGI/1.1","REMOTE_ADDR":"172.20.0.1","REQUEST_METHOD":"POST","SERVER_PORT":"8000","HTTP_ACCEPT":"*/*","AUTH_TYPE":"","SERVER_PROTOCOL":"HTTP/1.1","DOCUMENT_ROOT":"/var/www/html/public","SCRIPT_NAME":"/index.php","SERVER_NAME":"127.0.0.1","CONTENT_LENGTH":"","SCRIPT_FILENAME":"/var/www/html/public/index.php","HTTP_CONTENT_TYPE":"application/json","REMOTE_IDENT":"","REMOTE_PORT":"48440","HTTP_X_FORWARDED_PROTO":"http","QUERY_STRING":"","REMOTE_HOST":"172.20.0.1","REQUEST_SCHEME":"http","HTTP_HOST":"127.0.0.1:8000","HTTP_USER_AGENT":"curl/7.81.0","PATH_INFO":"","SERVER_SOFTWARE":"Caddy/v2.9.0-beta.2.0.20241017083645-d26cd24a116e","HTTP_X_FORWARDED_HOST":"127.0.0.1:8000","REMOTE_USER":"","REQUEST_URI":"/","DOCUMENT_URI":"/index.php","HTTP_X_FORWARDED_FOR":"172.20.0.1","CONTENT_TYPE":"application/json"},"request":{"remote_ip":"172.20.0.1","remote_port":"48440","client_ip":"172.20.0.1","proto":"HTTP/1.1","method":"POST","host":"127.0.0.1:8000","uri":"/index.php","headers":{"Content-Type":["application/json"],"X-Forwarded-For":["172.20.0.1"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["127.0.0.1:8000"],"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}}}
web-1  | {"level":"debug","ts":1729164961.6022718,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"unix//tmp/php-fpm.sock","duration":0.000598788,"request":{"remote_ip":"172.20.0.1","remote_port":"48440","client_ip":"172.20.0.1","proto":"HTTP/1.1","method":"POST","host":"127.0.0.1:8000","uri":"/index.php","headers":{"Content-Type":["application/json"],"X-Forwarded-For":["172.20.0.1"],"X-Forwarded-Proto":["http"],"X-Forwarded-Host":["127.0.0.1:8000"],"User-Agent":["curl/7.81.0"],"Accept":["*/*"]}},"error":"http: invalid Read on closed Body"}
web-1  | {"level":"error","ts":1729164961.602404,"logger":"http.log.error","msg":"http: invalid Read on closed Body","request":{"remote_ip":"172.20.0.1","remote_port":"48440","client_ip":"172.20.0.1","proto":"HTTP/1.1","method":"POST","host":"127.0.0.1:8000","uri":"/","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"],"Content-Type":["application/json"]}},"duration":0.001253506,"status":502,"err_id":"ktcqv73tj","err_trace":"reverseproxy.statusError (reverseproxy.go:1332)"}
web-1  | {"level":"error","ts":1729164961.6024334,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"172.20.0.1","remote_port":"48440","client_ip":"172.20.0.1","proto":"HTTP/1.1","method":"POST","host":"127.0.0.1:8000","uri":"/","headers":{"User-Agent":["curl/7.81.0"],"Accept":["*/*"],"Content-Type":["application/json"]}},"bytes_read":2,"user_id":"","duration":0.001253506,"size":0,"status":502,"resp_headers":{"Server":["Caddy"]}}

@WeidiDeng

@dallyger I think I know what the problem is. The request buffering part is more complicated than I initially thought.

I created another patch, fastcgi-cl-header, that responds 411 by default if chunked encoding is in use. Request buffering won't be enabled and isn't fixed in this branch, as it lives in another one.

With both of the above-mentioned patches applied, you can handle chunked-encoding requests up to the specified size; requests larger than that will still get a 411 response.
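
Presumably, with a Caddy binary containing both branches, the Caddyfile would look like the earlier suggestion (the 4k value and the localhost:9000 upstream are taken from the snippet above; whether additional directives are needed to activate the second patch is not stated in this thread):

```caddyfile
php_fastcgi localhost:9000 {
    # buffer up to 4k of a chunked body; larger requests get a 411
    request_buffers 4k
    response_buffers 4k
}
```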

@dallyger
Contributor Author

@WeidiDeng can confirm, if I use RUN xcaddy build fastcgi-cl-header, I get a 411 instead. But I don't know how to apply both patches like you mentioned.
