Graceful handling of invalid HTTP Methods #5881

Closed · jeremyjpj0916 opened this issue May 11, 2020 · 11 comments

@jeremyjpj0916 (Contributor) commented May 11, 2020

Passing an invalid HTTP method produces a 500-level server error.

Summary

/ $ curl -Xfoo localhost:8000/ -vv
*   Trying 127.0.0.1:8000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> foo / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.66.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< Date: Mon, 11 May 2020 20:36:14 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 142
< Connection: close
< X-Response-Latency: 0
<
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
</body>
</html>
* Closing connection 0
/ $

This seems like odd behavior out of NGINX that affects Kong. For an invalid HTTP method I would expect a 400 Bad Request at most; otherwise the request should just be allowed to pass through with no harm.

Steps To Reproduce

  1. Run the curl command shown above against your Kong instance

Additional Details & Logs

  • Kong version 1.4.3
@thibaultcha (Member)

What error logs do you see when Kong produces this response?

@jeremyjpj0916 (Contributor, Author) commented May 11, 2020

At both debug and notice log levels, the webserver error logs produce nothing for this request.

@thibaultcha (Member)

What jumps out at me (without attempting to reproduce locally) is that X-Response-Latency isn't a header that I am aware of Kong setting. I suppose you disabled server_tokens (hence no Server or Via header to determine where the response originated). Have you tried bypassing Kong and sending the request directly to the upstream?
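For example, something along these lines (hypothetical upstream host and port; substitute your service's actual target):

# Hypothetical: replay the same invalid-method request against the upstream
# directly, bypassing Kong, to see which hop produces the 500.
curl -X foo http://upstream.example.internal:8080/ -v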

@jeremyjpj0916 (Contributor, Author) commented May 11, 2020

What jumps out at me (without attempting to reproduce locally) is that X-Response-Latency isn't a header that I am aware of Kong setting.

This is only because we modify constants.lua so that the header names don't include "Kong":
https://github.com/Kong/kong/blob/master/kong/constants.lua#L72
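Roughly, that customization amounts to renaming the header constants in that file. A hypothetical sketch of the kind of edit (not the full file; the actual table in constants.lua is larger and its exact keys vary by Kong version):

-- Hypothetical sketch only: kong/constants.lua defines the response header
-- names Kong emits; here the default "X-Kong-*" names are rebranded so
-- "Kong" never appears to clients.
HEADERS = {
  PROXY_LATENCY    = "X-Proxy-Latency",     -- default: "X-Kong-Proxy-Latency"
  RESPONSE_LATENCY = "X-Response-Latency",  -- default: "X-Kong-Response-Latency"
  -- ...remaining header constants left at their defaults...
}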

This call is actually to the base path /, which would normally return the standard 404 proxy response:

{ "message": "no Route matches with those values"} 

So this is generally a dead-end endpoint. I am only testing it because some user decided to randomly send invalid HTTP methods to the gateway URL with no proxy path on it, and I saw this behavior and thought it was interesting and odd.
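For comparison, a request with a valid method against the same unmatched path gets that router 404 rather than the 500 (sketch; response headers elided):

$ curl -i -X GET localhost:8000/
HTTP/1.1 404 Not Found
...
{"message":"no Route matches with those values"}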

@aaronhmiller commented May 11, 2020

I set up a local install of 1.4.3 and cannot recreate the behavior:

%> curl -Xfoo localhost:8000/ -vv
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8000 (#0)
> foo / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 400 Bad Request
< Date: Mon, 11 May 2020 21:25:53 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
< Connection: close
< Server: kong/1.4.3
< X-Kong-Response-Latency: 1
< 
Bad request
* Closing connection 0

Kong logs:

172.23.0.1 - - [11/May/2020:21:24:08 +0000] "foo / HTTP/1.1" 400 12 "-" "-"
2020/05/11 21:24:08 [warn] 24#0: *905 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 172.23.0.1, server: kong, request: "foo / HTTP/1.1"

It looks as if there is another proxy between the client and Kong throwing the 500?

@jeremyjpj0916 (Contributor, Author) commented May 11, 2020

Ooo, it may be an nginx module I am running alongside Kong. Let me disable that and re-test to see if it somehow plays a role. And no, it would not be a proxy in between; this is me testing from my localhost Kong node. I will confirm shortly whether the nginx module is somehow causing this.

@aaronhmiller

Thanks @jeremyjpj0916.
Assuming things are working as designed and no other module is interfering, I would expect to see Kong log something like this:

2020/05/11 21:32:21 [info] 23#0: *131 client sent invalid method while reading client request line, client: 172.24.0.1, server: kong, request: "foo / HTTP/1.1"
172.24.0.1 - - [11/May/2020:21:32:21 +0000] "foo / HTTP/1.1" 400 12 "-" "-"
2020/05/11 21:32:21 [warn] 23#0: *131 [lua] reports.lua:73: log(): [reports] unknown request scheme: http while logging request, client: 172.24.0.1, server: kong, request: "foo / HTTP/1.1"

@jeremyjpj0916 (Contributor, Author) commented May 11, 2020

I disabled the 3rd-party module and the response to my curl client is still the same 500 error. Taking a closer look at the logs, I do see a first line similar to @aaronhmiller's output:

2020/05/11 21:57:26 [info] 104#0: *13183 client sent invalid method while reading client request line, client: 127.0.0.1, server: kong, request: "foo / HTTP/1.1"

I lack the reports.lua:73 warning line though, likely because I disable Kong's remote reporting.

The error might be in how I am configuring my ENV variables or my NGINX template then... Hmm:

  - env:
            - name: KONG_LOG_LEVEL
              value: debug
            - name: KONG_PROXY_ACCESS_LOG
              value: 'off'
            - name: KONG_ADMIN_ACCESS_LOG
              value: 'off'
            - name: KONG_PROXY_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ANONYMOUS_REPORTS
              value: 'off'
            - name: KONG_PROXY_LISTEN
              value: '0.0.0.0:8000, 0.0.0.0:8443 ssl http2, 0.0.0.0:9443 ssl http2'
            - name: KONG_ADMIN_LISTEN
              value: '0.0.0.0:8001'
            - name: KONG_MEM_CACHE_SIZE
              value: 1024m
            - name: KONG_SSL_CERT
              value: /usr/local/kong/ssl/kongcert.crt
            - name: KONG_SSL_CERT_DER
              value: /usr/local/kong/ssl/kongcertder.der
            - name: KONG_SSL_CERT_KEY
              value: /usr/local/kong/ssl/kongprivatekey.key
            - name: KONG_CLIENT_SSL
              value: 'off'
            - name: KONG_UPSTREAM_KEEPALIVE
              value: '60'
            - name: KONG_HEADERS
              value: latency_tokens
            - name: KONG_REAL_IP_HEADER
              value: X-Forwarded-For
            - name: KONG_REAL_IP_RECURSIVE
              value: 'on'
            - name: KONG_CLIENT_MAX_BODY_SIZE
              value: 25m
            - name: KONG_CLIENT_BODY_BUFFER_SIZE
              value: 20m
            - name: KONG_ERROR_DEFAULT_TYPE
              value: text/plain
            - name: KONG_DATABASE
              value: cassandra
            - name: KONG_PG_SSL
              value: 'off'
            - name: KONG_CASSANDRA_PORT
              value: '9042'
            - name: KONG_CASSANDRA_KEYSPACE
              value: kong_dev
            - name: KONG_CASSANDRA_TIMEOUT
              value: '5000'
            - name: KONG_CASSANDRA_SSL
              value: 'on'
            - name: KONG_CASSANDRA_SSL_VERIFY
              value: 'on'
            - name: KONG_CASSANDRA_USERNAME
              value: XXXXX
            - name: KONG_CASSANDRA_PASSWORD
              value: XXXXX
            - name: KONG_CASSANDRA_CONSISTENCY
              value: LOCAL_QUORUM
            - name: KONG_CASSANDRA_LB_POLICY
              value: RequestDCAwareRoundRobin
            - name: KONG_CASSANDRA_LOCAL_DATACENTER
              value: DC2
            - name: KONG_DB_UPDATE_FREQUENCY
              value: '5'
            - name: KONG_DB_UPDATE_PROPAGATION
              value: '5'
            - name: KONG_DB_CACHE_TTL
              value: '0'
            - name: KONG_DNS_HOSTSFILE
              value: /etc/hosts
            - name: KONG_DNS_STALE_TTL
              value: '4'
            - name: KONG_DNS_NOT_FOUND_TTL
              value: '30'
            - name: KONG_DNS_ERROR_TTL
              value: '1'
            - name: KONG_DNS_NO_SYNC
              value: 'off'
            - name: KONG_LUA_SSL_TRUSTED_CERTIFICATE
              value: /usr/local/kong/ssl/kongcert.pem
            - name: KONG_LUA_SSL_VERIFY_DEPTH
              value: '3'
            - name: KONG_LUA_SOCKET_POOL_SIZE
              value: '30'
            - name: SPLUNK_HOST
              value: gateway-dev.company.com
            - name: KONG_CASSANDRA_CONTACT_POINTS
              value: >-
                server1234,server1234,server1234,server1234,server1234,server1234
            - name: KONG_DNS_ORDER
              value: 'LAST,SRV,A,CNAME'
            - name: KONG_PLUGINS
              value: >-
                kong-kafka-log,stargate-waf-error-log,mtls,stargate-oidc-token-revoke,kong-tx-debugger,kong-plugin-oauth,kong-ldap-auth,kong-error-log,kong-oidc-implicit-token,zipkin,kong-response-size-limiting,request-transformer,kong-service-virtualization,kong-cluster-drain,kong-upstream-jwt,kong-splunk-log,kong-spec-expose,kong-path-based-routing,kong-oidc-multi-idp,correlation-id,oauth2,statsd,jwt,rate-limiting,acl,request-size-limiting,request-termination,cors
            - name: KONG_TRUSTED_IPS
              value: '0.0.0.0/0,::/0'
            - name: KONG_ROUTER_CONSISTENCY
              value: eventual
            - name: KONG_DB_CACHE_WARMUP_ENTITIES
              value: 'services,plugins,consumers'
            - name: KONG_NGINX_HTTP_SSL_PROTOCOLS
              value: TLSv1.2 TLSv1.3
            - name: LUA_PATH
              value: >-
               /usr/local/kong/luarocks/share/lua/5.1/?.lua;;/usr/local/kong/luarocks/share/lua/5.1/?/init.lua;
            - name: KONG_CASSANDRA_REFRESH_FREQUENCY
              value: '0'
            - name: KONG_WAF_AUDIT
              value: 'On'
            - name: KONG_WAF_DEBUG_LEVEL
              value: '0'
            - name: KONG_WAF_MODE
              value: 'Off'
            - name: KONG_WAF_REQUEST_FILE_SIZE_LIMIT
              value: '25000000'
            - name: KONG_WAF_REQUEST_NO_FILE_SIZE_LIMIT
              value: '25000000'
            - name: KONG_WAF_PCRE_MATCH_LIMIT
              value: '10000'
            - name: KONG_WAF_PCRE_MATCH_LIMIT_RECURSION
              value: '10000'
            - name: KONG_WAF_PARANOIA_LEVEL
              value: '1'
            - name: KONG_WAF
              value: 'off'

I don't think any of these fields would do it though... maybe:

            - name: KONG_ERROR_DEFAULT_TYPE
              value: text/plain

My Kong template looks like this. I was playing with named error_page locations, but I will revert that too and test again to see if that is what's causing it:

> if nginx_user then
user ${{NGINX_USER}};
> end
#worker_processes ${{NGINX_WORKER_PROCESSES}};
worker_processes 6;
daemon ${{NGINX_DAEMON}};

env SPLUNK_HOST;
env KONG_SSL_CERT_KEY;
env KONG_SSL_CERT_DER;
pid pids/nginx.pid;
error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
> if nginx_optimizations then
worker_rlimit_nofile ${{WORKER_RLIMIT}};
> end
events {
> if nginx_optimizations then
    worker_connections ${{WORKER_CONNECTIONS}};
    multi_accept on;
    accept_mutex off;
> end
}
http {
  charset UTF-8;
  error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
> if nginx_optimizations then
>-- send_timeout 60s;             # default value
>-- client_body_timeout 60s;      # default value
>-- client_header_timeout 60s;    # default value
> end

#Jeremy and Ross Nginx Performance Tweaks
tcp_nodelay on;			#  Enables or disables the use of the TCP_NODELAY option. The option is enabled only when a connection is transitioned into the keep-alive state
tcp_nopush on; 			#  Enables or disables the use of the TCP_NOPUSH socket option on FreeBSD or the TCP_CORK socket option on Linux.
proxy_buffer_size 32k; 	# Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform. It can be made smaller, however. 
proxy_buffers 10 2m; 		# Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform. 
proxy_busy_buffers_size 4m;	# When buffering of responses from the proxied server is enabled, limits the total size of buffers that can be busy sending a response to the client while the response is not yet fully read. In the meantime, the rest of the buffers can be used for reading the response and, if needed, buffering part of the response to a temporary file. By default, size is limited by the size of two buffers set by the proxy_buffer_size and proxy_buffers directives. 
reset_timedout_connection on;	# Enables or disables resetting timed out connections. The reset is performed as follows. Before closing a socket, the SO_LINGER option is set on it with a timeout value of 0. When the socket is closed, TCP RST is sent to the client, and all memory occupied by this socket is released. This helps avoid keeping an already closed socket with filled buffers in a FIN_WAIT1 state for a long time. It should be noted that timed out keep-alive connections are closed normally. 
keepalive_timeout 120s;		# The first parameter sets a timeout during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. The optional second parameter sets a value in the “Keep-Alive: timeout=time” response header field. Two parameters may differ. The “Keep-Alive: timeout=time” header field is recognized by Mozilla and Konqueror. MSIE closes keep-alive connections by itself in about 60 seconds. 
keepalive_requests 10000;   # The number of subsequent keepalive requests the client can make
http2_idle_timeout 75s;          # Comparable to the keepalive_timeout above.
http2_recv_buffer_size 20m;       # Sets the size of the per-worker input buffer.
http2_body_preread_size 2m;    # Sets the size of the buffer per request in which the request body may be saved before it starts being processed.
          
client_max_body_size ${{CLIENT_MAX_BODY_SIZE}};
proxy_ssl_server_name on;
underscores_in_headers on;

# enable capturing error logs
lua_capture_error_log 50k;

# exclusive oauth shm caches
lua_shared_dict kong_oauth_cache       5m;
lua_shared_dict kong_oauth_cache_miss  2m;
lua_shared_dict kong_oauth_cache_locks 1m;

# exclusive ldap_auth shm caches
lua_shared_dict kong_ldap_auth_cache       10m;
lua_shared_dict kong_ldap_auth_cache_miss  6m;
lua_shared_dict kong_ldap_auth_cache_locks 3m;

# exclusive oidc shm caches
lua_shared_dict kong_oidc_cache      10m;
lua_shared_dict kong_oidc_cache_miss  6m;
lua_shared_dict kong_oidc_cache_locks 3m;
lua_shared_dict kong_ipc_shared_dict 1m;

lua_package_path '${{LUA_PACKAGE_PATH}};;';
lua_package_cpath '${{LUA_PACKAGE_CPATH}};;';
lua_socket_pool_size ${{LUA_SOCKET_POOL_SIZE}};
lua_max_running_timers 4096;
lua_max_pending_timers 16384;
lua_shared_dict kong                5m;
lua_shared_dict kong_db_cache       ${{MEM_CACHE_SIZE}};
> if database == "off" then
lua_shared_dict kong_db_cache_2     ${{MEM_CACHE_SIZE}};
> end
lua_shared_dict kong_db_cache_miss   12m;
> if database == "off" then
lua_shared_dict kong_db_cache_miss_2 12m;
> end
lua_shared_dict kong_locks          8m;
lua_shared_dict kong_process_events 5m;
lua_shared_dict kong_cluster_events 5m;
lua_shared_dict kong_healthchecks   5m;
lua_shared_dict kong_rate_limiting_counters 12m;

> if database == "cassandra" then
lua_shared_dict kong_cassandra      5m;
> end
lua_socket_log_errors off;
> if lua_ssl_trusted_certificate then
lua_ssl_trusted_certificate '${{LUA_SSL_TRUSTED_CERTIFICATE}}';
> end
lua_ssl_verify_depth ${{LUA_SSL_VERIFY_DEPTH}};

# injected nginx_http_* directives
> for _, el in ipairs(nginx_http_directives)  do
$(el.name) $(el.value);
> end

  init_by_lua_block {
      Kong = require 'kong'
      Kong.init()
  }
  init_worker_by_lua_block {
      Kong.init_worker()
  }
> if #proxy_listeners > 0 then
  upstream kong_upstream {
      server 0.0.0.1;
     
      balancer_by_lua_block {
          Kong.balancer()
      }

> if upstream_keepalive > 0 then
    #keepalive_timeout 60s; DEFAULT
    #keepalive_requests 100; DEFAULT
    keepalive_requests 10000;
    keepalive ${{UPSTREAM_KEEPALIVE}} pool_ssl_name=on;
> end
}
  
  server {
      server_name kong;
      modsecurity ${{WAF}};
      modsecurity_rules_file /usr/local/modsec_includes.conf;
      
      #WAF_DEBUG_LEVEL level 0 = off 9 = on
      modsecurity_rules '
        SecRuleEngine ${{WAF_MODE}}
        SecRequestBodyLimit ${{WAF_REQUEST_FILE_SIZE_LIMIT}}
        SecRequestBodyNoFilesLimit ${{WAF_REQUEST_NO_FILE_SIZE_LIMIT}}
        SecPcreMatchLimit ${{WAF_PCRE_MATCH_LIMIT}}
        SecPcreMatchLimitRecursion ${{WAF_PCRE_MATCH_LIMIT_RECURSION}}
        SecDebugLogLevel ${{WAF_DEBUG_LEVEL}}
        SecAuditEngine ${{WAF_AUDIT}}
        SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=${{WAF_PARANOIA_LEVEL}}"
      ';
      
> for i = 1, #proxy_listeners do
      listen $(proxy_listeners[i].listener) deferred reuseport;
> end

      # Mutual TLS logic, check if server port is 9443
      ssl_verify_client optional;
      ssl_client_certificate /usr/local/kong/stargate-mutual-trust.ca;
      ssl_crl /usr/local/kong/stargate-mutual-trust.crl.pem;
      ssl_verify_depth 3;

      error_page 400 404 408 411 412 413 414 417 494 @kong_error_handler;
      error_page 500 502 503 504 @kong_error_handler;
      access_log ${{PROXY_ACCESS_LOG}};
      error_log ${{PROXY_ERROR_LOG}} ${{LOG_LEVEL}};
      client_body_buffer_size ${{CLIENT_BODY_BUFFER_SIZE}};
      client_header_buffer_size 8k;
      large_client_header_buffers 2 20k;
      
> if proxy_ssl_enabled then
    ssl_certificate ${{SSL_CERT}};
    ssl_certificate_key ${{SSL_CERT_KEY}};
    ssl_certificate_by_lua_block {
        Kong.ssl_certificate()
    }
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets on;
    ssl_session_timeout 10m;
    ssl_prefer_server_ciphers on;
    ssl_ciphers ${{SSL_CIPHERS}};
> end
> if client_ssl then
    proxy_ssl_certificate ${{CLIENT_SSL_CERT}};
    proxy_ssl_certificate_key ${{CLIENT_SSL_CERT_KEY}};
> end
    real_ip_header     ${{REAL_IP_HEADER}};
    real_ip_recursive  ${{REAL_IP_RECURSIVE}};
> for i = 1, #trusted_ips do
    set_real_ip_from   $(trusted_ips[i]);
> end

    # injected nginx_proxy_* directives
> for _, el in ipairs(nginx_proxy_directives)  do
    $(el.name) $(el.value);
> end
    rewrite_by_lua_block {
        Kong.rewrite()
        -- Check if Client was TLS verified
        local server_port = ngx.var.server_port
        if server_port == "9443" then
          local mtls_status = ngx.var.ssl_client_verify
          if mtls_status ~= "SUCCESS" then
            return kong.response.exit(401, { message = "Unauthorized" })
          end
        end
    }
    access_by_lua_block {
        Kong.access()
    }
    header_filter_by_lua_block {
        Kong.header_filter()
        ngx.header["Server"] = nil
        ngx.header["Via"] = nil
    }
    body_filter_by_lua_block {
        Kong.body_filter()
    }
    log_by_lua_block {
        Kong.log()
    }
    location / {
        default_type                     '';
        set $ctx_ref                     '';
        set $upstream_te                 '';
        set $upstream_host               '';
        set $upstream_upgrade            '';
        set $upstream_connection         '';
        set $upstream_scheme             '';
        set $upstream_uri                '';
        set $upstream_x_forwarded_for    '';
        set $upstream_x_forwarded_proto  '';
        set $upstream_x_forwarded_host   '';
        set $upstream_x_forwarded_port   '';
        set $kong_proxy_mode             'http';
        proxy_http_version 1.1;
        proxy_set_header   TE                $upstream_te;
        proxy_set_header   Host              $upstream_host;
        proxy_set_header   Upgrade           $upstream_upgrade;
        proxy_set_header   Connection        $upstream_connection;
        proxy_set_header   X-Forwarded-For   $upstream_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $upstream_x_forwarded_proto;
        proxy_set_header   X-Forwarded-Host  $upstream_x_forwarded_host;
        proxy_set_header   X-Forwarded-Port  $upstream_x_forwarded_port;
        proxy_set_header   X-Real-IP         $remote_addr;
        proxy_pass_header  Server;
        proxy_pass_header  Date;
        proxy_ssl_name     $upstream_host;
        
        # Added upstream certificate verification
  #      proxy_ssl_trusted_certificate '${{LUA_SSL_TRUSTED_CERTIFICATE}}';
  #      proxy_ssl_verify on;
  #      proxy_ssl_verify_depth 3;
        
        proxy_pass         $upstream_scheme://kong_upstream$upstream_uri;
    }

    location @grpc {
        internal;
        set $kong_proxy_mode  'grpc';
        grpc_set_header    Host              $upstream_host;
        grpc_set_header    X-Forwarded-For   $upstream_x_forwarded_for;
        grpc_set_header    X-Forwarded-Proto $upstream_x_forwarded_proto;
        grpc_set_header    X-Forwarded-Host  $upstream_x_forwarded_host;
        grpc_set_header    X-Forwarded-Port  $upstream_x_forwarded_port;
        grpc_set_header    X-Real-IP         $remote_addr;
        grpc_pass grpc://kong_upstream;
    }
    
    location @grpcs {
        internal;
        set $kong_proxy_mode  'grpc';
        grpc_set_header    Host              $upstream_host;
        grpc_set_header    X-Forwarded-For   $upstream_x_forwarded_for;
        grpc_set_header    X-Forwarded-Proto $upstream_x_forwarded_proto;
        grpc_set_header    X-Forwarded-Host  $upstream_x_forwarded_host;
        grpc_set_header    X-Forwarded-Port  $upstream_x_forwarded_port;
        grpc_set_header    X-Real-IP         $remote_addr;
        grpc_pass grpcs://kong_upstream;
    }
    
    location @kong_error_handler {
        internal;
        uninitialized_variable_warn off;
        rewrite_by_lua_block {;}
        access_by_lua_block {;}
        content_by_lua_block {
            Kong.handle_error()
        }
    }
}
> end

> if #admin_listeners > 0 then
 server {
    server_name kong_admin;
> for i = 1, #admin_listeners do
      listen $(admin_listeners[i].listener) deferred reuseport;
> end
      access_log ${{ADMIN_ACCESS_LOG}};
      error_log ${{ADMIN_ERROR_LOG}} ${{LOG_LEVEL}};
      client_max_body_size 10m;
      client_body_buffer_size 10m;
      client_header_buffer_size 8k;
      large_client_header_buffers 2 16k;
      
> if admin_ssl_enabled then
      ssl_certificate ${{ADMIN_SSL_CERT}};
      ssl_certificate_key ${{ADMIN_SSL_CERT_KEY}};
      ssl_session_cache shared:SSL:10m;
      ssl_session_timeout 10m;
      ssl_prefer_server_ciphers on;
      ssl_ciphers ${{SSL_CIPHERS}};
> end

    # injected nginx_admin_* directives
> for _, el in ipairs(nginx_admin_directives) do
    $(el.name) $(el.value);
> end

    location / {
        default_type application/json;
        content_by_lua_block {
            Kong.admin_content()
        }
        header_filter_by_lua_block {
            Kong.admin_header_filter()
        }
      }
      location /nginx_status {
          access_log off;
          stub_status;
      }
      location /robots.txt {
          return 200 'User-agent: *\nDisallow: /';
      }
      location /gc {
        content_by_lua_block {
            ngx.say(string.format("Worker %d: GC size: %.3f KB", ngx.var.pid, collectgarbage("count")))
        }
      }
 }
> end
}

@jeremyjpj0916 (Contributor, Author) commented May 11, 2020

AH HA, it IS the named error_page locations. I removed that tweak and saw what I would expect:

/usr/local/kong $ curl -Xfoo localhost:8000/ -vv

*   Trying ::1:8000...
* TCP_NODELAY set
* connect to ::1 port 8000 failed: Connection refused
*   Trying 127.0.0.1:8000...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> foo / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.66.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
< Date: Mon, 11 May 2020 22:34:32 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
< Connection: close
< X-Response-Latency: 1
<
Bad request
* Closing connection 0
/usr/local/kong $

I may still continue using them, because this is an odd edge case and named error_page locations supposedly help preserve the method and URI on proxied calls, which helps with my 3rd-party module. Closing this, as it is not actually a problem with default Kong but more of an NGINX quirk once you start shaking things up.

@thibaultcha I thought you might find this interesting though. Even though named locations produce this oddity for invalid methods, in all other cases they look like a way to natively preserve the HTTP method + URI across error_page internal redirects in nginx without having to do it in Kong (which is already implemented now, but this might be a way to simplify in the future, minus the one drawback I just discovered):
owasp-modsecurity/ModSecurity-nginx#182 (comment)
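For reference, the difference comes down to the error_page target style, roughly like this (a sketch, not Kong's actual template wording):

# Sketch: the named-location variant I switched to. Per this thread, an invalid
# request line then surfaces as nginx's bare 500 page instead of Kong's 400.
error_page 400 404 408 411 412 413 414 417 494 @kong_error_handler;
location @kong_error_handler {
    internal;
    content_by_lua_block { Kong.handle_error() }
}

# Sketch: the stock-style path-based internal location, with which the same
# invalid-method request comes back as Kong's plain "Bad request" 400.
error_page 400 404 408 411 412 413 414 417 494 /kong_error_handler;
location = /kong_error_handler {
    internal;
    content_by_lua_block { Kong.handle_error() }
}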

@thibaultcha (Member)

@jeremyjpj0916 We do not use named locations for error_page for reasons elaborated in e709c95.

@jeremyjpj0916 (Contributor, Author)

Ah, appreciate the link back; it seems named locations have some other nuances (414 URI Too Long). NGINX is the gift that keeps on giving with surprising nuances, hah.
