diff --git a/docs/config.md b/docs/config.md
index 7c301b5f..aedcc76b 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -38,13 +38,73 @@ The following environment variables can be used to configure the *Docker Flow Pr

 ## Debug Format

-The format used for logging in debug mode is as follows.
+If debugging is enabled, *Docker Flow Proxy* will log HTTP and TCP requests.
+
+### HTTP Requests Debug Format
+
+An example of a debug log entry produced by an HTTP request is as follows.
+
+```
+HAPRoxy: 10.255.0.3:52662 [10/Mar/2017:13:18:47.759] services go-demo_main-be8080/go-demo_main 10/0/30/69/109 200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+```
+
+The format used for logging HTTP requests when the proxy is running in debug mode is as follows.
+
+|Field|Format |Example |
+|-----|--------------------------------------------------------------------------------------------------------|--------------------------|
+|1 |Static text `HAProxy` indicating that the log entry comes directly from the proxy. |HAProxy |
+|2 |Client IP and port. When used through Swarm networking, the IP and the port are of the *Ingress* network.|10.255.0.3:52662 |
+|3 |Date and time when the request was accepted. |\[10/Mar/2017:13:18:47.759\]|
+|4 |The name of the frontend. |services |
+|5 |Backend and server name. When used through Swarm networking, the server name is the name of the destination service.|go-demo_main-be8080/go-demo_main|
+|6 |Total time in milliseconds spent waiting for a full HTTP request from the client (not counting body) after the first byte was received. It can be "-1" if the connection was aborted before a complete request could be received or a bad request was received. It should always be very small because a request generally fits in one single packet. Large times here generally indicate network issues between the client and haproxy or requests being typed by hand.|10|
+|7 |Total time in milliseconds spent waiting in the various queues. It can be "-1" if the connection was aborted before reaching the queue.|0|
+|8 |Total time in milliseconds spent waiting for the connection to establish to the final server, including retries. It can be "-1" if the request was aborted before a connection could be established.|30|
+|9 |Total time in milliseconds spent waiting for the server to send a full HTTP response, not counting data. It can be "-1" if the request was aborted before a complete response could be received. It generally matches the server's processing time for the request, though it may be altered by the amount of data sent by the client to the server. Large times here on "GET" requests generally indicate an overloaded server.|69|
+|10 |The time the request remained active in haproxy, which is the total time in milliseconds elapsed between the moment the first byte of the request was received and the moment the last byte of the response was sent. It covers all possible processing except the handshake and idle time.|109|
+|11 |HTTP status code returned to the client. This status is generally set by the server, but it might also be set by the proxy when the server cannot be reached or when its response is blocked by haproxy.|200|
+|12 |The total number of bytes transmitted to the client when the log is emitted. This does include HTTP headers.|159 |
+|13 |An optional "name=value" entry indicating that the client had this cookie in the request. The field is a single dash ('-') when the option is not set. Only one cookie may be captured; it is generally used to track session ID exchanges between a client and a server to detect session crossing between clients due to application bugs.|-|
+|14 |An optional "name=value" entry indicating that the server has returned a cookie with its response. The field is a single dash ('-') when the option is not set. Only one cookie may be captured; it is generally used to track session ID exchanges between a client and a server to detect session crossing between clients due to application bugs.|-|
+|15 |The condition the session was in when the session ended. This indicates the session state, which side caused the end of session to happen, for what reason (timeout, error, ...), just like in TCP logs, and information about persistence operations on cookies in the last two characters. The normal flags should begin with "--", indicating the session was closed by either end with no data remaining in buffers.|----|
+|16 |Total number of concurrent connections on the process when the session was logged. It is useful to detect when some per-process system limits have been reached. For instance, if actconn is close to 512 or 1024 when multiple connection errors occur, chances are high that the system limits the process to use a maximum of 1024 file descriptors and that all of them are used.|1|
+|17 |The total number of concurrent connections on the frontend when the session was logged. It is useful to estimate the amount of resource required to sustain high loads, and to detect when the frontend's "maxconn" has been reached. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack.|1|
+|18 |The total number of concurrent connections handled by the backend when the session was logged. It includes the total number of concurrent connections active on servers as well as the number of connections pending in queues. It is useful to estimate the amount of additional servers needed to support high loads for a given application. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack.|0|
+|19 |The total number of concurrent connections still active on the server when the session was logged. It can never exceed the server's configured "maxconn" parameter. If this value is very often close or equal to the server's "maxconn", it means that traffic regulation is involved a lot, meaning that either the server's maxconn value is too low, or that there aren't enough servers to process the load with an optimal response time. When only one of the server's "srv_conn" is high, it usually means that this server has some trouble causing the requests to take longer to be processed than on other servers.|0|
+|20 |The number of connection retries experienced by this session when trying to connect to the server. It must normally be zero, unless a server is being stopped at the same moment the connection was attempted. Frequent retries generally indicate either a network problem between haproxy and the server, or a misconfigured system backlog on the server preventing new connections from being queued. This field may optionally be prefixed with a '+' sign, indicating that the session has experienced a redispatch after the maximal retry count has been reached on the initial server. In this case, the server name appearing in the log is the one the connection was redispatched to, and not the first one, though both may sometimes be the same in case of hashing for instance. So as a general rule of thumb, when a '+' is present in front of the retry count, this count should not be attributed to the logged server.|0|
+|21 |The total number of requests which were processed before this one in the server queue. It is zero when the request has not gone through the server queue. It makes it possible to estimate the approximate server's response time by dividing the time spent in queue by the number of requests in the queue. It is worth noting that if a session experiences a redispatch and passes through two server queues, their positions will be cumulated. A request should not pass through both the server queue and the backend queue unless a redispatch occurs.|0|
+|22 |The total number of requests which were processed before this one in the backend's global queue. It is zero when the request has not gone through the global queue. It makes it possible to estimate the average queue length, which easily translates into a number of missing servers when divided by a server's "maxconn" parameter. It is worth noting that if a session experiences a redispatch, it may pass twice in the backend's queue, and then both positions will be cumulated. A request should not pass through both the server queue and the backend queue unless a redispatch occurs.|0|
+|23 |The complete HTTP request line, including the method, request and HTTP version string. Non-printable characters are encoded. This field might be truncated if the request is huge and does not fit in the standard syslog buffer (1024 characters).|"GET /demo/random-error HTTP/1.1"|
+
+### TCP Requests Debug Format
+
+An example of a debug log entry produced by a TCP request is as follows.

 ```
-%ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r
+HAPRoxy: 10.255.0.3:55569 [10/Mar/2017:16:15:40.806] tcpFE_6379 redis_main-be6379/redis_main 0/0/5007 12 -- 0/0/0/0/0 0/0
 ```

-Please consult [Custom log format](https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.2.4) from the HAProxy documentation for the info about each field.
+The format used for logging TCP requests when the proxy is running in debug mode is as follows.
+
+|Field|Format |Example |
+|-----|--------------------------------------------------------------------------------------------------------|--------------------------|
+|1 |Static text `HAProxy` indicating that the log entry comes directly from the proxy. |HAProxy |
+|2 |Client IP and port. When used through Swarm networking, the IP and the port are of the *Ingress* network.|10.255.0.3:55569 |
+|3 |Date and time when the request was accepted. |\[10/Mar/2017:16:15:40.806\]|
+|4 |The name of the frontend. |tcpFE_6379 |
+|5 |Backend and server name. When used through Swarm networking, the server name is the name of the destination service.|redis_main-be6379/redis_main|
+|6 |Total time in milliseconds spent waiting in the various queues. It can be "-1" if the connection was aborted before reaching the queue.|0|
+|7 |Total time in milliseconds spent waiting for the connection to establish to the final server, including retries. It can be "-1" if the request was aborted before a connection could be established.|0|
+|8 |Total session duration in milliseconds, elapsed between the moment the proxy accepted the connection and the moment the connection was closed. It covers all possible processing.|5007|
+|9 |Total number of bytes transmitted from the server to the client when the log is emitted. |12|
+|10 |The condition the session was in when the session ended. This indicates the session state and which side caused the end of session to happen, for what reason (timeout, error, ...). The normal flags should be "--", indicating the session was closed by either end with no data remaining in buffers.|--|
+|11 |Total number of concurrent connections on the process when the session was logged. It is useful to detect when some per-process system limits have been reached. For instance, if actconn is close to 512 or 1024 when multiple connection errors occur, chances are high that the system limits the process to use a maximum of 1024 file descriptors and that all of them are used.|0|
+|12 |The total number of concurrent connections on the frontend when the session was logged. It is useful to estimate the amount of resource required to sustain high loads, and to detect when the frontend's "maxconn" has been reached. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack.|0|
+|13 |The total number of concurrent connections handled by the backend when the session was logged. It includes the total number of concurrent connections active on servers as well as the number of connections pending in queues. It is useful to estimate the amount of additional servers needed to support high loads for a given application. Most often when this value increases by huge jumps, it is because there is congestion on the backend servers, but sometimes it can be caused by a denial of service attack.|0|
+|14 |The total number of concurrent connections still active on the server when the session was logged. It can never exceed the server's configured "maxconn" parameter. If this value is very often close or equal to the server's "maxconn", it means that traffic regulation is involved a lot, meaning that either the server's maxconn value is too low, or that there aren't enough servers to process the load with an optimal response time. When only one of the server's "srv_conn" is high, it usually means that this server has some trouble causing the requests to take longer to be processed than on other servers.|0|
+|15 |The number of connection retries experienced by this session when trying to connect to the server. It must normally be zero, unless a server is being stopped at the same moment the connection was attempted. Frequent retries generally indicate either a network problem between haproxy and the server, or a misconfigured system backlog on the server preventing new connections from being queued. This field may optionally be prefixed with a '+' sign, indicating that the session has experienced a redispatch after the maximal retry count has been reached on the initial server. In this case, the server name appearing in the log is the one the connection was redispatched to, and not the first one, though both may sometimes be the same in case of hashing for instance. So as a general rule of thumb, when a '+' is present in front of the retry count, this count should not be attributed to the logged server.|0|
+|16 |The total number of requests which were processed before this one in the server queue. It is zero when the request has not gone through the server queue. It makes it possible to estimate the approximate server's response time by dividing the time spent in queue by the number of requests in the queue. It is worth noting that if a session experiences a redispatch and passes through two server queues, their positions will be cumulated. A request should not pass through both the server queue and the backend queue unless a redispatch occurs.|0|
+|17 |The total number of requests which were processed before this one in the backend's global queue. It is zero when the request has not gone through the global queue. It makes it possible to estimate the average queue length, which easily translates into a number of missing servers when divided by a server's "maxconn" parameter. It is worth noting that if a session experiences a redispatch, it may pass twice in the backend's queue, and then both positions will be cumulated. A request should not pass through both the server queue and the backend queue unless a redispatch occurs.|0|

 ## Secrets

diff --git a/docs/debugging.md b/docs/debugging.md
index 68ecfc18..69f777aa 100644
--- a/docs/debugging.md
+++ b/docs/debugging.md
@@ -75,7 +75,7 @@ docker service logs proxy_proxy

 We can see log entries from the requests sent by `swarm-listener`, but there is no trace of the two requests we made. We need to enable debugging.

-## Logging With The Debug Mode
+## Logging HTTP Requests With The Debug Mode

 By default, debugging is disabled for a reason. It slows down the proxy. While that might not be noticeable in this demo, when working with thousands of requests per second, debugging can prove to be a bottleneck.

@@ -118,8 +118,8 @@ Please go back to the other terminal and observe the logs.
 The relevant part of the output is as follows.

 ```
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 150 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/hello HTTP/1.1"
-HAPRoxy: services services/<NOSRV> -1/-1/-1/-1/0 503 1271 - - SC-- 0/0/0/0/0 0/0 {-,"",""} "GET /this/endpoint/does/not/exist HTTP/1.1"
+HAPRoxy: 10.255.0.3:52639 [10/Mar/2017:13:18:00.780] services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 150 - - ---- 1/1/0/0/0 0/0 "GET /dem
+HAPRoxy: 10.255.0.3:52647 [10/Mar/2017:13:18:00.995] services services/<NOSRV> -1/-1/-1/-1/0 503 1271 - - SC-- 0/0/0/0/0 0/0 "GET /this/endpoint/
 ```

 As you can see, both requests were recorded.

@@ -135,35 +135,85 @@ do
 done
 ```

-The output with only relevant parts is as follows.
+The output, stripped of the fields preceding the status code, is as follows.

 ```
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 500 196 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 500 196 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/0/0 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/1/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 500 196 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
-HAPRoxy: services go-demo_main-be8080/go-demo_main 0/0/0/1/1 200 159 - - ---- 1/1/0/0/0 0/0 {-,"",""} "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/1/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/1/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+500 196 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+500 196 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+500 196 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/0/0 0/0 "GET /demo/random-error HTTP/1.1"
+200 159 - - ---- 1/1/0/1/0 0/0 "GET /demo/random-error HTTP/1.1"
 ```

 Approximately, one out of ten responses returned status code `500`.

 Logs contain quite a lot of other useful information. I suggest you consult [Debug Format](#debug-format) for a complete description of the output.

+What about debugging TCP requests?
+
+## Logging TCP Requests With The Debug Mode
+
+*Docker Flow Proxy* supports debugging of TCP requests as well.
+
+Let's take a look at an example.
+
+We'll deploy [Redis](https://github.com/vfarcic/docker-flow-stacks/blob/master/db/redis-df-proxy.yml) from the [vfarcic/docker-flow-stacks repository](https://github.com/vfarcic/docker-flow-stacks).
+
+```bash
+curl -o redis.yml \
+    https://raw.githubusercontent.com/vfarcic/docker-flow-stacks/master/db/redis-df-proxy.yml
+
+docker stack deploy -c redis.yml redis
+
+docker service update --publish-add 6379:6379 proxy_proxy
+```
+
+We deployed the `redis` stack and opened port `6379` on the proxy.
+
+!!! tip
+    Normally, you would not need to route DB requests through the proxy unless they need to be accessed from outside Swarm. Your services should be able to connect to your DBs through the Docker overlay network. In this case, we're adding Redis to the proxy only as a demonstration of TCP debugging.
+
+It might take a while until the `redis` stack is deployed. Please use `docker stack ps redis` to confirm that it is running.
+
+We'll use the [redis_check.sh](https://github.com/vfarcic/docker-flow-proxy/blob/master/integration_tests/redis_check.sh) script to send a TCP request to Redis through the proxy. The same script is used by the *Docker Flow Proxy* automated tests.
+
+```bash
+curl -o redis_check.sh \
+    https://raw.githubusercontent.com/vfarcic/docker-flow-proxy/master/scripts/redis_check.sh
+
+chmod +x redis_check.sh
+
+ADDR=$(docker-machine ip node-1) PORT=6379 \
+    ./redis_check.sh
+```
+
+We sent a request to Redis (through the proxy) and it responded.
+
+The log entry from the request is as follows.
+
+```
+HAPRoxy: 10.255.0.3:55569 [10/Mar/2017:16:15:40.806] tcpFE_6379 redis_main-be6379/redis_main 1/0/1 12 -- 0/0/0/0/0 0/0
+```
+
+As you can see, the TCP request is recorded.
+
+## What Now?
+
 We are finished with the short introduction to *Docker Flow Proxy* debugging feature. We should destroy the demo cluster and free our resources for something else.

 ```bash
diff --git a/proxy/ha_proxy.go b/proxy/ha_proxy.go
index 3d53fc88..16591eb5 100644
--- a/proxy/ha_proxy.go
+++ b/proxy/ha_proxy.go
@@ -223,8 +223,8 @@ func (m HaProxy) getConfigData() ConfigData {
 		d.ExtraGlobal += `
    log 127.0.0.1:1514 local0`
 		d.ExtraFrontend += `
-    log global
-    log-format "%ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r"`
+    option httplog
+    log global`
 	} else {
 		d.ExtraDefaults += `
    option dontlognull
@@ -319,6 +319,11 @@ frontend tcpFE_%d
 		port,
 		port,
 	)
+	if strings.EqualFold(os.Getenv("DEBUG"), "true") {
+		tmpl += `
+    option tcplog
+    log global`
+	}
 	for _, s := range services {
 		var backend string
 		if len(s.ServiceDomain) > 0 {
diff --git a/proxy/ha_proxy_test.go b/proxy/ha_proxy_test.go
index 0d7d6bd1..0dc53d18 100644
--- a/proxy/ha_proxy_test.go
+++ b/proxy/ha_proxy_test.go
@@ -294,27 +294,9 @@ func (s HaProxyTestSuite) Test_CreateConfigFromTemplates_AddsLogging_WhenDebug()
 	defer func() { os.Setenv("DEBUG", debugOrig) }()
 	os.Setenv("DEBUG", "true")
 	var actualData string
-	tmpl := strings.Replace(s.TemplateContent, "tune.ssl.default-dh-param 2048", "tune.ssl.default-dh-param 2048\n    log 127.0.0.1:1514 local0", -1)
-	tmpl = strings.Replace(tmpl, "    option dontlognull\n    option dontlog-normal\n", "", -1)
-	tmpl = strings.Replace(
-		tmpl,
-		`frontend services
-    bind *:80
-    bind *:443
-    mode http
-`,
-		`frontend services
-    bind *:80
-    bind *:443
-    mode http
-
-    log global
-    log-format "%ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs {%[ssl_c_verify],%{+Q}[ssl_c_s_dn],%{+Q}[ssl_c_i_dn]} %{+Q}r"`,
-		-1,
-	)
 	expectedData := fmt.Sprintf(
 		"%s%s",
-		tmpl,
+		s.getTemplateWithLogs(),
 		s.ServicesContent,
 	)
 	writeFile = func(filename string, data []byte, perm os.FileMode) error {
@@ -640,6 +622,41 @@ frontend tcpFE_5678
 	s.Equal(expectedData, actualData)
 }

+func (s HaProxyTestSuite) Test_CreateConfigFromTemplates_AddsLoggingToTcpFrontends() {
+	debugOrig := os.Getenv("DEBUG")
+	defer func() { os.Setenv("DEBUG", debugOrig) }()
+	os.Setenv("DEBUG", "true")
+	var actualData string
+	expectedData := fmt.Sprintf(
+		`%s
+
+frontend tcpFE_1234
+    bind *:1234
+    mode tcp
+    option tcplog
+    log global
+    default_backend my-service-1-be1234%s`,
+		s.getTemplateWithLogs(),
+		s.ServicesContent,
+	)
+	writeFile = func(filename string, data []byte, perm os.FileMode) error {
+		actualData = string(data)
+		return nil
+	}
+	p := NewHaProxy(s.TemplatesPath, s.ConfigsPath)
+	data.Services["my-service-1"] = Service{
+		ReqMode:     "tcp",
+		ServiceName: "my-service-1",
+		ServiceDest: []ServiceDest{
+			{SrcPort: 1234, Port: "4321"},
+		},
+	}
+
+	p.CreateConfigFromTemplates()
+
+	s.Equal(expectedData, actualData)
+}
+
 func (s HaProxyTestSuite) Test_CreateConfigFromTemplates_AddsContentFrontEndSNI() {
 	var actualData string
 	tmpl := s.TemplateContent
@@ -1316,6 +1333,30 @@ func (s *HaProxyTestSuite) Test_AddService_RemovesService() {
 	s.Equal(data.Services[s3.ServiceName], s3)
 }

+// Util
+
+func (s *HaProxyTestSuite) getTemplateWithLogs() string {
+	tmpl := strings.Replace(s.TemplateContent, "tune.ssl.default-dh-param 2048", "tune.ssl.default-dh-param 2048\n    log 127.0.0.1:1514 local0", -1)
+	tmpl = strings.Replace(tmpl, "    option dontlognull\n    option dontlog-normal\n", "", -1)
+	tmpl = strings.Replace(
+		tmpl,
+		`frontend services
+    bind *:80
+    bind *:443
+    mode http
+`,
+		`frontend services
+    bind *:80
+    bind *:443
+    mode http
+
+    option httplog
+    log global`,
+		-1,
+	)
+	return tmpl
+}
+
 // Mocks

 type FileInfoMock struct {