-
Hey, 651 KB is a really big payload to send at once. uWebSockets (the base for socketify) will be faster when streaming 64 KB packages. Our WSGI solution is not optimized yet, and I am already comparing it with FastWSGI internally in some scenarios. I normally use the same parameters as TechEmpower, but with different payloads.

Socketify's WSGI and ASGI themselves need a lot of optimizations for streaming files and sending larger data (for example, when the user tries to send 651 KB at once like you did; in the future I will force sending in smaller packages). This is already in progress in #75, but it is unfinished because I am working on Bun now; I will get back to it soon. (I moved this to a discussion because solving it is already WIP in another issue.)

In this case, socketify is slower because of back pressure and a lack of optimizations for this scenario. Some comparisons:

I will keep this discussion open to remind me to test again with the same scenario after the optimizations are done. Today, if you return an iterator/array and send 651 KB in 64 KB chunks, there will be a lot of back pressure.

Sorry that I did not meet your expectations yet, but we will do better in the future 🦾
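As a rough illustration of the workaround described above, here is a minimal sketch of yielding a large body in 64 KiB pieces instead of returning one big bytes object, run under socketify's WSGI wrapper. The file name and port are placeholders; the listen/run call mirrors the commented-out example later in this thread.

# Minimal sketch (assumption: xml.zip sits next to this script).
# Yielding 64 KiB pieces lets the server interleave writes and apply
# back pressure between chunks instead of buffering ~651 KB at once.
CHUNK_SIZE = 64 * 1024

with open("xml.zip", "rb") as f:
    PAYLOAD = f.read()

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'application/zip'),
                              ('Content-Length', str(len(PAYLOAD)))])
    for offset in range(0, len(PAYLOAD), CHUNK_SIZE):
        yield PAYLOAD[offset:offset + CHUNK_SIZE]

if __name__ == "__main__":
    from socketify import WSGI
    WSGI(app).listen(8000, lambda config: print(f"Listening on http://localhost:{config.port}")).run(1)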
-
And I didn't notice right away that discussions are available here.
Well then, in six months (or a year) I will conduct such testing again. Benchmark with file-like data (
-
@remittor Weird thing: using version fastwsgi-0.0.7 I get a segfault. Check:

wrk -H "Host: tfb-server" -H "Accept: application/zip" -H "Connection: keep-alive" --latency -d 5 -c 100 --timeout 8 -t 8 http://localhost:8000

payload = None
with open("xml.zip", "rb") as file:
payload = file.read()
chunks = []
chunk_size = 64 * 1024
content_length = len(payload)
def app_chunked(environ, start_response):
start_response('200 OK', [('Content-Type', 'application/zip'), ('Transfer-Encoding', 'chunked')])
sended = 0
while content_length > sended:
end = sended + chunk_size
yield payload[sended:end]
sended = end
def app(environ, start_response):
start_response('200 OK', [('Content-Type', 'application/zip'), ('Content-Length', str(content_length))])
sended = 0
while content_length > sended:
end = sended + chunk_size
yield payload[sended:end]
sended = end
if __name__ == "__main__":
# from socketify import WSGI
# WSGI(app_chunked).listen(8000, lambda config: print(f"Listening on port http://localhost:{config.port} now\n")).run(1)
import fastwsgi
fastwsgi.run(wsgi_app=app_chunked, host='127.0.0.1', port=8000)
Result:
Socketify in my test branch still needs work, but:
Content-Length:
-
@remittor @jamesroberts Even in this simple example:
def app_hello(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', '13')])
    yield b'Hello, World!'

if __name__ == "__main__":
    import fastwsgi
    fastwsgi.run(wsgi_app=app_hello, host='127.0.0.1', port=8000)

Also, if using pipelining + the main branch, I got an error with:

wrk -H "Host: tfb-server" -H "Accept: text/plain,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7" -H "Connection: keep-alive" --latency -d 5 -c 4096 --timeout 8 -t 8 http://localhost:8000 -s pipeline.lua -- 16

Error:
I used the latest commit:
My configs:
I will try again later.
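Regarding the pipelined wrk run above: pipeline.lua makes wrk write several requests to a connection before reading any responses. Below is a rough Python illustration of the same idea, just to show what the server has to cope with; host, port, and depth are placeholders, and this is not the actual wrk script.

# Hypothetical sketch: send N pipelined HTTP/1.1 requests on one connection
# before reading anything back, similar in spirit to `-s pipeline.lua -- 16`.
import socket

def pipelined_requests(host="127.0.0.1", port=8000, depth=16):
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: tfb-server\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"
    ).encode()
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request * depth)   # all requests go out back-to-back
        return sock.recv(1024 * 1024)   # read whatever responses have arrived

if __name__ == "__main__":
    print(pipelined_requests()[:200])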
-
Machine: AMD EPYC 7543 @ 3.7GHz, Debian 11, Python 3.9
Payload for testing: https://github.com/MiloszKrajewski/SilesiaCorpus/blob/master/xml.zip (651 KiB)
Server test app: https://gist.github.com/remittor/c9411e62b5ea4776200bee288a331016
FastWSGI project: https://github.com/jamesroberts/fastwsgi
Socketify (multi-threaded)
> python3 server.py -g si -t 8
> python3 server.py -g si -t 8 -f xml.zip -b
Socketify (single-threaded)
> python3 server.py -g si -t 1
> python3 server.py -g si -t 1 -f xml.zip -b
FastWSGI (single-threaded)
> python3 server.py -g fw -t 1
> python3 server.py -g fw -t 1 -f xml.zip -b
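For orientation (this is a guess at the gist's interface, not its actual code): in the commands above, -g appears to select the server (si = socketify, fw = FastWSGI), -t the worker/thread count, and -f the payload file; the meaning of -b is not stated in this thread. A hypothetical sketch of such an argument parser:

# Hypothetical sketch of a harness CLI matching the commands above;
# the real server.py in the linked gist may look quite different.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="WSGI benchmark harness (sketch)")
    parser.add_argument("-g", "--gateway", choices=["si", "fw"], default="si",
                        help="server to run: si = socketify, fw = fastwsgi")
    parser.add_argument("-t", "--threads", type=int, default=1,
                        help="number of workers/threads")
    parser.add_argument("-f", "--file", default=None,
                        help="payload file to serve, e.g. xml.zip")
    parser.add_argument("-b", action="store_true",
                        help="flag used in the commands above; meaning not stated in this thread")
    return parser.parse_args()

if __name__ == "__main__":
    print(parse_args())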
Conclusion:
The socketify project is very fast when using multithreading. But in single-threaded mode, the FastWSGI server turned out to be faster.