
<feature> Ability to store context (user data) for later retrieval in callbacks #48

Closed
domsson opened this issue Apr 22, 2022 · 14 comments

@domsson

domsson commented Apr 22, 2022

Working with wsServer today, I ran into one issue: the code inside the callback functions (onopen(), onclose(), etc.) doesn't have any context other than the client connection being passed in. This makes it a bit hard to tie the code in with whatever goes on in the surrounding program.

For example, let's say I wanted to maintain a list of connected clients. I could whip up some struct/array to do that. Now, in onopen(), I would want to add the newly connected client to my collection. But how do I access it from this function? The only solution seems to be a global variable, which isn't exactly great.

If we could hand in a pointer to user data upon calling ws_socket(), then the callbacks could include this as the last parameter. Like so:

int ws_socket(struct ws_events *evs, uint16_t port, int thread_loop, void *context);
void onopen(ws_cli_conn_t *client, void *context);
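For illustration, here is a rough sketch of how the proposed signature could be used on the user side (the client_list struct and its size are made up for this example):

struct client_list {
    ws_cli_conn_t *clients[16];
    size_t count;
};

void onopen(ws_cli_conn_t *client, void *context)
{
    /* No global needed: the list arrives through the context pointer. */
    struct client_list *list = context;
    if (list->count < 16)
        list->clients[list->count++] = client;
}

/* ... in main(), with evs filled in as usual: */
struct client_list list = {0};
ws_socket(&evs, 8080, 0, &list);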

Alternatively, two functions could be introduced: one to set/save a context, the other to get/retrieve it. This way, the current interface would remain the same, keeping backwards compatibility.

Or is there some other way you can already do this without a global variable in the user code?

@Theldus
Owner

Theldus commented Apr 22, 2022

Hi,
In the current implementation I don't think it would make much difference: if you notice, there is no 'struct ws_server' or similar, but only the event structure, which stores only the events.

Any other additional data (like the internal client list) is already global. If I were to save a user pointer, I would end up having to save it globally as well.

However, I do intend to move all global variables into an appropriate struct, so that ws_socket() can safely be invoked multiple times. Once that happens, it makes sense to save a user-defined pointer as well.

What do you think?

@gloveboxes
Contributor

gloveboxes commented Apr 24, 2022

I'd like to see some sort of context object attached to a client connection as well. As a workaround, I created a client ledger (just an array of clients) and use that to maintain some state. https://github.com/gloveboxes/Altair8800.Emulator.UN-X/blob/main/AltairHL_emulator/web_socket_server.c

I want to keep just one client active, so a new client connection will kick out an existing one. Partly because my app scales from desktop computer to microcontroller (scarce resources). Context would make this a bit easier.

@Theldus and thank you for your excellent library, it is perfect for my Altair project. Cheers Dave

@Theldus
Owner

Theldus commented Apr 25, 2022

Hi @gloveboxes,
Keeping a user-defined pointer per client also sounds pretty cool. It could, for example, hold pointers to specific data structures for certain types of clients, etc. Perhaps this is more interesting than keeping a single pointer per 'ws_socket' invocation. What do you guys think? @domsson, @gloveboxes.

It would also be possible to have both: a 'global' for each ws_socket invocation and an individual one for each client.

I want to keep just one client active, so a new client connection will kick out an existing one. Partly because my app scales from desktop computer to microcontroller (scarce resources). Context would make this a bit easier.

Reading your src it looks like you're doing unnecessary work (correct me if I'm wrong):
Do you keep your own client list (with the appropriate locks, etc.) just to be able to keep a single client?

You can limit the number of clients by the macro 'MAX_CLIENTS', defined in the file include/ws.h. If set to 1, any subsequent connections will be automatically rejected (and no events will be fired for them).

Also, if the first parameter of ws_sendframe[_txt,_bin] is NULL, it will broadcast the message to all connected clients. I mean, you don't have to keep the 'ws_cli_conn_t' of your only connected client, just broadcast to 'everyone'.

So I believe your code can be greatly simplified if:

  • Change MAX_CLIENTS to 1 in include/ws.h.
  • Change your 'ws_sendframe(current_ws, ...)' to 'ws_sendframe(NULL, ...)'.
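For example, a minimal sketch of the second point (assuming the two-argument ws_sendframe_txt() variant; 'current_ws' is the client pointer from your ledger):

/* Before: track the single connected client and send to it. */
ws_sendframe_txt(current_ws, "hello");

/* After: a NULL client broadcasts to all (here: the only) connected
 * clients, so current_ws doesn't need to be tracked at all. */
ws_sendframe_txt(NULL, "hello");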

My latest change in wsServer was precisely to address situations like this: avoiding having to know a client beforehand in order to send a broadcast. So I can't blame you =).


Anyway, your project Altair8800.Emulator.UN-X looks pretty interesting. I admit I've seen it before, but I don't understand where wsServer is used in it... if you don't mind, could you tell me? I got curious about your project.

@gloveboxes
Contributor

Hey there @Theldus, I agree my src may be doing too much. These were the problems I was trying to solve.

  1. I submodule your code into my app and I don't want to touch your code base, for two reasons: maintenance and licensing. So I didn't want to change MAX_CLIENTS to 1. I tried undefining MAX_CLIENTS and then redefining it to 1 in my code, but it didn't work (maybe just me :)). I think your code would work better as a submodule if it first checked whether MAX_CLIENTS was already defined; this would allow the main CMake file to set MAX_CLIENTS:
#ifndef MAX_CLIENTS 
#define MAX_CLIENTS 8
#endif
  2. The open/close life cycle works well when the client calls close or the web browser is closed. But where I found it failed was when the client machine went to sleep (for example, sleeping an iPad or PC), i.e. no close was ever sent by the client app. In this situation the close callback was never called on the server side, so I did a workaround that I'm still not 100% sure about...

From the comment above:

Keeping a user-defined pointer per client also sounds pretty cool. It could, for example, hold pointers to specific data structures for certain types of clients, etc. Perhaps this is more interesting than keeping a single pointer per 'ws_socket' invocation. What do you guys think?

Yeah, this is what I had in mind. But I appreciate this may not be the use case @domsson was thinking about...

Cheers Dave

@Theldus
Owner

Theldus commented Apr 26, 2022

@gloveboxes

I submodule your code into my app and I don't want to touch your code base, for two reasons: maintenance and licensing. So I didn't want to change MAX_CLIENTS to 1. I tried undefining MAX_CLIENTS and then redefining it to 1 in my code, but it didn't work (maybe just me :)). I think your code would work better as a submodule if it first checked whether MAX_CLIENTS was already defined; this would allow the main CMake file to set MAX_CLIENTS.

Understandable. Commit 7e44960 adds this check.
Now you can compile with:

# via Makefile
CFLAGS="-DMAX_CLIENTS=1" make

# or via CMake
mkdir build && cd build/
CFLAGS="-DMAX_CLIENTS=1" cmake ..
make

The open/close life cycle works well when the client calls close or the web browser is closed. But where I found it failed was when the client machine went to sleep (for example, sleeping an iPad or PC), i.e. no close was ever sent by the client app. In this situation the close callback was never called on the server side, so I did a workaround that I'm still not 100% sure about...

You brought up a very interesting point here (and one that deserves a separate issue that I'll create later). (edit: issue #50)

I've never tested scenarios like this, where a device can sleep or the connection is abruptly interrupted (like a weak WiFi signal or 3G/4G from a cell phone): in a quick test here (turning off the router's WiFi), the connection isn't actually closed and the on_close events aren't fired either.

I've also noticed that send() invocations can return as if the packet has been sent, while the OS keeps trying to send them to no avail. So the return value of ws_sendframe*() cannot be trusted in this scenario.

I can think of two solutions for this:

  • a) TCP keepalive
  • b) Sending PING packets with timeout

The first adjusts how the OS decides that a connection is inactive and what it should do about it. In theory this requires few lines of code, and a dropped connection is detected a short (configurable) time later. However, I'm afraid this is quite OS-dependent.
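For reference, a minimal Linux-flavored sketch of option (a); the TCP_KEEP* options below are the Linux names, which is precisely the portability concern:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive probing on an accepted client socket. */
static int enable_keepalive(int fd)
{
    int on    = 1;
    int idle  = 10; /* seconds idle before the first probe */
    int intvl = 5;  /* seconds between probes */
    int cnt   = 3;  /* unanswered probes before the OS drops the connection */

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
        return -1;
    return 0;
}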

The second option is to periodically send PING frames and wait for the PONG to arrive within a specific time interval. While it sounds simple, this adds some complexity to the wsServer code: I need a timer that sends PINGs to all active clients, waits for the PONGs within some interval, and aborts the connections of those that don't answer.
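In rough pseudocode, option (b) would look something like this (every name here is hypothetical; nothing like it exists in wsServer yet):

/* Hypothetical watchdog thread; clients[], last_pong, send_ping_frame()
 * and abort_connection() do not exist in wsServer, they only sketch
 * the idea. */
void *ping_watchdog(void *arg)
{
    while (1) {
        sleep(PING_INTERVAL);
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (!clients[i].connected)
                continue;
            if (time(NULL) - clients[i].last_pong > PONG_TIMEOUT)
                abort_connection(&clients[i]); /* no PONG in time */
            else
                send_ping_frame(&clients[i]);  /* expect a PONG back */
        }
    }
    return NULL;
}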

I always thought that the periodic sending of PINGs was unnecessary and that's why I never bothered to implement it... until now.


Good references about TCP keepalive:

@domsson
Author

domsson commented Apr 26, 2022

Keeping a user-defined pointer per client also sounds pretty cool. It could, for example, hold pointers to specific data structures for certain types of clients, etc. Perhaps this is more interesting than keeping a single pointer per 'ws_socket' invocation. What do you guys think? @domsson, @gloveboxes.

In my limited experience, a general-purpose user-defined pointer is pretty common. A lot of programs keep their overall state in some struct that references all other relevant bits and bobs. Think of a game engine: it would probably have a game_state struct/object that holds references to the renderer, physics engine, audio engine, etc. As long as you hand this one pointer around, you always have a way to get to whatever you need without resorting to globally scoped variables.

Then again, what @gloveboxes suggested might be even more convenient in this case. The user could still stuff their global state into each and every client-based user data if they so desired. But if required, they could use client-specific structures. Those could optionally hold a pointer to the global state if needed. So it should give all the benefits of my idea, plus additional (and optional) flexibility.
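Concretely, something like this (made-up names):

struct app_state {
    /* renderer, physics, audio, ... the global state from above */
};

struct client_ctx {
    struct app_state *app; /* optional back-pointer to the global state */
    /* plus whatever is specific to this client */
};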

Anyway, either should be fine, I think. Maybe just go with whatever works better with your code and/or makes for a cleaner, leaner implementation?

@Theldus
Owner

Theldus commented Apr 26, 2022

@domsson Nice, so I think I'll follow @gloveboxes's approach and add two functions to set and retrieve the user-defined pointer, something like:

void ws_set_client_context(ws_cli_conn_t *cli, void *ptr);
void *ws_get_client_context(ws_cli_conn_t *cli);

I prefer providing two functions (instead of letting the user directly access ws_cli_conn_t) because the structure also contains wsServer internal data, and I'd rather keep this typedef opaque to the user.

Oh, before I forget: the lifetime of each 'ws_cli_conn_t' (its content, not the pointer) only extends until the client disconnects. Once the client disconnects, the pointer's content is cleared; and if a new client connects to the same slot as a recently disconnected one, the pointer will point to that new client.

So this requires some care and the user cannot blindly trust a client's ws_cli_conn_t forever.
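So the intended usage would be roughly this (callback signatures assumed to match the current wsServer events; per_client_data is a made-up example):

#include <stdlib.h>

struct per_client_data {
    int msg_count;
};

void onopen(ws_cli_conn_t *client)
{
    ws_set_client_context(client, calloc(1, sizeof(struct per_client_data)));
}

void onmessage(ws_cli_conn_t *client, const unsigned char *msg,
    uint64_t size, int type)
{
    struct per_client_data *data = ws_get_client_context(client);
    data->msg_count++;
}

void onclose(ws_cli_conn_t *client)
{
    /* Last safe moment to grab the pointer: after this returns, the
     * slot (and its context) may be reused by a new client. */
    free(ws_get_client_context(client));
}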

@domsson
Author

domsson commented Apr 27, 2022

I prefer providing two functions (instead of letting the user directly access ws_cli_conn_t) because the structure also contains wsServer internal data, and I'd rather keep this typedef opaque to the user.

Sounds great to me!

Oh, before I forget: the lifetime of each 'ws_cli_conn_t' (its content, not the pointer) only extends until the client disconnects. Once the client disconnects, the pointer's content is cleared; and if a new client connects to the same slot as a recently disconnected one, the pointer will point to that new client.

I think this should be fine as long as we can still retrieve the context pointer while in the onclose() callback. Once the code returns from that function, the user should expect the ws_cli_conn_t to be gone. Would such an implementation be possible?

@Theldus
Owner

Theldus commented Apr 27, 2022

I think this should be fine as long as we can still retrieve the context pointer while in the onclose() callback.

Yes, that's what I want too.

Once the code returns from that function, the user should expect the ws_cli_conn_t to be gone. Would such an implementation be possible?

It is possible, but not at the moment. To keep things simple, wsServer maintains a static list of all connected clients, and the pointer received in the on_[open,close,message] events belongs to that list. So the pointer doesn't go away, but it can hold data from another client once the 'slot' becomes available again.

I could maintain a dynamic list, but the current approach (one thread per client) doesn't scale well to hundreds or thousands of clients anyway, and consumes more resources than a single-threaded or mixed approach. So it made sense to keep a fixed list as well.

@gloveboxes
Contributor

gloveboxes commented Apr 27, 2022

Hey @Theldus, in answer to the question you had above re the Altair and the use of your socket server:

This link describes the app architecture and how WebSockets are used:

https://github.com/gloveboxes/Altair8800.Emulator.UN-X/wiki#altair-8800-emulator-architecture

I used to route messages over MQTT, but switched to your WebSocket library as it simplified comms by eliminating the need for an MQTT broker.

@gloveboxes
Contributor

@Theldus Thanks for implementing the MAX_CLIENTS compiler flag!

@dkorolev
Contributor

dkorolev commented Aug 8, 2024

Quick attempt to put this idea into #91. It adds:

void ws_set_client_context(ws_cli_conn_t *cli, void *ptr);
void *ws_get_client_context(ws_cli_conn_t *cli);

@Theldus
Owner

Theldus commented Aug 8, 2024

Hi @domsson,
Finally, this feature has been added in #91, as mentioned before.

I apologize for the colossal delay in addressing this; I usually don't take so long on feature requests that are quick to implement. Unfortunately I maintain wsServer in my spare time, and I sometimes get distracted by other projects, but I always try to keep maintaining wsServer.

Thanks again for the feature request, and also thanks to @dkorolev for reminding me again about this feature and implementing it as suggested.

I believe I can finally close this issue.

@Theldus Theldus closed this as completed Aug 8, 2024
@dkorolev
Contributor

dkorolev commented Aug 9, 2024

Thank YOU for the wsServer, @Theldus !
