
proposal for migrating to fastify for performance reasons #155

Open
wants to merge 7 commits into master

Conversation

@Uzlopak commented Nov 19, 2023

This is a PoC. It currently lacks blocking channel support.

I chose fastify because I am a maintainer of fastify and know the framework very well. I also considered implementing it with polka or a similar performance-focused REST API framework, since polka is faster, but polka does not have a very active community, and replacing a basically dead framework with another dead framework makes no sense.

I think the memory footprint of the server should be significantly lower.

Now only one timer triggers the pinging: it iterates over the set of active replies and pings each of them.
The Redis implementation should also be significantly improved now, as we only process a message if a client is connected to the specific instance. The current master is basically spamming all instances, which makes it less horizontally scalable.

I have to investigate whether iterating over the Set is actually a good approach. I will ask around.
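
For illustration, the single-timer idea looks roughly like this (just a sketch, assuming fastify-style reply objects where reply.raw is the underlying Node response; names like activeReplies and the interval are made up, not the actual PR code):

// One shared interval pings every connected SSE reply instead of one timer per connection.
const activeReplies = new Set()

setInterval(() => {
  for (const reply of activeReplies) {
    // an SSE comment line; EventSource clients ignore it, but it keeps the connection alive
    reply.raw.write(':ping\n\n')
  }
}, 30000)

// every new EventSource connection registers itself and is removed again on close
function registerReply (reply) {
  activeReplies.add(reply)
  reply.raw.on('close', () => activeReplies.delete(reply))
}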

So this should help the discussion...

@Uzlopak changed the title from "proposal for migrating to fastify for perforance reasons" to "proposal for migrating to fastify for performance reasons" on Nov 19, 2023
@Uzlopak (Author) commented Nov 19, 2023

Maybe create a subdomain ui.smee.io running nginx to serve the frontend assets. Then the server would purely take care of the webhook traffic, reducing the pressure of also having to serve the frontend.

@Uzlopak (Author) commented Nov 20, 2023

Added channel blocking.

@austinpray-mixpanel

Chiming in because we self-host smee.io to support development on our internal probot app. With 5 on-and-off users we have never noticed any performance issues, and the memory footprint seems low enough.

[screenshot: memory usage of our self-hosted smee instance]

Is this gonna push the memory usage even lower than this?

@Uzlopak (Author) commented Nov 20, 2023

With 5 users you will not notice anything.

The issues arise when you have something like 10 probot services running, connected via Redis pub/sub. In the current state, every probot webhook call gets propagated to all probot instances. So while the point was for probot to be horizontally scalable, it is not. Our Redis pub/sub has this problem because we don't filter on the Redis side which channels we want, so we have to process every incoming Redis message. Basically, the currently deployed solution is DoSing itself.

In this PR I actually check whether the channel is even currently handled by the probot instance (i.e. whether a client/browser is currently connected to that instance and actively listening to the channel via EventSource), and only if we are listening do we call JSON.parse. If not, we simply don't process the message.
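
As a rough sketch of that check (assuming an ioredis-style subscriber; the channels map and the reply bookkeeping are assumptions, not the actual PR code):

// Map channel name -> Set of SSE replies connected to this instance.
const channels = new Map()

subscriber.on('message', (channel, message) => {
  const replies = channels.get(channel)
  if (!replies || replies.size === 0) {
    // no client on this instance listens to this channel, so skip JSON.parse entirely
    return
  }
  const payload = JSON.parse(message)
  for (const reply of replies) {
    reply.raw.write(`data: ${JSON.stringify(payload)}\n\n`)
  }
})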

@Uzlopak (Author) commented Nov 22, 2023

@wolfy1339

I think the last commits will have a significant impact on performance.

I think the last missing feature is FORCE_SSL, or whatever it was called before. That should only take about 30 minutes of work. Then I think it is at feature parity with the old implementation.
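
For reference, a FORCE_SSL option could look roughly like this as a fastify onRequest hook, assuming the app runs behind a proxy that sets x-forwarded-proto (a sketch only; the env variable name and hook body are assumptions, not the actual implementation):

// Redirect plain HTTP to HTTPS when FORCE_SSL is enabled.
if (process.env.FORCE_SSL === 'true') {
  app.addHook('onRequest', (request, reply, done) => {
    if (request.headers['x-forwarded-proto'] === 'http') {
      // replying from a hook stops the chain, so done() is not called here
      reply.redirect(`https://${request.hostname}${request.url}`)
      return
    }
    done()
  })
}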

@wolfy1339 (Contributor) commented

While we're at it, let's drop dead Node versions, require >= 18, and switch to ESM.
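
For illustration, that boils down to something like this in package.json (a sketch of the relevant fields, not the actual diff):

{
  "type": "module",
  "engines": {
    "node": ">=18"
  }
}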

Comment on lines -25 to -28
if (process.env.SENTRY_DSN) {
Raven.config(process.env.SENTRY_DSN).install()
app.use(Raven.requestHandler())
}
Contributor

This should be added back and migrated to the new @sentry/node package
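
Something along these lines, using the plain @sentry/node setup (a sketch; the exact fastify wiring is left open):

// Re-add Sentry, guarded by the same env variable as before.
import * as Sentry from '@sentry/node'

if (process.env.SENTRY_DSN) {
  Sentry.init({ dsn: process.env.SENTRY_DSN })
}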
