Optimize for thousands of subscribers #195
Hi. Sending bulk web pushes heavily uses network and CPU. Be sure to use the latest version of the library (the 4.x versions). I think you can send 500 web pushes in 5-7 seconds by doing the simple things above. Of course there are many more things you can do (like storing and reusing local keys, etc.), but you have to dive deep into the code to make advanced optimizations. BTW, we send around 1 million web pushes/minute using this library (and lots of servers), but we did many optimizations to get there.
Could you please share those? At least a list of the things you did.
Hi, we use 45 servers (AWS t2.micro, 1GB RAM) to send web pushes. We send 500 web pushes in one flush() call, and it takes 1 second to send those 500 on one server (if there are no network issues).
As far as I know, reusing the same local keys and shared secrets is not recommended by the protocol, so we rotate local keys and shared secrets daily for each subscription. These are the main things we did to make the sending process faster. I hope it helps. Please let me know if you need any extra information.
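For reference, a minimal sketch of that queue-then-flush batching, assuming a recent minishlink/web-push (v6+, where queueNotification() and flush() exist; the 4.x/5.x versions queue via sendNotification() instead). The VAPID keys and the $subscribers rows are placeholders:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Minishlink\WebPush\WebPush;
use Minishlink\WebPush\Subscription;

$webPush = new WebPush([
    'VAPID' => [
        'subject'    => 'mailto:admin@example.com', // placeholder
        'publicKey'  => 'YOUR_VAPID_PUBLIC_KEY',    // placeholder
        'privateKey' => 'YOUR_VAPID_PRIVATE_KEY',   // placeholder
    ],
]);

// Queue a batch of ~500 notifications, then send them in one flush().
foreach ($subscribers as $row) { // $subscribers: subscription rows from your DB
    $webPush->queueNotification(
        Subscription::create([
            'endpoint'  => $row['endpoint'],
            'publicKey' => $row['public_key'],
            'authToken' => $row['auth_token'],
        ]),
        json_encode(['title' => 'Hello'])
    );
}

foreach ($webPush->flush() as $report) {
    // one MessageSentReport per queued notification
}
```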
Thanks for the advice. Now I store the local public key and shared secret in the DB.
Great 👍 but 47 seconds is still too long. I think there must be some other factor on your server. Can you check the things below:
Please let me know if this advice works for you.
And one last thing: try to disable auto padding if it's enabled.
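In recent versions of this library, auto padding can be turned off with a one-liner; a sketch, where $webPush is an already-configured WebPush instance:

```php
// Padding hides the real payload size (a privacy vs. performance trade-off);
// disabling it makes per-subscriber encryption cheaper.
$webPush->setAutomaticPadding(false);
```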
@ozgurhangisi, thanks for the answer. I will try it.
No, I don't have any CPU issue. I prefer to use many small servers instead of one powerful server. I use cloud servers, and a powerful server would cost $400-500 a month; for that price I can get 40 small servers. Also, if you use one server and there is a problem with it, you cannot send web pushes at all; with many servers you don't have that problem.
@Keffr3n answering your questions:
Yes, see @ozgurhangisi's comments above. But that would require changing the lib source code, I guess.
I don't think that would be much of help here 😟
Yes, @ozgurhangisi claims to be sending about 500 in a batch on a small (AWS t2.micro, 1GB RAM) server. Also, my two cents:
Are your questions answered? Can we close this issue?
Yes, it does support multiple consumers. We use a few producers and 45 consumers on beanstalkd. (Sometimes we make critical changes and the old and new systems have to run at the same time; we use up to 90 consumers at that point. I don't know beanstalkd's limit, but it supports at least 90 consumers, and that's enough for us.) As far as I know it is FIFO: it doesn't give you a random item, it gives you the first waiting item in the queue.
We use crontab, and each cron job runs every minute and checks the queue. All of our servers run PHP producers and PHP consumers. We developed a special cron class, and we can disable/enable or slow down each server. We also write the last cron status, last start date, last end date, memory usage, and CPU usage to the DB and watch them. If something is wrong, the system sends us an automated email.
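For illustration, a cron-started consumer along those lines. This assumes the pda/pheanstalk v4 client and an illustrative 'webpush' tube name; the thread doesn't say which beanstalkd client they actually use:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->watch('webpush');

// Started once a minute by cron; drain up to 500 jobs, then exit.
for ($i = 0; $i < 500; $i++) {
    $job = $pheanstalk->reserveWithTimeout(5); // null when the tube is empty
    if ($job === null) {
        break;
    }
    $subscription = json_decode($job->getData(), true);
    // ... queue and send the web push for $subscription here ...
    $pheanstalk->delete($job); // acknowledge so the job is not redelivered
}
```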
Yes, we get all the critical data from the cron class.
I haven't used v5 yet. We get expired endpoints from the results, so we have to read all of the results to see which endpoints are expired; in both versions you have to consume all of the results, so I don't think it's a performance improvement. (Maybe it gives more efficient memory usage, but we send at most 500 pushes in one batch.) We use a completely different version of this library, so it's not easy for us to move to the new version. BTW, this is just my opinion; I don't have any performance results.
I didn't try anything except this library. Maybe @t1gor can help you understand the problem.
In my opinion, there are 2 answers here:
You probably want to iterate over the results to clear the expired subs, as mentioned above.
That is something I need to confirm. The way we use the lib in our project, v4 vs v5 didn't change much except for the results format, as we are checking the results. If that is true, I guess a simple solution like the one below should hot-fix it while we're working on a proper solution:

```php
$flushResults = $webpush->flush();

// do not save or process, just loop
iterator_to_array($flushResults);
```
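And a sketch of the "proper solution" mentioned above, assuming the v5+ results API (MessageSentReport with isSuccess(), isSubscriptionExpired(), and getEndpoint()); deleteSubscription() is a hypothetical application-side helper, not part of the library:

```php
// Consume each MessageSentReport and prune expired subscriptions as you go.
foreach ($webpush->flush() as $report) {
    if ($report->isSuccess()) {
        continue;
    }
    if ($report->isSubscriptionExpired()) {
        // deleteSubscription() is hypothetical: e.g. a DELETE against your DB
        deleteSubscription($report->getEndpoint());
    }
}
```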
Do you have a benchmark? I don't think this should be relevant, as only the place where requests are actually sent changed, not the payload size or anything like that.
It's not the data that you get on subscription; it's the data that the library creates when you send web pushes. If you have thousands of subscribers and want to send web pushes fast, I recommend sending them without a payload and fetching the payload in the service worker. That way you can send 1,000 web pushes in under a second. We send web pushes with payloads, but we spent about 6 months on it: we developed our own encryption library, did many other things to reach this speed, and we use many servers, queue systems, etc. Sending web pushes with a payload to many users consumes a lot of CPU, RAM, and network resources, because the library has to encrypt the payload for each subscriber and send it one by one; there is no bulk sending option for web push with payload. If you need to send web pushes fast, I recommend sending them without a payload or using a service provider.
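A minimal sketch of the payload-less variant, again assuming the v6+ queueNotification()/flush() API: passing null as the payload makes the library skip per-subscriber encryption entirely, and the service worker fetches the actual message content from your backend inside its 'push' event handler.

```php
// $subscriptions: an array of Minishlink\WebPush\Subscription objects.
foreach ($subscriptions as $subscription) {
    $webPush->queueNotification($subscription, null); // nothing to encrypt
}

foreach ($webPush->flush() as $report) {
    // only delivery status to check here; no payload was encrypted or sent
}
```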
* Implemented VAPID header caching, as @ozgurhangisi suggested in web-push-libs#195
* Docs updated
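The caching mentioned in these commits is exposed as setReuseVAPIDHeaders() in recent library versions; a minimal sketch:

```php
// Reuse the signed VAPID JWT for endpoints on the same push service
// instead of recomputing it for every single notification.
$webPush = new WebPush($auth);
$webPush->setReuseVAPIDHeaders(true);
```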
Hello, after Minishlink's update things are way faster. However, to make them even better I tried to start a few processes asynchronously. If I send all the notifications (about 30,000 now) from one process, it takes about 10 minutes with no problems. If I try to send them in parallel processes, some of them get this error:
PHP Fatal error: Uncaught Error: Class 'GuzzleHttp\Psr7\Response' not found in /home/www/client2/web/tools/composer-php72/vendor/guzzlehttp/guzzle/src/Handler/EasyHandle.php:76
Do you want to share with us? 😄
Hello @ozgurhangisi, you talked about "creating local keys and shared secrets takes ~90% of the total process". Thanks!
Hello |
Hi, I am so new to push. Thanks |
Just the very first command of this project brings in 10 other libraries. If you want to optimize anything, we should optimize by reducing the number of libraries the autoloader has to go through every time your website gets a page hit.
@agron2017 if you're trying to look smarter, it did not work. This lib does not run on page hits, at least it should not. |
And your response with a personal insult is somehow smarter, you think, Igor? Whatever. I don't have to listen to your insults. I'll fork it and clone it, and you'll never see my dumb ass in here again.
Hello,
If I flush() more than 200 notifications at a time, the push fails. Also, each batch of 200 messages needs over 10 seconds to get sent to the endpoints, so for 20k subscribers I would need ~20 min.
Can this be optimized? What if I filter Chrome endpoints and Firefox endpoints so that each batch targets the same push service?
Did anyone manage to send more than 200 notifications at a time?
I am using PHP 7.2.