
EventTarget MaxListenersExceededWarning #91

Closed
zquestz opened this issue Sep 18, 2024 · 18 comments
Labels
bug Something isn't working

Comments

@zquestz

zquestz commented Sep 18, 2024

Seeing the following in my logs.

date,stream,content
2024/09/18 09:14:59,stdout,(node:1) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit

Can we set the default a little higher? It defaults to 10 but my setup is a little larger.
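For reference, the limit the warning mentions can be raised for a specific signal with Node's events.setMaxListeners (available since Node 15.4). A minimal sketch:

```ts
// Sketch only: raising the abort-listener limit for one AbortSignal,
// as the warning message itself suggests.
import { setMaxListeners } from "node:events";

const controller = new AbortController();
// Allow up to 30 abort listeners on this signal instead of the default 10.
setMaxListeners(30, controller.signal);
```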

@jliuhtonen
Owner

I noticed the same. This seems to be caused by a problem in Node's native fetch API implementation, undici: nodejs/undici#1711

If you are using the Docker container, then you have an old version. I'll update Node to the latest LTS version; let's see if the problem goes away with that. Otherwise, this could be a subtle bug in the cancellation/retry code.
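For context, the linked undici issue is about fetch registering an abort listener on the passed signal for each request, so a long-lived shared signal can trip the default limit of 10 even without a true leak. A minimal sketch that would reproduce the warning (any reachable URL will do):

```ts
// Sketch: sharing one long-lived AbortSignal across many in-flight fetch
// calls makes undici register one abort listener per request, which exceeds
// the default limit of 10 listeners.
const controller = new AbortController();

await Promise.all(
  Array.from({ length: 12 }, () =>
    fetch("https://example.com/", { signal: controller.signal })
      .then((res) => res.text())
      .catch(() => undefined) // ignore network errors in this sketch
  )
);
```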

@jliuhtonen
Owner

Version 1.2.1 released!

@zquestz
Author

zquestz commented Sep 19, 2024

Unfortunately I am still seeing the issue in my logs.

@jliuhtonen
Owner

Thanks for reporting this. Back to the drawing board then!

This might be related to, for example, long polling of BluOS players. However, before increasing the maximum number of listeners, I'd like to make sure that the current request cancellation implementation or the fetch API wrapper library does not leak listeners, because the error message would be exactly the same.
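One way to rule out a leak in the cancellation/retry path is to give each attempt its own AbortController and remove the listener that forwards the outer abort once the attempt settles. A hedged sketch of that pattern, not the project's actual code:

```ts
// Hypothetical retry wrapper that avoids listener build-up on a long-lived
// outer signal: each attempt uses its own short-lived AbortController.
async function fetchWithRetry(
  url: string,
  outerSignal: AbortSignal,
  attempts = 3
): Promise<Response> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    const attempt = new AbortController();
    const forwardAbort = () => attempt.abort();
    if (outerSignal.aborted) attempt.abort();
    outerSignal.addEventListener("abort", forwardAbort, { once: true });
    try {
      return await fetch(url, { signal: attempt.signal });
    } catch (err) {
      lastError = err;
      if (outerSignal.aborted) throw err;
    } finally {
      // Without this, every retry would leave one more listener behind.
      outerSignal.removeEventListener("abort", forwardAbort);
    }
  }
  throw lastError;
}
```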

@zquestz
Author

zquestz commented Sep 21, 2024

Absolutely, take your time. Makes sense to find the root cause and not mask real errors. =)

@jliuhtonen added the bug label on Sep 22, 2024
@jliuhtonen
Owner

I released a new version, v1.3.0, with a bugfix for cancellation using AbortSignal. Can you try that, @zquestz? You can also try running with the DEBUG_MAX_EVENT_LISTENERS=<number> env variable, but I don't recommend changing the value unless absolutely necessary, as the default should work fine.
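How DEBUG_MAX_EVENT_LISTENERS is wired up is project-specific; a hypothetical sketch of mapping such an env variable onto Node's events.setMaxListeners could look like this:

```ts
// Hypothetical wiring for a DEBUG_MAX_EVENT_LISTENERS env variable; the
// project's real implementation may differ. Note that EventTarget-based
// objects like AbortSignal are governed by events.setMaxListeners, not by
// EventEmitter.defaultMaxListeners.
import { setMaxListeners } from "node:events";

const raw = process.env.DEBUG_MAX_EVENT_LISTENERS;
const limit = raw !== undefined ? Number.parseInt(raw, 10) : NaN;
if (Number.isFinite(limit) && limit > 0) {
  // With no targets given, this changes the default for all newly created
  // EventTarget and EventEmitter instances.
  setMaxListeners(limit);
}
```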

@zquestz
Author

zquestz commented Sep 26, 2024

I have updated to the latest release and will get back to you. 😀

@zquestz
Author

zquestz commented Sep 26, 2024

Unfortunately I am seeing the log warnings with the latest release. =\

Should I try updating that ENV var?

@jliuhtonen
Owner

> Should I try updating that ENV var?

Yes, that'd be great!

@jliuhtonen
Owner

@zquestz I noticed that there's something wrong with scrobbling in the latest release. I'll publish a patch release once I've had a chance to fix it.

@jliuhtonen
Owner

v1.3.1 fixes accidentally broken long polling. Sorry about that. Please provide any additional information that you can find, @zquestz; this is a difficult issue to reproduce with my setup.

@zquestz
Author

zquestz commented Sep 30, 2024

Still seeing the exact same error:

(node:1) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit

Odd thing is that I have DEBUG_MAX_EVENT_LISTENERS set to 30!

@jliuhtonen
Owner

> Still seeing the exact same error:
>
> Odd thing is that I have DEBUG_MAX_EVENT_LISTENERS set to 30!

This is really weird. In practice it would mean that event listeners are leaking, which would eventually cause the out-of-memory killer to stop the container, so you would see crashes. I could try running a profiler to see how the memory footprint develops over time.

If you are not seeing a memory leak, it means that Node's undici still produces this warning for some reason, but it is harmless. I guess you have already double-checked that you have the latest container?

One could also switch away from using the fetch API, but as it is becoming the standard in Node, it does not seem worth it.
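A lightweight alternative to a full profiler is to log process.memoryUsage() periodically and watch whether the heap keeps growing. A small sketch:

```ts
// Rough way to watch the memory footprint over time without a profiler.
// A steadily growing heap would point to a real leak; a flat curve suggests
// the MaxListenersExceededWarning is harmless noise.
const mb = (bytes: number) => (bytes / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`rss=${mb(rss)}MB heapUsed=${mb(heapUsed)}MB`);
}, 60_000).unref(); // unref so the timer does not keep the process alive
```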

@zquestz
Author

zquestz commented Oct 1, 2024

You were totally right: updating the image on my Synology and restarting the container was not enough to get the new version. It is now updated and testing continues. =)

@zquestz
Author

zquestz commented Oct 2, 2024

All appears to be working fantastic now. =)

@jliuhtonen
Owner

> You were totally right: updating the image on my Synology and restarting the container was not enough to get the new version. It is now updated and testing continues. =)

Great to hear that it works now! I have run into something similar when updating images locally. Cache invalidation, the hardest of problems. 😂 The new releases are tagged with both the version number and "latest".

Let's keep this issue open for a little while in case something related to this still comes up.

@jliuhtonen
Owner

Everything seems to be working, so I'm closing this issue for now. Thank you for helping to make the project better! 🙏

@zquestz
Author

zquestz commented Oct 5, 2024

My pleasure. Thanks for all the help!
