
Adds caching for rule sets and rule translations #36

Open
wants to merge 3 commits into master

Conversation

lukejacksonn
Copy link
Owner

Addresses #4 by adding a cache for both rule sets and individual rules applied under a given theme. I'm not 100% sure whether this is necessary or even beneficial with regard to performance in real-life scenarios.

It will increase the memory footprint somewhat, but will make repeat lookups for rules and rule sets O(1). Probably the most advantageous aspect is skipping calls to merge (which is deep) and the creation of variants, which also involves object creation/manipulation that might be quite costly.
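As a rough illustration (not the PR's actual code), the caching described above amounts to memoizing the translation step, e.g.:

```javascript
// Hypothetical sketch of the caching idea described above; the PR's
// real implementation may differ. The expensive path (deep merge,
// variant creation) runs once per rule; repeat lookups are a single
// O(1) Map.get.
const translateRule = (rule) => ({ rule, css: `/* styles for ${rule} */` });

const ruleCache = new Map();

const cachedTranslate = (rule) => {
  if (ruleCache.has(rule)) return ruleCache.get(rule); // O(1) cache hit
  const translation = translateRule(rule); // expensive: merge + variants
  ruleCache.set(rule, translation);
  return translation;
};
```

The trade-off is exactly the one noted above: every distinct rule string translated stays in memory for the lifetime of the cache.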

If anyone has any insight into whether this is a good or bad thing to do, or if there is a better approach we could take here, then I am all ears!

@bebraw
Copy link
Contributor

bebraw commented Nov 28, 2020

You can measure the impact with https://www.npmjs.com/package/tachometer . I am not sure of the exact setup in this case but I've found the tool somewhat useful for figuring out if a change is beneficial or not.
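For anyone wanting to try this, a tachometer run comparing two benchmark pages might look roughly like the following (file names are hypothetical; check the tachometer docs for the exact flags):

```shell
# Sketch of a tachometer invocation (hypothetical file names).
# Each HTML page exercises one variant of the library; tachometer
# runs them repeatedly and reports timings with confidence intervals.
npx tachometer \
  --browser chrome-headless \
  --sample-size 50 \
  benchmarks/without-cache.html benchmarks/with-cache.html
```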

@lukejacksonn
Copy link
Owner Author

lukejacksonn commented Nov 28, 2020

Thanks, yeah, I tried that but the tests I was running seemed inconclusive. This setup seems a bit more accurate: https://github.com/kenoxa/beamwind/tree/main/benchmarks. I would also like to see how this PR affects those results.

@bebraw
Copy link
Contributor

bebraw commented Nov 28, 2020

Ah, yeah. That's an interesting benchmark. Do you understand what's making the difference so big?

@lukejacksonn
Copy link
Owner Author

After speaking with @sastan about it, we think it might just be this caching feature; that is why I am curious to see how this PR compares in the test.

However, beamwind takes a totally different approach to translation lookup and variant application than oceanwind does. It is really quite interesting to see. We plan on having a meeting next week to discuss how we can combine efforts generally.

Although we have both taken quite different approaches to the problem, I think we have the same goals and there is a nice middleground to be had here somewhere.

If you would like to be involved in the conversation then I'd appreciate your input!

@bebraw
Copy link
Contributor

bebraw commented Nov 28, 2020

If you would like to be involved in the conversation then I'd appreciate your input!

Awesome news. Ping me then. 👍

@sastan
Copy link

sastan commented Nov 30, 2020

I have re-run the benchmark with this PR and I think it looks a little better, but not as much as we expected:

# Setup and first insert
  oceanwind x 15,868 ops/sec ±2.55% (79 runs sampled)
  beamwind x 48,841 ops/sec ±19.29% (77 runs sampled)

# Strings
  oceanwind x 13,186 ops/sec ±9.06% (78 runs sampled)
  beamwind x 80,186 ops/sec ±11.62% (69 runs sampled)

# Arrays
  oceanwind x 96,961 ops/sec ±8.58% (72 runs sampled)
  beamwind x 828,937 ops/sec ±4.95% (79 runs sampled)

# Objects
  oceanwind x 80,148 ops/sec ±5.16% (80 runs sampled)
  beamwind x 591,048 ops/sec ±4.06% (83 runs sampled)

# Grouping
  oceanwind x 14,730 ops/sec ±20.15% (47 runs sampled)
  beamwind x 175,654 ops/sec ±4.66% (84 runs sampled)

Maybe something in the benchmark is not right, or it favors beamwind.

I have pushed the update to the benchmark folder: https://github.com/kenoxa/beamwind/tree/main/benchmarks

@lukejacksonn
Copy link
Owner Author

Thanks @sastan, I also ran the same tests and found similar results. I'm going to start stubbing parts of the code out to try to find out where the bottleneck is. I have considered using the https://github.com/lukeed/clsx function for argument normalizing, as it essentially does everything we need.

@sastan
Copy link

sastan commented Dec 2, 2020

clsx is a very good choice for this.
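For context, the subset of clsx behavior relevant here, normalizing strings, arrays, and objects into one class string, can be sketched as follows. This is a hypothetical re-implementation for illustration, not clsx's actual source:

```javascript
// Minimal clsx-like normalizer (hypothetical sketch; the real clsx
// also handles numbers and more edge cases). Strings pass through,
// arrays recurse, and object keys are kept when their value is truthy.
const clsxLike = (...args) =>
  args
    .flatMap((arg) => {
      if (typeof arg === 'string') return [arg];
      if (Array.isArray(arg)) return [clsxLike(...arg)];
      if (arg && typeof arg === 'object')
        return Object.keys(arg).filter((key) => arg[key]);
      return []; // ignore null, undefined, booleans, etc.
    })
    .filter(Boolean)
    .join(' ');

clsxLike('flex', ['items-center', { 'justify-end': true }], { hidden: false });
// → 'flex items-center justify-end'
```

Since this is exactly the shape of input the Strings/Arrays/Objects benchmarks above exercise, delegating normalization to a small, battle-tested function like clsx seems like a reasonable direction.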
