
vmcache's B-tree implementation allocating too many pages with huge pages #1

Open

fluffychaos opened this issue Jul 6, 2023 · 0 comments

I am currently working on huge-page support for vmcache, and in the process I noticed that too many pages get allocated when using 2MB pages.

The changes I made (a condensed sketch follows the list):

  • changed pageSize from 4096 to 2097152 (2MB)
  • added MAP_HUGETLB | MAP_HUGE_2MB to the mmap flags in the non-exmap case (for now I am only trying to get huge pages working outside of exmap)
  • changed all indices and sizes in BTreeNodeHeader, BTreeNode and BTree from u16 to u32 to avoid overflows when addressing within the full 2MB range
  • changed virtAllocSize from 1ul << 16 to max(pageSize, 1ul << 16) to align the mmap allocation size to the page size
  • made the madvise call on virtMem conditional on pageSize being 4096, so it is skipped when using huge pages
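
Roughly, the setup now looks like this. This is a condensed, self-contained sketch, not my actual patch: the names pageSize, virtAllocSize and virtMem follow vmcache's source, the MADV_NOHUGEPAGE call is my reading of the existing madvise, and everything else is simplified down to the mmap setup:

```cpp
// Condensed sketch of the changes above, not the actual patch.
#include <sys/mman.h>
#include <algorithm>
#include <cstdint>
#include <cstdio>

#ifndef MAP_HUGE_2MB
#define MAP_HUGE_2MB (21 << 26) // 21 << MAP_HUGE_SHIFT, from <linux/mman.h>
#endif

typedef uint64_t u64;

static const u64 pageSize = 2097152; // was 4096; now one 2MB huge page

int main() {
   u64 virtSize = 6ull << 30; // VIRTGB=6
   // align the mmap allocation granularity to the page size
   u64 virtAllocSize = std::max<u64>(pageSize, 1ul << 16);
   u64 allocSize = (virtSize + virtAllocSize - 1) / virtAllocSize * virtAllocSize;

   // non-exmap case: back the virtual range with 2MB huge pages
   void* virtMem = mmap(nullptr, allocSize, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE |
                        MAP_HUGETLB | MAP_HUGE_2MB, -1, 0);
   if (virtMem == MAP_FAILED) { perror("mmap"); return 1; }

   // the existing madvise on virtMem (MADV_NOHUGEPAGE, if I read the code
   // right) only makes sense for 4KB pages, so it is now conditional
   if (pageSize == 4096)
      madvise(virtMem, allocSize, MADV_NOHUGEPAGE);

   munmap(virtMem, allocSize);
   return 0;
}
```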

How I ran vmcache (full invocation sketched after this list):

  • THREADS=1
  • DATASIZE=1
  • VIRTGB=6
  • PHYSGB=4
  • BLOCK pointing to a 16GB file
  • All other vmcache environment variables at their defaults, so the TPC-C workload got used
  • 32GB of RAM
  • A Ryzen 7 1800X CPU
  • 4096 2MB huge pages reserved via sysctl (vm.nr_hugepages)
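
Concretely, the run looked like this (the BLOCK path is a placeholder for my 16GB file, and ./vmcache is the binary name assuming the repo's default Makefile):

```sh
# reserve 4096 2MB huge pages, then run the TPC-C benchmark
sudo sysctl vm.nr_hugepages=4096
THREADS=1 DATASIZE=1 VIRTGB=6 PHYSGB=4 BLOCK=/path/to/16gb-file ./vmcache
```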

The symptoms:

  • I get the message VIRTGB too low (see the page-budget arithmetic sketched after this list)
  • BTreeNode::mergeNodes never gets called
  • The last node to split is full, triggering a call to BTree::trySplit
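
To make the first symptom concrete, here is the page-budget arithmetic as I understand it; the allocCount-vs-virtCount check is only my reading of where the message comes from, so treat that part as an assumption:

```cpp
// Back-of-the-envelope page budget: the same VIRTGB buys 512x fewer page
// slots with 2MB pages than with 4KB pages.
#include <cstdint>
#include <cstdio>

typedef uint64_t u64;

int main() {
   u64 virtSize = 6ull << 30;        // VIRTGB=6
   u64 slots4k = virtSize / 4096;    // 1,572,864 page slots at 4KB
   u64 slots2m = virtSize / 2097152; //     3,072 page slots at 2MB
   printf("4KB: %llu slots, 2MB: %llu slots\n",
          (unsigned long long)slots4k, (unsigned long long)slots2m);
   // Assumption: vmcache aborts with the message once its running
   // allocation counter reaches the slot count, i.e. roughly
   //   if (allocCount >= virtCount) die("VIRTGB too low");
   // A DATASIZE=1 TPC-C warehouse is far smaller than 6GB, so running out
   // of 3072 slots suggests pages get allocated but never merged/reused,
   // which matches mergeNodes never being called.
   return 0;
}
```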

Does anyone have an idea what the cause could be? Apart from the indices and sizes, the implementation looks generic over the page size to me.
