Do I need or can I use a squid cache proxy for the cvmfs-csi? #49
@johnstile alien (or external) cache is a shared scratch space for CVMFS clients to store pulled data. It is meant as a drop-in replacement for the regular local cache, which is always used by a single client only; an alien cache, in contrast, can be used by multiple clients at the same time. This requires the volume for the cache to be backed by a shared filesystem, because of concurrent writes from possibly different nodes. See https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#alien-cache for more information. AFAIK, you can use squid and alien caches in conjunction with this too.

---
When I try to enable the alien cache, my changes to values.yaml have no effect. I am expecting to find 'mountPath: /cvmfs-aliencache':

---
Hmm, it works for me:

And the alien cache volume is being populated with data after running the examples:

---
I think you have a typo in the key name; it should be "enabled", not "enable".

---
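To illustrate the fix above, a values.yaml sketch. The `cache.alien` key layout is an assumption reconstructed from this thread (note "enabled", not "enable"); check the chart's own values.yaml for the authoritative structure:

```yaml
# Hypothetical layout -- verify against the chart's values.yaml.
cache:
  alien:
    # The key must be spelled "enabled"; an unknown key such as "enable"
    # is silently ignored by Helm templates, so the alien-cache volume
    # never appears in the rendered manifests.
    enabled: true
    # Reference to an existing PVC backing the shared alien cache.
    pvc:
      name: cvmfs-alien-cache
```

---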
Thank you! That helped me on to the next issue. With the alien cache enabled, I had to create a PVC for the nodeplugin to start, but mounts fail when it is enabled. This is the PVC I added: cvmfs-csi/templates/nodeplugin-alien-cache.pvc.yaml

The alien cache volume is being populated.

In the nodeplugin pod I see 3 directories:

Contents of the other two:

I think this should trigger the mount: `ls /cvmfs/alice.cern.ch/`

---
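For reference, a minimal sketch of the kind of PVC discussed above. The storage class is a placeholder; since the alien cache is written concurrently from multiple nodes, this assumes an RWX-capable backend (e.g. NFS or CephFS):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cvmfs-alien-cache
  namespace: cvmfs           # assumption: the namespace the nodeplugin runs in
spec:
  # ReadWriteMany: the alien cache is shared by CVMFS clients on every node.
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-rwx  # placeholder: any RWX-capable storage class
  resources:
    requests:
      storage: 100Gi         # size the cache for your repositories
```

---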
Yes, the cluster admin is expected to provide a reference to an existing PVC (or anything that's accepted in

Is this when running

---
I enabled the alien cache, tailed the CVMFS pod logs, and tried to run `ls`:

Before running ls:

Ran ls:

Log:

---
The configs:

I changed CVMFS_QUOTA_LIMIT to -1 because "The CVMFS_ALIEN_CACHE requires CVMFS_QUOTA_LIMIT=-1 and CVMFS_SHARED_CACHE=no", as seen here: https://cvmfs.readthedocs.io/en/stable/cpt-configure.html

---
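Putting the requirements quoted above together, a minimal client-configuration sketch. The /cvmfs-aliencache path comes from earlier in this thread; treat the exact config file location (e.g. default.local) as an assumption for your deployment:

```sh
# Alien cache: shared, externally managed cache directory.
CVMFS_ALIEN_CACHE=/cvmfs-aliencache
# Required with an alien cache: disable the client's quota management...
CVMFS_QUOTA_LIMIT=-1
# ...and disable the shared local cache.
CVMFS_SHARED_CACHE=no
```

---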
The following is a log for

---
With a test pod, I verified that data in the cvmfs-alien-cache PVC persists: I touched a file, took down the pod, created a new pod, and the file was still there.

---
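The persistence check above can be reproduced with a throwaway pod along these lines (the pod name and image are arbitrary choices, not from the chart):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: alien-cache-test       # arbitrary name for the throwaway pod
spec:
  containers:
    - name: shell
      image: busybox           # any image with a shell works
      command: ["sleep", "3600"]
      volumeMounts:
        - name: alien-cache
          mountPath: /cvmfs-aliencache
  volumes:
    - name: alien-cache
      persistentVolumeClaim:
        claimName: cvmfs-alien-cache
```

Then `kubectl exec` into the pod, `touch /cvmfs-aliencache/marker`, delete the pod, recreate it, and confirm the file is still there.

---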
In the nodeplugin container, running mount manually shows an error.

When CVMFS_NFS_SOURCE=yes, I need the same path in both CVMFS_ALIEN_CACHE and CVMFS_WORKSPACE. A change to

Once CVMFS_WORKSPACE was set, the mount works.

---
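A sketch of the working combination described above, again using the /cvmfs-aliencache path from this thread (the exact config file location is an assumption):

```sh
CVMFS_NFS_SOURCE=yes
CVMFS_ALIEN_CACHE=/cvmfs-aliencache
# With CVMFS_NFS_SOURCE=yes, the workspace must point at the same
# shared path as the alien cache, per the discussion above.
CVMFS_WORKSPACE=/cvmfs-aliencache
CVMFS_QUOTA_LIMIT=-1
CVMFS_SHARED_CACHE=no
```

---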
Thanks for looking into this, @johnstile! Are you using the squid cache you mentioned earlier?

---
I have not reached the squid proxy yet (I guess that is next). I am not sure I need a squid proxy if the alien cache is working (and now it is). Currently I have set CVMFS_HTTP_PROXY=DIRECT. What do you think?

---
This really depends on your use case. Alien cache is "append-only"; with squid you have a lot more control over how you store the data.

For CVMFS-specific questions I think you'll have better chances of getting qualified answers from https://cernvm-forum.cern.ch/

---
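If a squid proxy is added later, the CVMFS_HTTP_PROXY=DIRECT setting mentioned above would change to a proxy list. The hosts and port below are placeholders; the syntax (`|` separates load-balanced members, `;` separates fallback groups) is from the CVMFS client documentation:

```sh
# Placeholder proxy endpoints; DIRECT remains as a last-resort fallback group.
CVMFS_HTTP_PROXY="http://squid-1.example.com:3128|http://squid-2.example.com:3128;DIRECT"
```

---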
Original question: Do I need or can I use a squid cache proxy for the cvmfs-csi?

I do not see any mention of a squid proxy for this CVMFS solution. In the docs I see mention of an "alien cache volume", but I do not know what that is.