remove grand #475

Merged
merged 8 commits on Sep 11, 2024

@@ -72,7 +72,7 @@ Note: `hourstomove` must be greater than or equal to the available balance for t

Submit jobs to a suballocation. Note that the user should be on the suballocation’s user list.

-`Eg: qsub -l select=10,walltime=30:00,filesystems=grand:home -A <suballoctionID> -q demand test.sh`
+`Eg: qsub -l select=10,walltime=30:00,filesystems=eagle:home -A <suballocationID> -q demand test.sh`

Note: Once submanagement is enabled for a project allocation, all job submissions must specify the `suballocationID`.
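
For instance, a hypothetical submission against suballocation `6555` (the ID shown in the `sbank` listing below; all values here are illustrative) might look like:

```
qsub -l select=10,walltime=30:00,filesystems=eagle:home -A 6555 -q demand test.sh
```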

@@ -30,16 +30,16 @@
```
Totals:
Jobs : 3
```

-### List your project's quota on Grand and/or Eagle File system
+### List your project's quota on the Eagle file system
```
-> sbank-list-allocations -p ProjectX -r grand
+> sbank-list-allocations -p ProjectX -r eagle
Allocation Suballocation Start End Resource Project Quota
---------- ------------- ---------- ---------- -------- ----------- -----
-6687 6555 2020-12-16 2022-01-01 grand ProjectX 1.0
+6687 6555 2020-12-16 2022-01-01 eagle ProjectX 1.0

Totals:
Rows: 1
-Grand:
+Eagle:
Quota: 1.0 TB

> sbank-list-allocations -p ProjectX -r eagle
```
@@ -19,7 +19,6 @@ While requesting an allocation, users can choose from:
* Polaris

**File System:**
-* Grand
* Eagle (Community Sharing)

## Policy Information Related to Allocations
@@ -19,7 +19,7 @@ Before your project begins, you will receive an email with the following project
- **Project Proxies**: Project members designated by PIs that are authorized to add or renew project members on your behalf.
- **Allocation System(s) and Allocation Amount**: The approved system(s) and amount of your award in node hours.
- **Approved Quota**: The approved amount of disk space for your project directory.
-- **File System**: The file system where your project directory will reside. For information on the Grand and Eagle file systems, see Storage and Networking.
+- **File System**: The file system where your project directory will reside. For information on the Eagle file system, see Storage and Networking.
- **Assigned Catalyst**: INCITE projects will have ALCF staff members that are assigned to the projects who are available to assist the team throughout the duration of the INCITE allocation.
- **Allocation Start Date**: The start date of your award.
- **Allocation End Date**: The end date of your award.
@@ -6,7 +6,7 @@ If your build system does not require GPUs for the build process, compilation of

## Filesystem

-It is helpful to realize that currently there is a single _temporary_ filesystem `gecko` mounted on the Aurora login and compute nodes available to users, where both `home` and `project` spaces reside. It is important to realize that this filesystem is not backed up and users should take care to retain copies of important files (e.g. local resources or ALCF's `grand` and `eagle` filesystems).
+It is helpful to realize that currently there is a single _temporary_ filesystem `gecko` mounted on the Aurora login and compute nodes available to users, where both `home` and `project` spaces reside. This filesystem is not backed up, and users should take care to retain copies of important files elsewhere (e.g., on local resources or ALCF's `eagle` filesystem).
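
As a sketch of retaining copies elsewhere (assuming the bastion `ProxyJump` setup described in the data-management notes below; the username and all paths are illustrative):

```
# Hypothetical: push a results file from gecko on Aurora to an Eagle
# project directory via Polaris (requires the .ssh/config ProxyJump setup)
scp /lus/gecko/projects/MyProject/results.tar \
    knight@polaris.alcf.anl.gov:/eagle/MyProject/
```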

## OneAPI Programming Environment

6 changes: 3 additions & 3 deletions docs/aurora/data-management/lustre/gecko.md
@@ -25,7 +25,7 @@ When you use an SSH proxy, it takes the authentication mechanism from the local

### Transferring files from other ALCF systems

-With the bastion pass-through nodes currently used to access both Sunspot and Aurora, users will find it helpful to modify their .ssh/config files on Aurora appropriately to facilitate transfers to Aurora from other ALCF systems. These changes are similar to what Sunspot users may have already implemented. From an Aurora login-node, this readily enables one to transfer files from Sunspot's gila filesystem or one of the production filesystems at ALCF (home, grand, and eagle) mounted on an ALCF system's login node. With the use of ProxyJump below, entering the MobilePass+ or Cryptocard passcode twice will be needed (once for bastion and once for the other resource). A simple example shows the .ssh/config entries for Polaris and the scp command for transferring from Polaris:
+With the bastion pass-through nodes currently used to access both Sunspot and Aurora, users will find it helpful to modify their `.ssh/config` files on Aurora to facilitate transfers to Aurora from other ALCF systems. These changes are similar to what Sunspot users may have already implemented. From an Aurora login node, this readily enables transfers from Sunspot's `gila` filesystem or one of the production filesystems at ALCF (`home` and `eagle`) mounted on an ALCF system's login node. With the `ProxyJump` setup below, you will need to enter the MobilePass+ or Cryptocard passcode twice (once for bastion and once for the other resource). A simple example shows the `.ssh/config` entries for Polaris and the `scp` command for transferring from Polaris:

```
$ cat .ssh/config
```
@@ -40,7 +40,7 @@ Host polaris.alcf.anl.gov
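
The collapsed hunk above elides most of the file; a minimal sketch of what the full entries might look like (the bastion hostname and username are assumptions for illustration):

```
# Hypothetical ~/.ssh/config on an Aurora login node
Host bastion.alcf.anl.gov
    User knight

Host polaris.alcf.anl.gov
    ProxyJump bastion.alcf.anl.gov
    User knight
```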

```
-knight@aurora-uan-0009:~> scp knight@polaris.alcf.anl.gov:/grand/catalyst/proj-shared/knight/test.txt ./
+knight@aurora-uan-0009:~> scp knight@polaris.alcf.anl.gov:/eagle/catalyst/proj-shared/knight/test.txt ./
---------------------------------------------------------------------------
Notice to Users
...
@@ -50,5 +50,5 @@
...
[Password:
knight@aurora-uan-0009:~> cat test.txt
-from_polaris grand
+from_polaris eagle
```
8 changes: 4 additions & 4 deletions docs/aurora/sunspot-to-aurora.md
@@ -31,7 +31,7 @@
```
Host polaris.alcf.anl.gov
user knight
```

-From an Aurora login-node, this readily enables one to transfer files from Sunspot's gila filesystem or one of the production filesystems at ALCF (home, grand, and eagle). With the use of ProxyJump here, entering the MobilePass+ or Cryptocard passcode twice will be needed (once for bastion and once for the other resource).
+From an Aurora login node, this readily enables one to transfer files from Sunspot's `gila` filesystem or one of the production filesystems at ALCF (`home` and `eagle`). With the `ProxyJump` setup here, you will need to enter the MobilePass+ or Cryptocard passcode twice (once for bastion and once for the other resource).

This simple example transfers a file from Sunspot.

@@ -49,10 +49,10 @@
```
knight@aurora-uan-0009:~> cat test.txt
from_sunspot gila
```

-This simple example transfers a file from the grand filesystem via Polaris.
+This simple example transfers a file from the eagle filesystem via Polaris.

```
-knight@aurora-uan-0009:~> scp knight@polaris.alcf.anl.gov:/grand/catalyst/proj-shared/knight/test.txt ./
+knight@aurora-uan-0009:~> scp knight@polaris.alcf.anl.gov:/eagle/catalyst/proj-shared/knight/test.txt ./
---------------------------------------------------------------------------
Notice to Users
...
@@ -62,7 +62,7 @@
...
[Password:
knight@aurora-uan-0009:~> cat test.txt
-from_polaris grand
+from_polaris eagle
```

## Default software environment
28 changes: 14 additions & 14 deletions docs/data-management/acdc/eagle-data-sharing.md
@@ -1,4 +1,4 @@
-# Sharing Data on Grand/Eagle Using Globus Guest Collections
+# Sharing Data on Eagle Using Globus Guest Collections
## Overview

Collaborators throughout the scientific community can write data to and read scientific data from the Eagle filesystem using Globus sharing capabilities. This provides PIs with a natural and convenient storage space for collaborative work.
@@ -26,7 +26,7 @@ Type or scroll down to "Argonne LCF" in the "Use your existing organizational lo

You will be taken to a familiar-looking page for ALCF login. Enter your ALCF login username and password.

-## Accessing your Grand/Eagle Project Directory ##
+## Accessing your Eagle Project Directory ##
<!--- There are two ways for a PI to access their project directory on Eagle.

1. **Web Interface:** By logging in to Globus interface directly and navigating to the ALCF Eagle endpoint. -->
@@ -61,8 +61,8 @@ A project PI needs to have an 'active' ALCF account in place to create and share

There are multiple ways to navigate to the Collections tab in "Endpoints":
1. [Click the link to get started](https://app.globus.org/file-manager/collections/05d2c76a-e867-4f67-aa57-76edeb0beda0/shares). It will take you to the Collections tab for Eagle. **OR**
-2. Click on 'Endpoints' located in the left panel of the [Globus web app](https://app.globus.org/endpoints). Type "alcf#dtn_eagle" (for Eagle) or "alcf#dtn_grand" (for Grand) in the search box located at the top of the page and click the magnifying glass to search. Click on the Managed Public Endpoint "alcf#dtn_eagle" or "alcf#dtn_grand" from the search results. Click on the Collections tab. **OR**
-3. Click on 'File Manager' located in the left panel of the Globus web app. Search for 'alcf#dtn_Eagle' (or "alcf#dtn_grand") and select it in the Collection field. Select your project directory or a sub directory that you would like to share with collaborators as a Globus guest collection. Click on 'Share' on the right side of the panel, which will take you to the Collections tab.
+2. Click on 'Endpoints' located in the left panel of the [Globus web app](https://app.globus.org/endpoints). Type "alcf#dtn_eagle" in the search box located at the top of the page and click the magnifying glass to search. Click on the Managed Public Endpoint "alcf#dtn_eagle" from the search results. Click on the Collections tab. **OR**
+3. Click on 'File Manager' located in the left panel of the Globus web app. Search for 'alcf#dtn_eagle' and select it in the Collection field. Select your project directory or a subdirectory that you would like to share with collaborators as a Globus guest collection. Click on 'Share' on the right side of the panel, which will take you to the Collections tab.

**Note:** <!--- Shared endpoints always remain active.---> When you select an endpoint to transfer data to/from, you may be asked to authenticate with that endpoint. Follow the instructions on screen to activate the endpoint and to authenticate. You may also have to provide Authentication/Consent for the Globus web app to manage collections on this endpoint.

@@ -142,8 +142,8 @@ Globus supports setting permissions at a folder level, so there is no need to cr
<figcaption>Create new group</figcaption>
</figure>

-## Transferring data from Grand/Eagle
-Log in to [Globus](https://app.globus.org) using your ALCF credentials. After authenticating, you will be taken to the Globus File Manager tab. In the 'Collection' box, type the name of Eagle/Grand managed endpoint (```alcf#dtn_eagle``` or ```alcf#dtn_grand```). Navigate to the folder/file you want to transfer. HTTPS access (read-only) is enabled so you can download files by clicking the "Download" button.
+## Transferring data from Eagle
+Log in to [Globus](https://app.globus.org) using your ALCF credentials. After authenticating, you will be taken to the Globus File Manager tab. In the 'Collection' box, type the name of the Eagle managed endpoint (```alcf#dtn_eagle```). Navigate to the folder/file you want to transfer. HTTPS access (read-only) is enabled so you can download files by clicking the "Download" button.

Click on 'Download' to download the required file.
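
For scripted transfers, the Globus CLI offers an alternative to the web app. A minimal sketch, assuming the endpoint UUIDs have been looked up first (`$EAGLE_ID` and `$DEST_ID` are placeholders, and the paths are illustrative):

```
# Hypothetical sketch with the Globus CLI (pip install globus-cli)
globus login                              # authenticate once in a browser
globus endpoint search "alcf#dtn_eagle"   # note the endpoint UUID it reports
globus transfer "$EAGLE_ID:/ProjectX/data.tar" "$DEST_ID:/~/data.tar" \
    --label "copy-from-eagle"
```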

@@ -241,15 +241,15 @@ Alternatively, you can encrypt the files before transfer using any method on you

## FAQs
### General FAQs:
-**1. What are Eagle and Grand file systems?**
+**1. What is the Eagle file system?**

It is a Lustre file system residing on an HPE ClusterStor E1000 platform equipped with 100 Petabytes of usable capacity across 8480 disk drives. The ClusterStor platform also provides 160 Object Storage Targets and 40 Metadata Targets with an aggregate data transfer rate of 650 GB/s.

**2. What is the difference between a Guest, Shared, and Mapped collection?**

- Guest collections: A Guest collection is a logical construct that a PI sets up on their project directory in Globus that makes it accessible to collaborators. The PI creates a guest collection at or below their project and shares it with the Globus account holders.
- Shared collection: A guest collection becomes a shared collection when it is shared with a user/group.
-- Mapped Collections: Mapped Collections are created by the endpoint administrators. In the case of Eagle/Grand, these are created by ALCF.
+- Mapped Collections: Mapped Collections are created by the endpoint administrators. In the case of Eagle, these are created by ALCF.

**3. Who can create Guest collections?**

@@ -289,18 +289,18 @@ Yes. The PI needs to have an 'active' ALCF account in place to create and share

**3. What endpoint should the PI use?**

-```alcf#dtn_eagle``` (project on Eagle) or ```alcf#dtn_grand``` (project on Grand)
+```alcf#dtn_eagle``` (project on Eagle)

**4. What are the actions a PI can perform?**

- Create and delete guest collections, groups
- Create, delete and share the data with ALCF users and external collaborators
- Specify someone as a Proxy (Access Manager) for the guest collections
-- Transfer data between the guest collection on Eagle/Grand and other Globus endpoints/collections
+- Transfer data between the guest collection on Eagle and other Globus endpoints/collections
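
For scripted sharing, the Globus CLI can grant access on an existing guest collection. A sketch under assumptions (`$GUEST_ID` stands in for the guest collection UUID, and the identity is a placeholder):

```
# Hypothetical: grant read-only access to a collaborator on a guest collection
globus endpoint permission create "$GUEST_ID:/" \
    --permissions r --identity collaborator@example.org
```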

**5. How can a PI specify someone as a Proxy on the Globus side?**

-Go to alcf#dtn_eagle (or alcf#dtn_grand) -> collections -> shared collection -> roles -> select 'Access Manager'
+Go to alcf#dtn_eagle -> collections -> shared collection -> roles -> select 'Access Manager'

<figure markdown>
![Roles](files/roles.png){ width="700" }
@@ -317,7 +317,7 @@ Go to alcf#dtn_eagle (or alcf#dtn_grand) -> collections -> shared collection ->
1. PI requests a compute or data-only allocation project.
2. Once the request is approved, ALCF staff sets up a project, unixgroup, and project directory.
3. A Globus sharing policy is created for the project with appropriate access controls, provided the PI has an active ALCF account.
-4. PI creates a guest collection for the project, using the Globus mapped collection for the file system (alcf#dtn_eagle or alcf#dtn_grand).
+4. PI creates a guest collection for the project, using the Globus mapped collection for the file system (alcf#dtn_eagle).

- **Note:** PI needs to have an active ALCF Account and will need to log in to Globus using their ALCF credentials.
- If PI has an existing Globus account, it needs to be linked to their ALCF account.
@@ -326,7 +326,7 @@ Go to alcf#dtn_eagle (or alcf#dtn_grand) -> collections -> shared collection ->

**7. How can project members with ALCF accounts access the project directory via Globus?**

-Users that have active ALCF accounts and are part of the project in the ALCF Account and Project Management system will automatically have access to the project directory which they can access by browsing the Globus endpoint ```alcf#dtn_eagle or alcf#dtn_grand```. If they want to access the files using the Globus guest collection set up by the PI, the PI will need to explicitly give them permissions to that guest collection. The purpose of Globus guest collections is to share the data with collaborators that don't have ALCF accounts or are not part of the project in the ALCF Account and Project Management system.
+Users that have active ALCF accounts and are part of the project in the ALCF Account and Project Management system will automatically have access to the project directory, which they can access by browsing the Globus endpoint ```alcf#dtn_eagle```. If they want to access the files using the Globus guest collection set up by the PI, the PI will need to explicitly give them permissions to that guest collection. The purpose of Globus guest collections is to share the data with collaborators that don't have ALCF accounts or are not part of the project in the ALCF Account and Project Management system.

**8. Who has the permissions to create a guest collection?**

@@ -386,7 +386,7 @@ No. An access manager cannot create a collection, only a PI can do that. The acc

**7. Can an Access Manager leave a globus group or withdraw membership request for collaborators?**

-Yes. [Go to alcf#dtn_eagle (or alcf#dtn_grand) -> Groups -> group_name -> Members -> click on specific user -> Role & Status -> Set the appropriate status]
+Yes. [Go to alcf#dtn_eagle -> Groups -> group_name -> Members -> click on specific user -> Role & Status -> Set the appropriate status]

<figure markdown>
![Permission denied](files/roles.png){ width="700" }
3 changes: 1 addition & 2 deletions docs/data-management/data-transfer/using-globus.md
@@ -8,14 +8,13 @@ Basic documentation for getting started with Globus can be found at the followin
[https://docs.globus.org/how-to/](https://docs.globus.org/how-to/)

## Data Transfer Node
-Several data transfer nodes (DTNs) for `/home`, Grand, Eagle, and HPSS are available to ALCF users, allowing users to perform wide and local area data transfers. Access to the DTNs is provided via the following Globus endpoints.
+Several data transfer nodes (DTNs) for `/home`, Eagle, and HPSS are available to ALCF users, allowing users to perform wide and local area data transfers. Access to the DTNs is provided via the following Globus endpoints.

## ALCF Globus Endpoints
The Globus endpoint and the path to use depends on where your data resides. If your data is on:

- `/home` which is where your home directory resides: `alcf#dtn_home` for accessing `/home` (i.e. home directories on swift-home filesystem). Use the path `/<username\>`
- HPSS: `alcf#dtn_hpss`
-- Grand filesystem: `alcf#dtn_grand` for accessing `/lus/grand/projects` or `/grand` (i.e. project directories on Grand filesystem). Use the path `/grand/<project name\>`
- Eagle filesystem: `alcf#dtn_eagle` for accessing /`lus/eagle/projects` or `/eagle` (i.e project directories on Eagle filesystem). Use the path `/eagle/<project name\>`

After [registering](https://app.globus.org/), simply use the appropriate ALCF endpoint, as well as other sources or destinations. Use your ALCF credentials (your OTP generated by the CryptoCARD token with PIN or Mobilepass app) to activate the ALCF endpoint.
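
As a quick sanity check after activation, a hedged sketch of browsing an Eagle project path with the Globus CLI (`$EAGLE_ID` is a placeholder for the endpoint UUID, and the project name is illustrative):

```
# Hypothetical: list a project directory on the Eagle endpoint
globus endpoint search "alcf#dtn_eagle"   # find the endpoint UUID
globus ls "$EAGLE_ID:/eagle/ProjectX"
```
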
7 changes: 1 addition & 6 deletions docs/data-management/filesystem-and-storage/data-storage.md
@@ -6,11 +6,6 @@ The ALCF operates a number of file systems that are mounted globally across all
### Home
A Lustre file system residing on a DDN AI-400X NVMe Flash platform. It has 24 NVMe drives with 7 TB each with 123 TB of usable space. It provides 8 Object Storage Targets and 4 Metadata Targets.

-### Grand
-A Lustre file system residing on an HPE ClusterStor E1000 platform equipped with 100 Petabytes of usable capacity across 8480 disk drives. This ClusterStor platform provides 160 Object Storage Targets and 40 Metadata Targets with an aggregate data transfer rate of 650GB/s. The primary use of grand is compute campaign storage.
-
-Also see [ALCF Data Policies](https://www.alcf.anl.gov/support-center/facility-policies/data-policy) and [Data Transfer](../data-transfer/using-globus.md)

### Eagle
A Lustre file system residing on an HPE ClusterStor E1000 platform equipped with 100 Petabytes of usable capacity across 8480 disk drives. This ClusterStor platform provides 160 Object Storage Targets and 40 Metadata Targets with an aggregate data transfer rate of 650GB/s. The primary use of eagle is data sharing with the research community. Eagle has community sharing capabilities which allow PIs to [share their project data with external collaborators](../acdc/eagle-data-sharing.md) using Globus. Eagle can also be used for compute campaign storage.

@@ -36,7 +31,7 @@ HSI can be invoked by simply entering hsi at your normal shell prompt. Once auth

You may enter "help" to display a brief description of available commands.

-If archiving from or retrieving to grand or eagle you must disable the Transfer Agent. -T off
+If archiving from or retrieving to eagle, you must disable the Transfer Agent with `-T off`.

Example archive
```
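# Hypothetical sketch (assumed HSI syntax; file names are illustrative):
# archive a file to HPSS with the Transfer Agent disabled via -T off
hsi "put -T off /eagle/ProjectX/mydata.tar : mydata.tar"
```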