This needs to be discussed further. If we save the json file on the server, we need to make sure the file names don't clash (solution: include the date/time, down to microseconds, and the row ids of the selected rows in the string on which the md5sum is calculated, so the file is regenerated whenever the DB is updated). We would also need a periodic purge, e.g., after two weeks; if a user task that refers to this file tries to use it past the purge time, that will be a problem (solution: the user can fetch the file once, save a local copy in their workspace, and use that copy from then on, though they will have to pay for the cloud storage).
If we do save the json on the server, e.g., in a ..../data/tmp/ or ..../data/cache folder, then we should construct the full https path of the file to allow programmatic fetching from an R/Python program, etc., as sketched below.
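For discussion, here is a minimal sketch of what that server-side caching could look like. The cache directory, base URL, and function names are placeholders I made up for illustration, not anything that exists in the codebase yet:

```python
import hashlib
import json
import time
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical locations; the real cache folder and host are still to be decided.
CACHE_DIR = Path("/var/www/data/cache")
BASE_URL = "https://example.org/data/cache"  # placeholder base URL

def cache_json_blob(blob: dict, q: str, t: str, row_ids: list) -> str:
    """Write the json blob under a clash-free name and return the full
    https path so an R/Python program can fetch it."""
    # Hash the query, table, selected row ids, and a microsecond timestamp
    # so concurrent or repeated requests cannot produce colliding file names.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S.%f")
    key = f"{q}|{t}|{','.join(map(str, sorted(row_ids)))}|{stamp}"
    name = hashlib.md5(key.encode("utf-8")).hexdigest() + ".json"

    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    (CACHE_DIR / name).write_text(json.dumps(blob))
    return f"{BASE_URL}/{name}"

def purge_cache(max_age_days: int = 14) -> None:
    """Periodic cleanup: drop cached files older than the purge window (e.g., two weeks)."""
    cutoff = time.time() - max_age_days * 86400
    for f in CACHE_DIR.glob("*.json"):
        if f.stat().st_mtime < cutoff:
            f.unlink()
```

The purge function would run from a cron-style job; anything a user still needs past the window would have to be copied into their workspace first, as noted above.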
Sumana suggested that, for now, we let the user download the file locally and upload it to their workspace as needed using the file upload dialog in the workspace. What about when we implement a user workspace/account on the portal itself? Then we could save the file directly to the user's workspace/account (will file permissions be an issue?).
Daniel clarified that there is no need to actually save the json file. If we can construct a URL that emulates calling an API with all the q, t, and row_nums parameters needed to build the json blob, then that URL can be used to fetch the blob from an R/Python-like program in any workspace.
However, if many rows are selected, the URL length might exceed the limit.
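A rough client-side sketch of what this could look like, assuming a hypothetical /api/json_blob endpoint and the q, t, row_nums parameter names from the comment above (neither the route nor the host exists yet):

```python
import requests
from urllib.parse import urlencode

# Hypothetical endpoint; the real route and host are still to be defined.
API_ENDPOINT = "https://example.org/api/json_blob"

def build_blob_url(q: str, t: str, row_nums: list) -> str:
    """Encode the q, t, and row_nums parameters into a URL that returns the json blob."""
    query = urlencode({"q": q, "t": t, "row_nums": ",".join(map(str, row_nums))})
    url = f"{API_ENDPOINT}?{query}"
    # Many servers and proxies reject very long URLs (limits on the order of
    # 2-8 KB are common), so selecting many rows may not fit in a single GET.
    if len(url) > 2000:
        raise ValueError("URL too long; consider a POST body or a server-side file instead")
    return url

# Usage from any workspace: fetch the blob directly (or with httr/jsonlite in R).
blob = requests.get(build_blob_url("some_query", "some_table", [1, 5, 42])).json()
```

If the URL-length limit turns out to be a real constraint, falling back to a POST request or to the server-side cached file discussed above would be the obvious alternatives.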