Issue #264: Update qviz #401
Conversation
Cubes are obtained by using: These cubes are in a pandas Series and have the following structure (this is one of these cubes): Each of these is a block; blocks are related to a cube through the cubeId.
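The cube/block relationship described above could be sketched as follows. This is a hypothetical illustration only: the field names (`cubeId`, `element_count`, `min_weight`, `max_weight`) and values are assumptions, since the actual block structure is not shown in this comment.

```python
import pandas as pd

# Hypothetical block records; each block references its cube via cubeId.
# Field names and values are illustrative assumptions, not the real schema.
blocks = pd.DataFrame([
    {"cubeId": "",  "element_count": 100, "min_weight": 0,   "max_weight": 500},
    {"cubeId": "A", "element_count": 60,  "min_weight": 500, "max_weight": 800},
    {"cubeId": "A", "element_count": 40,  "min_weight": 800, "max_weight": 1000},
])

# Group the blocks into a pandas Series of cubes, keyed by cubeId,
# mirroring the "cubes in a pandas Series" structure described above.
cubes = pd.Series({cid: g.to_dict("records") for cid, g in blocks.groupby("cubeId")})

print(cubes["A"])  # cube "A" holds the two blocks that reference it
```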
This is the structure of the metadata for revision_id = 1 :
Merge the main branch into the qviz-bug branch so it is up to date with the main repository.
Can you add
Also, remove the redundant test tables. You can use the most complex one to cover all the test cases.
Some small things that I noticed while reviewing!
…fied by 0.01 in the dash server
….0 version of Apache Spark
I'm closing this PR as I'm not allowed to update the branch. We will be working on this issue with #437 instead.
Description
Fixes #264 .
With the new Qbeast format, the OTree Index Visualization (qviz) crashes when we try to run it, so it needs to be updated.
Type of change
Bug fix.
Checklist:
How Has This Been Tested? (Optional)
The OTree Visualization (qviz) was run locally on my PC.
Also, a PySpark shell was created to test the implemented fix. A large table of 4,000,000 rows of synthetic data was created. From this table, three Qbeast tables were created and read as Delta tables.
For each of these tables, we compared the total number of elements with the sum of the element_count values of its cubes. Since these values matched, we concluded the fix works as it should.
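The consistency check described above can be sketched in plain pandas for brevity (the actual test used a PySpark shell and Delta tables). The column name `element_count` matches the description; the table contents and cube IDs are illustrative assumptions.

```python
import pandas as pd

# Stand-in for the synthetic-data table (the real one had 4,000,000 rows).
table = pd.DataFrame({"value": range(1000)})

# Per-cube metadata: the element counts should partition the table exactly.
# Cube IDs and counts here are assumptions for illustration.
cubes = pd.DataFrame({
    "cubeId": ["", "A", "B"],
    "element_count": [200, 500, 300],
})

# The validation: total rows in the table must equal the sum of the
# per-cube element_count values read from the index metadata.
total_rows = len(table)
counted = cubes["element_count"].sum()
assert total_rows == counted, f"mismatch: {total_rows} != {counted}"
print("element counts match:", counted)
```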