Update Doros for new entries #19
Conversation
Regarding the `h5repack` thing: how about we just create a new HDF5 file from scratch and write only the relevant parts into it? Since nothing is copied first, we don't have to worry about data deletion in HDF5. What do you think?
I was thinking about it, and even tried it... but then you kind of need to know what the other data in the file is and how many "levels" deep it sits in the HDF5, as you cannot simply copy a whole "tree" into another file. And if we build up a fake file from scratch, we might miss something that they (the DOROS people) put in that makes the reader fail (as was the case with the new entries leading to this PR).
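For reference, a minimal sketch (not part of this PR) of the copy-into-a-fresh-file idea using h5py; the file paths, the `keep` set, and the helper name are hypothetical. Note that h5py's `Group.copy` does copy a group recursively, but root-level attributes, and any entries you don't know to keep, still need explicit handling, which is the concern raised above.

```python
import h5py


def copy_selected(src_path, dst_path, keep):
    """Copy only the root-level entries named in `keep` into a fresh HDF5 file."""
    with h5py.File(src_path, "r") as src, h5py.File(dst_path, "w") as dst:
        for name in src:
            if name in keep:
                src.copy(name, dst)  # recursive copy of this group/dataset
        # Root-level attributes are not part of any entry and must be copied separately.
        for key, value in src.attrs.items():
            dst.attrs[key] = value
```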
In this case I'm all ok with this PR, thanks!
The DOROS team added more entries at the root level of the HDF5 files.

Before, there were only BPMs and `METADATA`, and the BPMs were identified by not being `METADATA`. Now there are also `TIMESTAMPS_INDEX` and `TIMESTAMPS_TABLE` (I told Manuel these should maybe go into `METADATA`, as `METADATA` itself seems to be empty anyway... but so far they have not moved them).

I considered going by name (e.g. finding `_DOROS` in the name), but that would limit which machine data we can write into this format. I now identify BPMs as entries that have a subgroup `nbOrbitSamplesRead`, which should be pretty robust.

Added a test with a reduced version of the new data (the script to generate the data is also attached).
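The detection rule described above could look roughly like this (a sketch, not the actual PR code; the helper name is made up, and only `nbOrbitSamplesRead` is taken from the description):

```python
import h5py


def find_bpms(hdf_file):
    """Return the names of root-level groups that look like BPMs.

    A BPM is identified by containing an "nbOrbitSamplesRead" entry, so
    METADATA, TIMESTAMPS_INDEX and TIMESTAMPS_TABLE are skipped naturally.
    """
    return [
        name
        for name, entry in hdf_file.items()
        if isinstance(entry, h5py.Group) and "nbOrbitSamplesRead" in entry
    ]
```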