
Number of timestamps and neural samples not the same in particular file #24

Open
eackermann opened this issue Oct 24, 2017 · 2 comments
@eackermann
Collaborator

@jchutrue please post the code snippet below!

Context: one of the 7/10 .dat files (2200--2400) seems problematic. Using the default block size, I think it "worked", but it created lots of small epochs (with max_gap_size=300), which is unexpected for this file. With block_size=2**24, we got an error saying that the number of expected channel data packets to pack was 0 (which implies an empty new_ts), even though approximately 2 million samples were passed for packing.

What's going on?
Is jagular missing a special case? Is the data corrupt? Or both?
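For reference, here is a generic sketch of how max_gap_size-style epoch splitting typically works; this is a hypothetical helper illustrating the behavior described above, not jagular's actual implementation, and `split_into_epochs` is an invented name:

```python
import numpy as np

def split_into_epochs(timestamps, max_gap_size):
    """Split a timestamp array into epochs wherever consecutive
    samples are more than max_gap_size apart (hypothetical helper,
    not jagular's API)."""
    gaps = np.diff(timestamps)
    # indices where a new epoch should begin (one past each large gap)
    breaks = np.flatnonzero(gaps > max_gap_size) + 1
    return np.split(timestamps, breaks)

ts = np.array([0, 1, 2, 500, 501, 502, 1200])
epochs = split_into_epochs(ts, max_gap_size=300)
print(len(epochs))  # 3 epochs: [0,1,2], [500,501,502], [1200]
```

Under this kind of logic, anything that corrupts the timestamp stream (or its dtype) can manufacture spurious gaps, which would explain the "lots of small epochs" symptom.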

...

@eackermann eackermann added the bug label Oct 24, 2017
@eackermann
Collaborator Author

import time

import jagular as jag

filename = '/media/jchu/DataHDD/data/install/long-recording/untethered/07-11-2017/merged/install_07-11-2017_2200_0000_sd08.rec'
jfm = jag.io.JagularFileMap(filename)
print(jfm.timestamps)  # inspect the timestamps spanning the file map

# just look at channel 0 for the ripples
start = time.time()

jag.utils.extract_channels(jfm=jfm,
                           max_gap_size=300,
                           ts_out='/home/jchu/data/install/long-recording/three-day-analysis/temp/timestamps-test.raw',
                           ch_out_prefix='/home/jchu/data/install/long-recording/three-day-analysis/temp/test-',
                           subset=[0],
                           block_size=2**24,
                           verbose=False)
print("Took {} minutes to extract channel 0".format((time.time() - start) / 60))

@eackermann
Collaborator Author

Update: this issue was caused by datatype inconsistencies, in particular casting from np.uint32 to np.int32. We have since migrated almost exclusively to np.int64, although a few np.uint64s are still floating around, too.
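To illustrate why the uint32-to-int32 cast is dangerous (a standalone NumPy demo, not jagular code): any timestamp value above 2**31 - 1 wraps to a negative int32, so a monotonic timestamp stream appears to jump backwards, and downstream gap/epoch logic can end up discarding every timestamp while the sample count is untouched.

```python
import numpy as np

# Timestamps recorded as unsigned 32-bit counters; values above
# 2**31 - 1 do not fit in int32.
ts_u32 = np.array([2**31 - 1, 2**31, 2**31 + 100], dtype=np.uint32)

# Casting to int32 wraps the large values around to negative numbers:
ts_i32 = ts_u32.astype(np.int32)
print(ts_i32)  # [ 2147483647 -2147483648 -2147483548]

# The apparent "gap" between consecutive samples is now hugely negative,
# which breaks any gap-based epoch detection.
print(np.diff(ts_i32.astype(np.int64)))

# int64 holds the full uint32 range safely, which is why migrating the
# pipeline to int64 resolves the mismatch:
print(ts_u32.astype(np.int64))  # [2147483647 2147483648 2147483748]
```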
