This repository has been archived by the owner on Jun 7, 2022. It is now read-only.

Option to Store Error-Correction Data Externally (Feature Request) #277

Open
Cygon opened this issue Aug 8, 2020 · 5 comments

@Cygon

Cygon commented Aug 8, 2020

Feel free to close this feature request if such functionality isn't technically possible or you want to keep it as a container format :)

I'm using BlockyArchive to ensure integrity of HD audio tracks and mountain biking clips I'm storing on my NAS (w/RAID). The issue is, I want to still access the files while also protecting them from bit rot, bad shutdowns etc.

It would be nice if BlockyArchive was able to store its error-correction data in a separate file and leave the file-to-be-protected untouched.

I don't know if BlockyArchive currently changes the file's byte sequence inside its container (to protect against a block of n consecutive corrupted bytes), but from my naive understanding it would still work if one, for example, stepped through the file in 100-byte steps and calculated a checksum of those bytes, then offset the entire process by one byte, repeated, and so on...
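For what it's worth, the stepped-checksum idea could be sketched roughly like this (purely illustrative - `interleaved_checksums` is a hypothetical helper, and CRC32 with a 100-byte step are arbitrary choices, not anything blockyarchive actually does):

```python
# Toy sketch of the "stepped checksum" idea: split a file's bytes into
# `step` interleaved groups (bytes 0, step, 2*step, ... form group 0,
# and so on) and checksum each group separately. A burst of up to `step`
# consecutive corrupted bytes then touches each group at most once.
import zlib

def interleaved_checksums(data, step=100):
    """CRC32 of every step-th byte, one checksum per starting offset."""
    return [zlib.crc32(data[offset::step]) for offset in range(step)]

checks = interleaved_checksums(b"example payload " * 50, step=100)
assert len(checks) == 100  # one checksum per interleaved group
```

Corrupting a single byte then changes exactly one of the 100 checksums, so a short burst of consecutive errors never overwhelms any single group.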

@darrenldl
Owner

Thanks for the interest! It's a bit late over here, so my apologies if I miss anything in the following reply - I'll add the missing info when I wake up.

> I'm using BlockyArchive to ensure integrity of HD audio tracks and mountain biking clips I'm storing on my NAS (w/RAID). The issue is, I want to still access the files while also protecting them from bit rot, bad shutdowns etc.
>
> It would be nice if BlockyArchive was able to store its error-correction data in a separate file and leave the file-to-be-protected untouched.

So your need would be closer to what is provided by parchive2, where the error-correction data is stored as separate files. I am not personally inclined to retrofit that into blockyarchive, since that was not the intended design of blockyarchive or SeqBox (the format blockyarchive is based on), and also because parchive2 already does that very well.

I'll note that I don't know which parchive2 implementation to recommend specifically: it's a rather old format, and several mature implementations are available, but I don't use it in my everyday life.

One caveat with using parchive2: if your file system metadata is corrupted, PAR2 archives may not survive the resulting fragmentation, and data recovery may be very difficult. A demo illustrating the fragmentation problem can be seen in the SeqBox README.

You can obviously use par2 and then blockyarchive on top, but then you end up with a duplicate of the original data again (back to square one). So it's a bit pointless to use both, since your concern seems to be space usage.

> I don't know if BlockyArchive currently does change the file's sequence of bytes in its container (to protect against a block of n consecutive corrupted bytes), but from my naive understanding, it would still work if one i.e. stepped through a file in 100 byte steps (for example) and calculated the sum of those bytes, offset the entire process by one byte, repeat...

So yes, that would absolutely work - it is essentially just moving all the headers and FEC data of the individual blocks into a separate file. But it would somewhat defeat the purpose of surviving fragmentation, as implied above.

Possible solutions

So from my perspective, there are two main ways to go about this.

  1. Utilise the streaming behaviour of blockyarchive's decode mode somehow. The behaviour is triggered by asking it to use stdout as the decode output, e.g. blkar decode file.ecsbx -; the original file's byte sequence is then piped to stdout in the original order (assuming the archive is intact and so on). If the music player can at some point accept a stream of bytes as input, this might work.

For example, with mplayer, you'd do something like blkar decode file.ecsbx - | mplayer -cache 1024 -.

Or, over the network using ncat - on the server side: blkar decode file.ecsbx - | ncat -l 4000,
on the client side: ncat server_ip 4000 | mplayer -cache 1024 -

I'll say I have no clue how feasible this is with your particular NAS and client computer.

  2. Use a more robust file system to reduce the risk of file system metadata corruption, and swap to parchive2.

Again, all bets are off if FS metadata is lost, but this gives you the best space saving. You'll have to gauge how well the selected FS tolerates faults, and how it would interact with RAID. Note that RAID can also propagate faults - I'm uncertain whether that would damage the FS further during an incident, however.

Transitioning to parchive2

In terms of transitioning to parchive2, the easiest way would seem to be to just decode everything (blkar will notify you if any faults are detected during decoding), then use repair mode on any damaged containers and decode again.

Alternatively, you can use check mode first, then repair if needed, then decode.

Conclusion

Overall blockyarchive does not seem to exactly suit your need, and other solutions might be better. I'm happy to help out with transitions if needed, and if time permits.

@darrenldl
Owner

darrenldl commented Aug 9, 2020

Okay, I just remembered there is a project that fits your description of "blockyarchive, but with the recovery data stored separately" while still allowing data recovery when file system metadata is lost: BlockHashLoc. I don't have an implementation for it, though.

I don't know how robust the original implementation is exactly, however.

EDIT: Okay, thinking a bit more, this would be an interesting addition either as a separate project or integrated into blockyarchive. I'd say I'm quite interested in making a modified implementation of BlockHashLoc, but I likely don't have the time or spirit to commit to that right now (blockyarchive took many months of designing, coding and testing, and sits at 19k lines of program code, and 17k lines of test code).

BlockHashLoc looks (much) simpler in principle, but I would not rule out the possibility of the project growing complex in a similar manner.

@Cygon
Author

Cygon commented Aug 10, 2020

Wow, thank you for the detailed response and suggestions!

Decoding on-the-fly is an idea I hadn't thought of, and it should work as long as I don't jump around in the file.

PAR2 looks like it might fit the bill (not sure about sequential bit corruption yet), but most tooling is positively ancient. I'll do a bit of testing; if it survives sequential bit errors and the code still builds without too much hassle, I'll use it for files I still intend to watch/listen to.

BlockHashLoc looks confusing. From the description, it seems to merely store the hashes of FS allocation blocks so that it can put a file together again if the FS index table is gone (but the data stored for the file is still intact).

(
My scenario is just plain files on ext4 w/RAID-5. The (proprietary) NAS can "scrub" (read, verify and re-write all data), but unless a drive is busted, if the parity doesn't match, it has no way of knowing which of the drives is in error.

Overall, I'm less concerned about the entire file system becoming corrupt (after all, there are multiple copies of the FS index and RAID on top) and more about silent bit flips that I only notice when I watch my footage after a few years.
)

@darrenldl
Owner

darrenldl commented Aug 10, 2020

> Decoding on-the-fly is an idea I didn't think of and should work so long as I don't jump around in the file.

This reminds me: since the original data is still readable from the container as long as you calculate the offsets correctly, one could provide read-only access via a file system layer such as FUSE (if it's Linux).
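A minimal sketch of the offset arithmetic such a layer would need, assuming 512-byte blocks with 16-byte headers (the SeqBox defaults) and a single leading metadata block - real containers may differ, so treat the constants as illustrative:

```python
# Map a logical offset in the original file to its byte position inside
# an SBX-style container. Assumes 512-byte blocks, 16-byte headers and
# one leading metadata block (illustrative SeqBox-like defaults).
BLOCK_SIZE = 512
HEADER_SIZE = 16
DATA_PER_BLOCK = BLOCK_SIZE - HEADER_SIZE  # 496 payload bytes per block

def container_offset(logical, meta_blocks=1):
    """Container byte position holding logical byte `logical` of the file."""
    block, within = divmod(logical, DATA_PER_BLOCK)
    return (meta_blocks + block) * BLOCK_SIZE + HEADER_SIZE + within

assert container_offset(0) == 528     # first data byte, after metadata block
assert container_offset(496) == 1040  # first byte of the second data block
```

A read-only FUSE handler would then serve a read at logical offset x by seeking to container_offset(x) and stitching payloads across block boundaries.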

> PAR2 looks like it might fit the bill (not sure about sequential bit corruption yet), but most tooling is positively ancient. I'll do a bit of testing, if it's survives sequential bit errors and the code still builds without too much hassle, I'll use it for files I still intent to watch/listen to.

For error correction:

From my understanding, PAR2 is much more robust compared to ECSBX (default output format of blockyarchive), as it can repair errors of any pattern (if my understanding is correct) as long as the total error rate is less than the redundancy rate. This is not the case for ECSBX, i.e. there are cases where the total error rate is less than the redundancy rate, but blockyarchive would not be able to repair.
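As an illustration of that last point (with made-up parameters, not blockyarchive's actual layout): suppose each Reed-Solomon group has 10 data + 2 parity blocks, so a group can only survive losing at most 2 blocks:

```python
# Toy model: blocks are grouped into RS groups of 10 data + 2 parity.
# Repair succeeds only if no single group loses more than 2 blocks,
# regardless of how low the *overall* error rate is.
def repairable(lost_blocks, data_per_group=10, parity_per_group=2):
    group_size = data_per_group + parity_per_group
    losses = {}
    for b in lost_blocks:
        g = b // group_size
        losses[g] = losses.get(g, 0) + 1
    return all(n <= parity_per_group for n in losses.values())

# 3 scattered losses out of 120 blocks: repairable.
assert repairable([5, 40, 90])
# 3 *consecutive* losses in one group: unrepairable, despite the same
# overall error rate (2.5%, well under the ~17% redundancy rate).
assert not repairable([24, 25, 26])
```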

For whether code builds without too much hassle:

Afaik it is still buildable on most platforms. There might be no one maintaining it anymore, which might be fine as I'd imagine it is very well tested, but I can't say anything concrete really as I didn't review the code base.

> BlockHashLoc looks confusing. From the description, it seems to merely store the hashes of FS allocation blocks so that it can put a file together again if the FS index table is gone (but the data stored for the file is still intact).

That is correct. It indeed stores the hashes of the allocation blocks, so it can be used to rediscover the blocks if the FS index table is gone.

Error correction was never the intention for SeqBox from my conversation with the author, and I am guessing that is also the case for BlockHashLoc. So yes, the assumption of both projects would be that the data is completely intact.

In the general case, data discovery is easier for SBX/ECSBX containers than for BlockHashLoc: for SBX/ECSBX, the software can find the allocation blocks belonging to the container just by scanning for blocks that start with "SBx", whereas BlockHashLoc needs to hash every single allocation block of the FS.
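To make the contrast concrete, here is a toy sketch of the two discovery strategies (hypothetical helpers; fixed 512-byte blocks and SHA-256 are simplifying assumptions on my part):

```python
# Two ways to rediscover a file's blocks in a raw disk image once the
# FS index is gone: a cheap magic-prefix check per block (SBX-style)
# versus hashing every single block (BlockHashLoc-style).
import hashlib

BLOCK = 512

def find_sbx_blocks(image):
    """SBX-style discovery: just check each block's 3-byte prefix."""
    return [i for i in range(0, len(image), BLOCK)
            if image[i:i + 3] == b"SBx"]

def find_hashed_blocks(image, wanted_hashes):
    """BlockHashLoc-style discovery: hash every block and compare."""
    return [i for i in range(0, len(image), BLOCK)
            if hashlib.sha256(image[i:i + BLOCK]).digest() in wanted_hashes]
```

The prefix check touches 3 bytes per block; the hash-based scan must read and digest every block in full, which is the extra cost mentioned above.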

> My scenario are just plain files on ext4 w/RAID-5. The (proprietary) NAS can "scrub" (read, verify and re-write all data), but unless a drive is busted, if the parity doesn't match, it has no way of knowing which of the drives is in error.

Fair enough, looks like a sane/typical setup.

> Overall, I'm less concerned about the entire file system becoming corrupt (after all, there are multiple copies of the FS index and RAID on top) and more about silent bit flips that I only notice when I watch my footage after a few years.

Okay yeah, in that case, either blockyarchive or PAR2 should fit the bill for long-term storage. I actually used to think bit rot on perfectly normal devices doesn't happen that often, but a user of blockyarchive suggested otherwise, so there you go.


A practical plan for software more suited to your case would be quite straightforward, and could reuse most of blockyarchive's code base. Let me know if PAR2 ends up working okay for you; if not, I might be more inclined to add the feature discussed into blockyarchive.

@darrenldl
Owner

darrenldl commented Aug 21, 2020

Hey @Cygon (more out of curiosity than due to any urgency, so don't worry if you're too busy/whatever to reply) - any updates on your tests? Also, how do you test these things? (Always curious how people evaluate whether they trust something in practice.)
