Remove improperly supported "unbounded" size #125
Merged
This should follow #124.

Squeezes out another 5% or so on top of the result of #124. Although unbounded encoding lengths could technically be considered part of the spec, supporting them properly would take a significant performance hit. If one really wants to support them, the entire stack must be overhauled.
There are two rationales:

- A `byte[]` cannot be of length greater than `int.MaxValue`. In particular, `buffer[offset]` where `offset` is greater than `int.MaxValue` is likely to fail.
- With lengths capped at `int.MaxValue`, the maximum size of a `byte[]` comes out to be about 2GB. For all intents and purposes, this should be enough for now for encoding a single `IValue`.

Note that serialization of `BigInteger`, i.e. an integer without a limit (as long as it doesn't exceed `int.MaxValue` number of digits in length), is still supported.

`Encode()` without bloat
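
The first rationale can be illustrated with a minimal sketch (the method name `ReadLength` and the `':'` length delimiter are assumptions for illustration, not this library's actual API): a decimal length prefix is decoded into a plain `int`, rejecting anything that a .NET `byte[]` could never hold.

```csharp
using System;

// Hypothetical sketch, not the library's actual code: decode a decimal
// length prefix (e.g. "123:") into a plain int, rejecting any length
// that a .NET byte[] could never have.
internal static class LengthPrefix
{
    public static int ReadLength(byte[] buffer, ref int offset)
    {
        long length = 0;
        while (buffer[offset] != (byte)':')
        {
            length = length * 10 + (buffer[offset] - (byte)'0');
            if (length > int.MaxValue)
            {
                // A byte[] cannot be longer than int.MaxValue (~2 GB),
                // so an "unbounded" length could never be materialized.
                throw new FormatException("Length exceeds int.MaxValue.");
            }
            offset++;
        }
        offset++;  // Skip the ':' delimiter.
        return (int)length;
    }
}
```

Since `buffer[offset]` indexing takes an `int` anyway, carrying the decoded length as a `long` through the rest of the stack would only add casts and range checks on every access.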