
Make conv return floats when given integer arrays #411

Closed
wants to merge 1 commit

Conversation

rpigott

@rpigott rpigott commented Mar 14, 2021

Ref #410, suggestion from the comments.

This removes an unnecessary round. Dispatch for this case is now handled the same as for arrays of generic Number type, so the return type of conv([1, 2, 3], [4, 5, 6]) is now Float64 instead of Int, correctly reflecting the internal precision. Users can get the old behavior by calling round themselves, i.e. round.(Int, conv(a, b)) or perhaps even round.(promote_type(eltype(a), eltype(b)), conv(a, b)) for other integer types.

If DSP learns a method of integer convolution in the future, the return type for that method should be an integer.
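A minimal sketch of the behavior change described above (exact floating-point values may differ by round-off, since conv uses a floating-point algorithm internally for these inputs):

using DSP

a = [1, 2, 3]
b = [4, 5, 6]

# With this change, integer inputs yield a Float64 result that reflects
# the floating-point computation used internally:
c = conv(a, b)             # ≈ [4.0, 13.0, 28.0, 27.0, 18.0]

# To recover the previous integer-valued behavior, round explicitly:
round.(Int, c)             # [4, 13, 28, 27, 18]

# Or round to the promoted element type of the inputs:
round.(promote_type(eltype(a), eltype(b)), c)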

This removes an unnecessary round. Dispatch for this case is now
handled the same as for arrays of generic Number type.
@galenlynch
Member

galenlynch commented Mar 15, 2021 via email

@galenlynch
Member

Any comments?

@ssfrr
Contributor

ssfrr commented Mar 30, 2021

I know that convolution is used in non-DSP contexts for other purposes (like multiplying huge numbers), and in those contexts true integer convolution makes sense and should probably be implemented without FFTs.

That said, this is a DSP package and if I remember correctly, supporting the integer case has added complexity in the past. I think it's reasonable to say that convolution returns floats.
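For reference, a minimal sketch (not DSP.jl code; direct_conv is a hypothetical name) of what a direct, FFT-free integer convolution looks like, using the big-number-multiplication use case mentioned above:

# Direct convolution in exact integer arithmetic (hypothetical helper, not part of DSP.jl):
function direct_conv(u::AbstractVector{<:Integer}, v::AbstractVector{<:Integer})
    out = zeros(promote_type(eltype(u), eltype(v)), length(u) + length(v) - 1)
    for i in eachindex(u), j in eachindex(v)
        out[i + j - 1] += u[i] * v[j]
    end
    return out
end

# Example: digit-wise multiplication of 123 and 456; the result holds the
# digit products before carry propagation (4*10^4 + 13*10^3 + ... = 56088).
direct_conv([1, 2, 3], [4, 5, 6])   # [4, 13, 28, 27, 18]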

@galenlynch
Member

Well, I'm in favor of making this change for now, until we have real integer convolution. Could you please fix the tests?

@martinholters
Member

I'd like to have a true integer convolution, but while we don't have it, being honest and returning floats seems better than rounding. So generally in favor, unless someone surprises me by submitting a proper integer-capable convolution.

@galenlynch
Member

galenlynch commented Apr 26, 2021

I have one that is basically complete, though I've stalled out on how to select which algorithm to use. I've added a bunch of direct convolution algorithms that perform differently for different inputs, and have also added a bunch of functions that try to estimate the run time based on some simple fits that I did to a battery of benchmarks.

I would love some help brainstorming how to structure the code that picks the best algorithm for a given input size and type. The code is on this branch: https://github.com/galenlynch/DSP.jl/blob/merge_conv_filt/src/dspbase.jl

I can make a draft PR soon and discuss where I've stalled out.

EDIT:
In the meantime, in that branch I added:

  • multithreaded convolution for overlap-save
  • direct convolution (+multithreading) for ND convolution
  • direct convolution (+multithreading) for ND separable convolution
  • direct convolution (+multithreading) for vector convolution
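
A hypothetical sketch of the kind of cost-model-based selection described above; the cost functions, their constants, and the conv_direct/conv_os names are placeholders, not the code in the linked branch:

# Rough per-algorithm cost estimates (placeholder models, not fitted values):
direct_cost(nu, nv) = nu * nv                          # O(nu*nv) multiply-adds
fft_cost(nu, nv)    = (n = nu + nv - 1; 3n * log2(n))  # roughly three FFTs of length n

# Pick whichever estimate is cheaper for the given input sizes.
function conv_auto(u::AbstractVector, v::AbstractVector)
    nu, nv = length(u), length(v)
    if direct_cost(nu, nv) <= fft_cost(nu, nv)
        return conv_direct(u, v)   # hypothetical direct time-domain method
    else
        return conv_os(u, v)       # hypothetical overlap-save / FFT-based method
    end
end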

@danilo-bc

I'd like to have a true integer convolution, but while we don't have it, being honest and returning floats seems better than rounding. So generally in favor, unless someone surprises me by submitting a proper integer-capable convolution.

I agree with @martinholters. Integer convolutions have their niche, but if there's no proper true integer convolution, it's better to be fair and return a float.

@martinholters
Member

Here is a simple N-D time-domain convolution I've cooked up:

function tdconv(u::AbstractArray{<:Number, N}, v::AbstractArray{<:Number, N}) where {N}
    return [
        sum(u[m] * v[n-m]
            # valid ranges (u and v assumed zero outside):
            #    firstindex(u) <= m <= lastindex(u)
            # and
            #    firstindex(v) <= n-m <= lastindex(v)
            # or equivalently
            #    n-lastindex(v) <= m <= n-firstindex(v)
            for m in CartesianIndices(ntuple(d ->
                max(firstindex(u,d),n[d]-lastindex(v,d)):min(lastindex(u,d), n[d]-firstindex(v,d)),
                N
            ))
        )
        for n in CartesianIndices(ntuple(d -> 2:size(u,d)+size(v,d), N))
    ]
end

I hope to find some time to hook it up so that conv dispatches to it for integer input and probably also for small-sized float inputs, but I need to do some benchmarking to figure out what a reasonable cut-off would be.
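
A quick usage sketch of the function above (the expected outputs follow from the definition of convolution; integer inputs stay in exact integer arithmetic):

tdconv([1, 2, 3], [4, 5, 6])      # [4, 13, 28, 27, 18]
tdconv([1 0; 0 1], [1 1; 1 1])    # 3×3 matrix: [1 1 0; 1 2 1; 0 1 1]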

@wheeheee
Contributor

wheeheee commented Mar 8, 2024

Should be obsoleted by #545. Close?

@martinholters
Member

Should be obsoleted by #545. Close?

Should be auto-closed when #545 is merged, but if you prefer, you can also close it; I think we're pretty sure we'll go with #545 once it's fully in shape.

@rpigott rpigott closed this Mar 11, 2024
@rpigott rpigott deleted the noround branch March 11, 2024 10:23
@rpigott
Author

rpigott commented Mar 11, 2024

#545 looks good! Thanks for working on this!
