
Basis for Nystrom methods #6

Merged (1 commit) on Oct 11, 2023
2 changes: 2 additions & 0 deletions Project.toml
@@ -5,6 +5,7 @@ version = "0.1.0"
[deps]
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"

[weakdeps]
@@ -19,6 +20,7 @@ IntiVTKExt = "WriteVTK"

[compat]
Gmsh = "0.2"
SpecialFunctions = "2"
StaticArrays = "1"
WriteVTK = "1"
julia = "1.6"
5 changes: 5 additions & 0 deletions src/Inti.jl
@@ -4,6 +4,7 @@ const PROJECT_ROOT = pkgdir(Inti)

using LinearAlgebra
using StaticArrays
using SpecialFunctions
using Printf

# helper functions
@@ -21,6 +22,10 @@ include("domain.jl")
include("mesh.jl")
include("quadrature.jl")

# Nyström methods
include("kernels.jl")
include("nystrom.jl")

# # integral operators
# include("integral_potentials.jl")
# include("integral_operators.jl")
274 changes: 274 additions & 0 deletions src/kernels.jl
@@ -0,0 +1,274 @@
"""
    abstract type AbstractKernel{T}

A kernel function `K` with the signature `K(target,source)::T`.

See also: [`GenericKernel`](@ref), [`SingleLayerKernel`](@ref), [`DoubleLayerKernel`](@ref), [`AdjointDoubleLayerKernel`](@ref), [`HyperSingularKernel`](@ref)
"""
abstract type AbstractKernel{T} end

return_type(::AbstractKernel{T}) where {T} = T

"""
    struct GenericKernel{T,F} <: AbstractKernel{T}

An [`AbstractKernel`](@ref) with `kernel` of type `F`.
"""
struct GenericKernel{T,F} <: AbstractKernel{T}
    kernel::F
end

"""
    abstract type AbstractPDE{N}

A partial differential equation in dimension `N`. `AbstractPDE` types are used
to define `AbstractPDEKernel`s.
"""
abstract type AbstractPDE{N} end

ambient_dimension(::AbstractPDE{N}) where {N} = N

"""
    abstract type AbstractPDEKernel{T,Op} <: AbstractKernel{T}

An [`AbstractKernel`](@ref) with an associated `pde::Op` field.
"""
abstract type AbstractPDEKernel{T,Op} <: AbstractKernel{T} end

"""
    pde(K::AbstractPDEKernel)

Return the underlying `AbstractPDE` when `K` corresponds to a kernel derived
from the Green's function of a PDE.
"""
pde(k::AbstractPDEKernel) = k.pde

parameters(k::AbstractPDEKernel) = parameters(pde(k))

# convenient constructor
function (K::Type{<:AbstractPDEKernel})(op, T::DataType=default_kernel_eltype(op))
    return K{T,typeof(op)}(op)
end
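The constructor above lets a concrete kernel type be called with just a PDE instance, deducing the element type automatically. A minimal standalone sketch of the same pattern, where `MyPDEKernel`, `MyKernel`, `MyPDE`, and `my_default_eltype` are hypothetical stand-ins rather than package API:

```julia
# Standalone replica of the "convenient constructor" pattern; all names here
# are illustrative stand-ins for the package's kernel and PDE types.
abstract type MyPDEKernel{T,Op} end

struct MyKernel{T,Op} <: MyPDEKernel{T,Op}
    pde::Op
end

struct MyPDE end
my_default_eltype(::MyPDE) = Float64

# Calling the *type* with a PDE instance fills in the `{T,Op}` parameters.
function (K::Type{<:MyPDEKernel})(op, T::DataType=my_default_eltype(op))
    return K{T,typeof(op)}(op)
end

k = MyKernel(MyPDE())   # MyKernel{Float64, MyPDE}(MyPDE())
```

The call `MyKernel(MyPDE())` dispatches to the `Type{<:MyPDEKernel}` functor because Julia does not auto-generate an outer constructor when `T` cannot be deduced from the fields.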

"""
    struct SingleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}

The free-space single-layer kernel (i.e. the fundamental solution) of an
`Op <: AbstractPDE`.
"""
struct SingleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}
    pde::Op
end

"""
    struct DoubleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}

Given an operator `Op`, construct its free-space double-layer kernel. This
corresponds to the `γ₁` trace of the [`SingleLayerKernel`](@ref). For operators
such as [`Laplace`](@ref) or [`Helmholtz`](@ref), this is simply the normal
derivative of the fundamental solution with respect to the source variable.
"""
struct DoubleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}
    pde::Op
end

"""
    struct AdjointDoubleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}

Given an operator `Op`, construct its free-space adjoint double-layer kernel.
This corresponds to `transpose(γ₁,ₓ[G])`, where `G` is the
[`SingleLayerKernel`](@ref). For operators such as [`Laplace`](@ref) or
[`Helmholtz`](@ref), this is simply the normal derivative of the fundamental
solution with respect to the target variable.
"""
struct AdjointDoubleLayerKernel{T,Op} <: AbstractPDEKernel{T,Op}
    pde::Op
end

"""
    struct HyperSingularKernel{T,Op} <: AbstractPDEKernel{T,Op}

Given an operator `Op`, construct its free-space hypersingular kernel. This
corresponds to `transpose(γ₁,ₓγ₁[G])`, where `G` is the
[`SingleLayerKernel`](@ref). For operators such as [`Laplace`](@ref) or
[`Helmholtz`](@ref), this is simply the normal derivative of the
[`DoubleLayerKernel`](@ref) with respect to the target variable.
"""
struct HyperSingularKernel{T,Op} <: AbstractPDEKernel{T,Op}
    pde::Op
end

################################################################################
################################# LAPLACE ######################################
################################################################################

"""
    struct Laplace{N}

Laplace equation in `N` dimensions: Δu = 0.
"""
struct Laplace{N} <: AbstractPDE{N} end

Laplace(; dim=3) = Laplace{dim}()

function Base.show(io::IO, pde::Laplace)
    return print(io, "Δu = 0")
end

default_kernel_eltype(::Laplace) = Float64
default_density_eltype(::Laplace) = Float64

parameters(::Laplace) = nothing

function (SL::SingleLayerKernel{T,Laplace{N}})(target, source,
                                               r=coords(target) - coords(source))::T where {N,T}
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        return filter * (-1 / (2π) * log(d))
    elseif N == 3
        return filter * (1 / (4π) / d)
    else
        notimplemented()
    end
end

function (DL::DoubleLayerKernel{T,Laplace{N}})(target, source,
                                               r=coords(target) - coords(source))::T where {N,T}
    ny = normal(source)
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        return filter * (1 / (2π) / (d^2) * dot(r, ny))
    elseif N == 3
        return filter * (1 / (4π) / (d^3) * dot(r, ny))
    else
        notimplemented()
    end
end

function (ADL::AdjointDoubleLayerKernel{T,Laplace{N}})(target, source,
                                                       r=coords(target) - coords(source))::T where {N,T}
    nx = normal(target)
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        return filter * (-1 / (2π) / (d^2) * dot(r, nx))
    elseif N == 3
        return filter * (-1 / (4π) / (d^3) * dot(r, nx))
    end
end

function (HS::HyperSingularKernel{T,Laplace{N}})(target, source,
                                                 r=coords(target) - coords(source))::T where {N,T}
    nx = normal(target)
    ny = normal(source)
    d = norm(r)
    d == 0 && (return zero(T))
    if N == 2
        return 1 / (2π) / (d^2) * transpose(nx) * ((I - 2 * r * transpose(r) / d^2) * ny)
    elseif N == 3
        ID = SMatrix{3,3,Float64,9}(1, 0, 0, 0, 1, 0, 0, 0, 1)
        RRT = r * transpose(r) # r ⊗ rᵗ
        return 1 / (4π) / (d^3) * transpose(nx) * ((ID - 3 * RRT / d^2) * ny)
    end
end
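As a sanity check on the Laplace formulas above, the 3D single- and double-layer kernels can be evaluated directly. The snippet below re-implements just the two scalar formulas with plain `SVector`s in place of the package's `coords`/`normal` accessors, so it is a standalone sketch rather than the package API:

```julia
using LinearAlgebra, StaticArrays

# 3D Laplace single-layer kernel G(x,y) = 1/(4π|x-y|), as in the code above.
laplace_sl(x, y) = 1 / (4π * norm(x - y))

# 3D double-layer kernel ∂G/∂n(y) = (x-y)⋅n(y) / (4π|x-y|³).
function laplace_dl(x, y, ny)
    r = x - y
    d = norm(r)
    return dot(r, ny) / (4π * d^3)
end

x  = SVector(0.0, 0.0, 1.0)
y  = SVector(0.0, 0.0, 0.0)
ny = SVector(0.0, 0.0, 1.0)

laplace_sl(x, y)       # 1/(4π) ≈ 0.0796
laplace_dl(x, y, ny)   # also 1/(4π) here, since r ∥ n(y) and |r| = 1
```

A finite-difference check confirms the double layer really is the normal derivative of the single layer in the source variable, which is the `γ₁` trace relation stated in the docstrings.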

################################################################################
################################# Helmholtz ####################################
################################################################################

"""
    struct Helmholtz{N,K}

Helmholtz equation in `N` dimensions: Δu + k²u = 0.
"""
struct Helmholtz{N,K} <: AbstractPDE{N}
    k::K
end

Helmholtz(; k, dim=3) = Helmholtz{dim,typeof(k)}(k)

function Base.show(io::IO, ::Helmholtz)
    # k = parameters(pde)
    return print(io, "Δu + k² u = 0")
end

parameters(pde::Helmholtz) = pde.k

default_kernel_eltype(::Helmholtz) = ComplexF64
default_density_eltype(::Helmholtz) = ComplexF64

function (SL::SingleLayerKernel{T,<:Helmholtz{N}})(target, source)::T where {N,T}
    x = coords(target)
    y = coords(source)
    k = parameters(SL)
    r = x - y
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        return filter * (im / 4 * hankelh1(0, k * d))
    elseif N == 3
        return filter * (1 / (4π) / d * exp(im * k * d))
    end
end
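In 3D the branch above is the outgoing fundamental solution exp(ik|x-y|)/(4π|x-y|); the 2D branch needs `hankelh1` from SpecialFunctions, which is why that dependency was added to Project.toml. A quick standalone evaluation of the 3D formula, with plain vectors instead of the package's target/source types:

```julia
using LinearAlgebra, StaticArrays

# 3D Helmholtz single-layer kernel, matching the N == 3 branch above.
helmholtz_sl(x, y, k) = exp(im * k * norm(x - y)) / (4π * norm(x - y))

k = 2.0
x = SVector(1.0, 0.0, 0.0)
y = SVector(0.0, 0.0, 0.0)

G = helmholtz_sl(x, y, k)   # exp(2im)/(4π): complex, unlike the Laplace kernel
abs(G)                      # = 1/(4π); the wavenumber only enters the phase here
```

At k = 0 the kernel reduces to the Laplace single layer 1/(4π|x-y|), a convenient consistency check between the two PDE sections.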

# Double Layer Kernel
function (DL::DoubleLayerKernel{T,<:Helmholtz{N}})(target, source)::T where {N,T}
    x, y, ny = coords(target), coords(source), normal(source)
    k = parameters(DL)
    r = x - y
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        val = im * k / 4 / d * hankelh1(1, k * d) * dot(r, ny)
        return filter * val
    elseif N == 3
        val = 1 / (4π) / d^2 * exp(im * k * d) * (-im * k + 1 / d) * dot(r, ny)
        return filter * val
    end
end

# Adjoint double Layer Kernel
function (ADL::AdjointDoubleLayerKernel{T,<:Helmholtz{N}})(target, source)::T where {N,T}
    x, y, nx = coords(target), coords(source), normal(target)
    k = parameters(ADL)
    r = x - y
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        val = -im * k / 4 / d * hankelh1(1, k * d) * dot(r, nx)
        return filter * val
    elseif N == 3
        val = -1 / (4π) / d^2 * exp(im * k * d) * (-im * k + 1 / d) * dot(r, nx)
        return filter * val
    end
end

# Hypersingular kernel
function (HS::HyperSingularKernel{T,S})(target, source)::T where {T,S<:Helmholtz}
    x, y, nx, ny = coords(target), coords(source), normal(target), normal(source)
    N = ambient_dimension(pde(HS))
    k = parameters(pde(HS))
    r = x - y
    d = norm(r)
    filter = !(d == 0)
    if N == 2
        RRT = r * transpose(r) # r ⊗ rᵗ
        # TODO: rewrite the operation below in a more clear/efficient way
        val = transpose(nx) * ((-im * k^2 / 4 / d^2 * hankelh1(2, k * d) * RRT +
                                im * k / 4 / d * hankelh1(1, k * d) * I) * ny)
        return filter * val
    elseif N == 3
        RRT = r * transpose(r) # r ⊗ rᵗ
        term1 = 1 / (4π) / d^2 * exp(im * k * d) * (-im * k + 1 / d) * I
        term2 = RRT / d * exp(im * k * d) / (4 * π * d^4) *
                (3 * (d * im * k - 1) + d^2 * k^2)
        val = transpose(nx) * (term1 + term2) * ny
        return filter * val
    end
end
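A useful end-to-end check on the Helmholtz kernels is that, away from the source point, the fundamental solution satisfies Δu + k²u = 0. The sketch below verifies this numerically for the 3D single-layer kernel with a central-difference Laplacian; the step size `h` and evaluation point are arbitrary choices:

```julia
using LinearAlgebra, StaticArrays

k = 2.0
# 3D Helmholtz fundamental solution, i.e. the single-layer kernel with the
# source fixed at the origin.
G(x) = exp(im * k * norm(x)) / (4π * norm(x))

x0 = SVector(0.3, -0.2, 0.5)
h  = 1e-4

# Second-order central-difference Laplacian, one coordinate at a time.
lap = sum(1:3) do i
    e = SVector(ntuple(j -> j == i ? h : 0.0, 3))
    (G(x0 + e) - 2G(x0) + G(x0 - e)) / h^2
end

abs(lap + k^2 * G(x0))   # ≈ 0, up to O(h²) discretization error
```

The same residual test applies to the layer kernels, since each is obtained from `G` by taking normal-derivative traces in the source and/or target variables.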