From d1bd98169cc2121f8cdd25ff99901e4589923c95 Mon Sep 17 00:00:00 2001
From: bmxitalia
Date: Wed, 2 Oct 2024 14:35:25 +0200
Subject: [PATCH] version 1.0.2 is now released

---
 docs/core.html      | 110 +++++++++++------------
 docs/fuzzy_ops.html | 214 ++++++++++++++++++++++----------------------
 setup.py            |   2 +-
 3 files changed, 163 insertions(+), 163 deletions(-)

diff --git a/docs/core.html b/docs/core.html
index b9b1e0b..b188ec2 100644
--- a/docs/core.html
+++ b/docs/core.html
@@ -137,37 +137,37 @@

Members
class ltn.core.LTNObject(value, var_labels)
Bases: object

Class representing a generic LTN object.

In LTNtorch, LTN objects are constants, variables, and outputs of predicates, formulas, functions, connectives, and quantifiers.

Parameters
value : torch.Tensor
    The grounding (value) of the LTN object.
var_labels : list of str
    The labels of the free variables contained in the LTN object.

Raises
TypeError
    Raises when the types of the input parameters are incorrect.

Notes

  • in LTNtorch, the groundings of the LTN objects (symbols) are represented using PyTorch tensors, namely torch.Tensor instances;
  • LTNObject is used by LTNtorch internally. Users should not create LTNObject instances on their own, unless strictly necessary.

Attributes
value : torch.Tensor
    See value parameter.
free_vars : list of str
    See var_labels parameter.

@@ -179,7 +179,7 @@

Members
Returns
torch.Size
    The shape of the grounding of the LTN object.

@@ -198,9 +198,9 @@

Members
Parameters
value : torch.Tensor
    The grounding of the LTN constant. It can be a tensor of any order.
trainable : bool, default=False
    Flag indicating whether the LTN constant is trainable (embedding) or not.

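A minimal usage sketch (the ltn.Constant alias is assumed here; it is not shown in this patch):

>>> import ltn
>>> import torch
>>> c = ltn.Constant(torch.tensor([3.4, 5.4]))             # fixed (non-trainable) grounding
>>> c_emb = ltn.Constant(torch.rand(5), trainable=True)    # learnable embedding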
@@ -254,11 +254,11 @@

Members
Parameters
var_label : str
    Name of the variable.
individuals : torch.Tensor
    Sequence of individuals (tensors) that becomes the grounding of the LTN variable.
add_batch_dim : bool, default=True
    Flag indicating whether a batch dimension (first dimension) has to be added to the value of the variable or not. If True, a dimension will be added only if the value attribute of the LTN variable has a single dimension. In all the other cases, the first dimension will be considered as the batch dimension, so no dimension will be added.

@@ -267,9 +267,9 @@

Raises
TypeError
    Raises when the types of the input parameters are not correct.
ValueError
    Raises when the value of the var_label parameter is not correct.

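A minimal usage sketch, mirroring the constructor call that appears in the SatAgg examples later in this patch:

>>> import ltn
>>> import torch
>>> x = ltn.Variable('x', torch.tensor([[0.1, 0.03],
...                                     [2.3, 4.3]]))   # two individuals with two features each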
@@ -334,7 +334,7 @@

Members
class ltn.core.Predicate(model=None, func=None)
Bases: torch.nn.modules.module.Module

Class representing an LTN predicate.

An LTN predicate is grounded as a mathematical function (either pre-defined or learnable) that maps from some n-ary domain of individuals to a real number in [0,1] (fuzzy), which can be interpreted as a truth value.
@@ -345,7 +345,7 @@

Members
Parameters
model : torch.nn.Module, default=None
    PyTorch model that becomes the grounding of the LTN predicate.
func : function, default=None
    Function that becomes the grounding of the LTN predicate.

@@ -353,9 +353,9 @@

Raises
TypeError
    Raises when the types of the input parameters are incorrect.
ValueError
    Raises when the values of the input parameters are incorrect.

@@ -373,7 +373,7 @@

LTN broadcasting, see ltn.core.diag().

Examples

Unary predicate defined using a torch.nn.Sequential.

>>> import ltn
 >>> import torch
 >>> predicate_model = torch.nn.Sequential(
@@ -401,7 +401,7 @@ 

Predicate(model=LambdaModel())

Binary predicate defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary predicate.

>>> class PredicateModel(torch.nn.Module):
 ...     def __init__(self):
@@ -524,7 +524,7 @@ 

Members
Attributes
model : torch.nn.Module or function
    The grounding of the LTN predicate.

@@ -537,7 +537,7 @@

Members
Parameters
inputs : tuple of ltn.core.LTNObject
    Tuple of LTN objects for which the predicate has to be computed.

@@ -550,9 +550,9 @@

Raises
TypeError
    Raises when the types of the inputs are incorrect.
ValueError
    Raises when the values of the output are not in the range [0., 1.].

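The examples above are truncated at the hunk boundaries; as a self-contained sketch (the ltn.Predicate alias and the func= form documented above are assumed), a unary predicate can also be defined from a plain function whose outputs lie in [0., 1.]:

>>> import ltn
>>> import torch
>>> p = ltn.Predicate(func=lambda x: torch.sigmoid(torch.sum(x, dim=1)))  # outputs in [0., 1.]
>>> x = ltn.Variable('x', torch.randn(4, 2))   # 4 individuals with 2 features each
>>> out = p(x)   # LTNObject holding one truth value per individual of x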
@@ -564,7 +564,7 @@

Members
class ltn.core.Function(model=None, func=None)
Bases: torch.nn.modules.module.Module

Class representing LTN functions.

An LTN function is grounded as a mathematical function (either pre-defined or learnable) that maps from some n-ary domain of individuals to a tensor (individual) in the Real field.

@@ -575,7 +575,7 @@

Members
Parameters
model : torch.nn.Module, default=None
    PyTorch model that becomes the grounding of the LTN function.
func : function, default=None
    Function that becomes the grounding of the LTN function.

@@ -583,9 +583,9 @@

Raises
TypeError
    Raises when the types of the input parameters are incorrect.
ValueError
    Raises when the values of the input parameters are incorrect.

@@ -603,7 +603,7 @@

LTN broadcasting, see ltn.core.diag().

Examples

Unary function defined using a torch.nn.Sequential.

>>> import ltn
 >>> import torch
 >>> function_model = torch.nn.Sequential(
@@ -631,7 +631,7 @@ 

Function(model=LambdaModel())

Binary function defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary function.

>>> class FunctionModel(torch.nn.Module):
 ...     def __init__(self):
@@ -766,7 +766,7 @@ 

Members
Attributes
model : torch.nn.Module or function
    The grounding of the LTN function.

@@ -779,7 +779,7 @@

Members
Parameters
inputs : tuple of ltn.core.LTNObject
    Tuple of LTN objects for which the function has to be computed.

@@ -792,7 +792,7 @@

Raises
TypeError
    Raises when the types of the inputs are incorrect.

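As with predicates, a self-contained sketch using the func= form (the ltn.Function alias is assumed; the torch.cat call mirrors the binary example above):

>>> import ltn
>>> import torch
>>> f = ltn.Function(func=lambda x, y: torch.cat([x, y], dim=1))  # binary function
>>> x = ltn.Variable('x', torch.randn(3, 2))
>>> y = ltn.Variable('y', torch.randn(4, 2))
>>> out = f(x, y)   # after LTN broadcasting, one output individual per (x, y) pair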
@@ -809,21 +809,21 @@

Members
Parameters
vars : tuple of ltn.core.Variable
    Tuple of LTN variables for which the diagonal quantification has to be set.

Returns
list of ltn.core.Variable
    List of the same LTN variables given in input, prepared for the use of diagonal quantification.

Raises
TypeError
    Raises when the types of the input parameters are incorrect.
ValueError
    Raises when the values of the input parameters are incorrect.

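A minimal sketch of diagonal quantification (the ltn.diag alias is assumed; the variables must have the same number of individuals):

>>> import ltn
>>> import torch
>>> x = ltn.Variable('x', torch.randn(3, 2))
>>> y = ltn.Variable('y', torch.randn(3, 2))   # same number of individuals as x
>>> x, y = ltn.diag(x, y)   # quantifiers will now range over the pairs (x_i, y_i) only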
@@ -894,19 +894,19 @@

Members
Parameters
vars : tuple of ltn.core.Variable
    Tuple of LTN variables for which the diagonal quantification setting has to be removed.

Returns
list
    List of the same LTN variables given in input, with the diagonal quantification setting removed.

Raises
TypeError
    Raises when the types of the input parameters are incorrect.

@@ -967,7 +967,7 @@

Members
class ltn.core.Connective(connective_op)
Bases: object

Class representing an LTN connective.

An LTN connective is grounded as a fuzzy connective operator.

In LTNtorch, the inputs of a connective are automatically broadcasted before the computation of the connective,
@@ -982,7 +982,7 @@

Raises
TypeError
    Raises when the type of the input parameter is incorrect.

@@ -1009,7 +1009,7 @@

Members
Parameters
operands : tuple of ltn.core.LTNObject
    Tuple of LTN objects representing the operands to which the fuzzy connective operator has to be applied.

@@ -1023,9 +1023,9 @@

Raises
TypeError
    Raises when the types of the input parameters are incorrect.
ValueError
    Raises when the values of the input parameters are incorrect. Raises when the truth values of the operands given in input are not in the range [0., 1.].

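A typical way to build connectives; the fuzzy_ops class names used here (NotStandard, AndProd) are assumptions based on the operators documented in fuzzy_ops.html and are not shown explicitly in this patch:

>>> import ltn
>>> Not = ltn.Connective(ltn.fuzzy_ops.NotStandard())   # standard fuzzy negation
>>> And = ltn.Connective(ltn.fuzzy_ops.AndProd())       # Goguen (product) conjunction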
@@ -1102,7 +1102,7 @@

Members
class ltn.core.Quantifier(agg_op, quantifier)
Bases: object

Class representing an LTN quantifier.

An LTN quantifier is grounded as a fuzzy aggregation operator. See quantification in LTN for more information about quantification.

@@ -1111,15 +1111,15 @@

Members
agg_op : ltn.fuzzy_ops.AggregationOperator
    The fuzzy aggregation operator that becomes the grounding of the LTN quantifier.
quantifier : str
    String indicating the quantification that has to be performed (‘e’ for ∃, or ‘f’ for ∀).

Raises
TypeError
    Raises when the type of the agg_op parameter is incorrect.
ValueError
    Raises when the value of the quantifier parameter is incorrect.

@@ -1149,11 +1149,11 @@

Members
Parameters
vars : list of ltn.core.Variable
    List of LTN variables on which the quantification has to be performed.
formula : ltn.core.LTNObject
    Formula on which the quantification has to be performed.
cond_vars : list of ltn.core.Variable, default=None
    List of LTN variables that appear in the guarded quantification condition.
cond_fn : function, default=None
    Function representing the guarded quantification condition.

@@ -1161,9 +1161,9 @@

Raises
TypeError
    Raises when the types of the input parameters are incorrect.
ValueError
    Raises when the values of the input parameters are incorrect. Raises when the truth values of the formula given in input are not in the range [0., 1.].

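A usage sketch; AggregPMeanError appears in the SatAgg signature below, while AggregPMean is assumed to be the pMean aggregator documented in fuzzy_ops.html:

>>> import ltn
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(p=2), quantifier='f')   # ∀
>>> Exists = ltn.Quantifier(ltn.fuzzy_ops.AggregPMean(p=2), quantifier='e')        # ∃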
@@ -1316,7 +1316,7 @@

Members
agg_op : ltn.fuzzy_ops.AggregationOperator
    See agg_op parameter.
quantifier : str
    See quantifier parameter.

diff --git a/docs/fuzzy_ops.html b/docs/fuzzy_ops.html
index fd24c28..bcda920 100644
--- a/docs/fuzzy_ops.html
+++ b/docs/fuzzy_ops.html
@@ -178,14 +178,14 @@

Members
class ltn.fuzzy_ops.ConnectiveOperator
Bases: object

Abstract class for connective operators.

Every connective operator implemented in LTNtorch must inherit from this class and implement the __call__() method.

Raises
NotImplementedError
    Raised when __call__() is not implemented in the sub-class.

@@ -202,7 +202,7 @@

Members
Raises
NotImplementedError
    Raised when __call__() is not implemented in the sub-class.

@@ -219,7 +219,7 @@

Members
Raises
NotImplementedError
    Raised when __call__() is not implemented in the sub-class.

@@ -255,13 +255,13 @@

Members
Parameters
x : torch.Tensor
    Operand on which the operator has to be applied.

Returns
torch.Tensor
    The standard fuzzy negation of the given operand.

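The standard fuzzy negation is simply 1 − x; a minimal sketch (the class name NotStandard is an assumption):

>>> import ltn
>>> import torch
>>> not_op = ltn.fuzzy_ops.NotStandard()
>>> not_op(torch.tensor([0.1, 0.9]))   # 1 - x, element-wise
tensor([0.9000, 0.1000])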
@@ -299,13 +299,13 @@

Members
Parameters
x : torch.Tensor
    Operand on which the operator has to be applied.

Returns
torch.Tensor
    The Godel fuzzy negation of the given operand.

@@ -354,15 +354,15 @@

Members
Parameters
x : torch.Tensor
    First operand on which the operator has to be applied.
y : torch.Tensor
    Second operand on which the operator has to be applied.

Returns
torch.Tensor
    The Godel fuzzy conjunction of the two operands.

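The Godel conjunction is the minimum t-norm, min(x, y); a small sketch (the class name AndMin is an assumption):

>>> import ltn
>>> import torch
>>> and_min = ltn.fuzzy_ops.AndMin()
>>> and_min(torch.tensor([0.5, 0.9]), torch.tensor([0.8, 0.2]))   # element-wise minimum
tensor([0.5000, 0.2000])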
@@ -380,13 +380,13 @@

Members
Parameters
stable : bool, default=True
    Flag indicating whether to use the stable version of the operator or not.

Notes

The Goguen fuzzy conjunction could have vanishing gradients if not used in its stable version.

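This patch does not show how the stable version is implemented; a common remedy (a hypothetical sketch, not taken from this page) is to project the operands slightly away from 0 before taking the product, so that the gradient of x·y cannot vanish at the extremes:

>>> import torch
>>> def stable_prod(x, y, eps=1e-4):
...     # hypothetical sketch: move both operands into [eps, 1] before multiplying
...     x = (1 - eps) * x + eps
...     y = (1 - eps) * y + eps
...     return x * y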
Examples

Note that:

    @@ -421,17 +421,17 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        The Goguen fuzzy conjunction of the two operands.

    @@ -441,7 +441,7 @@

    Members
    Attributes
    stable : bool
        See stable parameter.

    @@ -488,15 +488,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The Lukasiewicz fuzzy conjunction of the two operands.

    @@ -545,15 +545,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The Godel fuzzy disjunction of the two operands.

    @@ -571,13 +571,13 @@

    Members
    Parameters
    stable : bool, default=True
        Flag indicating whether to use the stable version of the operator or not.

Notes

The Goguen fuzzy disjunction could have vanishing gradients if not used in its stable version.

Examples

Note that:

    @@ -612,17 +612,17 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        The Goguen fuzzy disjunction of the two operands.

    @@ -632,7 +632,7 @@

    Members
    Attributes
    stable : bool
        See stable parameter.

    @@ -679,15 +679,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The Lukasiewicz fuzzy disjunction of the two operands.

    @@ -736,15 +736,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The Kleene Dienes fuzzy implication of the two operands.

    @@ -793,15 +793,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The Godel fuzzy implication of the two operands.

    @@ -819,13 +819,13 @@

    Members
    Parameters
    stable : bool, default=True
        Flag indicating whether to use the stable version of the operator or not.

Notes

The Reichenbach fuzzy implication could have vanishing gradients if not used in its stable version.

Examples

Note that:

    @@ -860,17 +860,17 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        The Reichenbach fuzzy implication of the two operands.

    @@ -880,7 +880,7 @@

    Members
    Attributes
    stable : bool
        See stable parameter.

    @@ -896,13 +896,13 @@

    Members
    Parameters
    stable : bool, default=True
        Flag indicating whether to use the stable version of the operator or not.

Notes

The Goguen fuzzy implication could have vanishing gradients if not used in its stable version.

Examples

Note that:

    @@ -937,17 +937,17 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        The Goguen fuzzy implication of the two operands.

    @@ -957,7 +957,7 @@

    Members
    Attributes
    stable : bool
        See stable parameter.

    @@ -1024,15 +1024,15 @@

    Members
    Parameters
    x : torch.Tensor
        First operand on which the operator has to be applied.
    y : torch.Tensor
        Second operand on which the operator has to be applied.

    Returns
    torch.Tensor
        The fuzzy equivalence of the two operands.

    @@ -1054,14 +1054,14 @@

    Members
    class ltn.fuzzy_ops.AggregationOperator
    Bases: object

    Abstract class for aggregation operators.

    Every aggregation operator implemented in LTNtorch must inherit from this class and implement the __call__() method.

    Raises
    NotImplementedError
        Raised when __call__() is not implemented in the sub-class.

    @@ -1098,27 +1098,27 @@

    Members
    Parameters
    xs : torch.Tensor
        Grounding of the formula on which the aggregation has to be performed.
    dim : tuple of int, default=None
        Tuple containing the indexes of dimensions on which the aggregation has to be performed.
    keepdim : bool, default=False
        Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
    mask : torch.Tensor, default=None
        Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.

    Returns
    torch.Tensor
        Min fuzzy aggregation of the formula.

    Raises
    ValueError
        Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.

    @@ -1159,27 +1159,27 @@

    Members
    Parameters
    xs : torch.Tensor
        Grounding of the formula on which the aggregation has to be performed.
    dim : tuple of int, default=None
        Tuple containing the indexes of dimensions on which the aggregation has to be performed.
    keepdim : bool, default=False
        Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
    mask : torch.Tensor, default=None
        Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.

    Returns
    torch.Tensor
        Mean fuzzy aggregation of the formula.

    Raises
    ValueError
        Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.

    @@ -1199,9 +1199,9 @@

    Members
    Parameters
    p : int, default=2
        Value of hyper-parameter p of the pMean fuzzy aggregation operator.
    stable : bool, default=True
        Flag indicating whether to use the stable version of the operator or not.

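    For reference, pMean is the generalized mean of the truth values (the standard LTN definition, stated here for convenience); larger values of p push the aggregation towards the maximum, which is why pMean is typically used to ground ∃:

    \(\operatorname{pM}(x_1, \dots, x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^{p}\right)^{\frac{1}{p}}\)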
    @@ -1235,31 +1235,31 @@

    Members
    Parameters
    xs : torch.Tensor
        Grounding of the formula on which the aggregation has to be performed.
    dim : tuple of int, default=None
        Tuple containing the indexes of dimensions on which the aggregation has to be performed.
    keepdim : bool, default=False
        Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
    mask : torch.Tensor, default=None
        Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
    p : int, default=None
        Value of hyper-parameter p of the pMean fuzzy aggregation operator.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        pMean fuzzy aggregation of the formula.

    Raises
    ValueError
        Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.

    @@ -1271,9 +1271,9 @@

    Members
    Attributes
    p : int
        See p parameter.
    stable : bool
        See stable parameter.

    @@ -1289,9 +1289,9 @@

    Members
    Parameters
    p : int, default=2
        Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
    stable : bool, default=True
        Flag indicating whether to use the stable version of the operator or not.

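    For reference, pMeanError is the generalized mean of the errors 1 − x_i (the standard LTN definition, stated here for convenience); larger values of p weight the lowest truth values more heavily, which is why pMeanError is the usual grounding for ∀ and the default aggregator of SatAgg:

    \(\operatorname{pME}(x_1, \dots, x_n) = 1 - \left(\frac{1}{n}\sum_{i=1}^{n} (1 - x_i)^{p}\right)^{\frac{1}{p}}\)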
    @@ -1324,31 +1324,31 @@

    Members
    Parameters
    xs : torch.Tensor
        Grounding of the formula on which the aggregation has to be performed.
    dim : tuple of int, default=None
        Tuple containing the indexes of dimensions on which the aggregation has to be performed.
    keepdim : bool, default=False
        Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
    mask : torch.Tensor, default=None
        Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
    p : int, default=None
        Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
    stable : bool, default=None
        Flag indicating whether to use the stable version of the operator or not.

    Returns
    torch.Tensor
        pMeanError fuzzy aggregation of the formula.

    Raises
    ValueError
        Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.

    @@ -1360,9 +1360,9 @@

    Members
    Attributes
    p : int
        See p parameter.
    stable : bool
        See stable parameter.

    @@ -1372,7 +1372,7 @@

    Members
    class ltn.fuzzy_ops.SatAgg(agg_op=AggregPMeanError(p=2, stable=True))
    Bases: object

    SatAgg aggregation operator.

    \(\operatorname{SatAgg}_{\phi \in \mathcal{K}} \mathcal{G}_{\theta} (\phi)\)

    It aggregates the truth values of the closed formulas given in input, namely the formulas
    @@ -1387,7 +1387,7 @@

    Raises
    TypeError
        Raises when the type of the input parameter is not correct.

    @@ -1400,11 +1400,11 @@

    Examples

    SatAgg can be used to aggregate the truth values of formulas contained in a knowledge base. Note that:

    • SatAgg takes as input a tuple of ltn.core.LTNObject and/or torch.Tensor;
    • when some torch.Tensor are given to SatAgg, they have to be scalars in [0., 1.] since SatAgg is designed to work with closed formulas;
    • in this example, our knowledge base is composed of closed formulas f1, f2, and f3;
    • SatAgg applies the pMeanError aggregation operator to the truth values of these formulas. The result is a new truth value which can be interpreted as a satisfaction level of the entire knowledge base;
    • the result of SatAgg is a torch.Tensor since it has been designed for learning in PyTorch. The idea is to put the result of the operator directly inside the loss function of the LTN. See this tutorial for a detailed example.

    >>> import ltn
     >>> import torch
    @@ -1435,11 +1435,11 @@ 

    …ltn.core.LTNObject) have been given to the SatAgg operator.
    In this example, we show that SatAgg can take as input also torch.Tensor containing the result of some closed formulas, namely scalars in [0., 1.]. Note that:

    • f2 is just a torch.Tensor;
    • since f2 contains a scalar in [0., 1.], its value can be interpreted as a truth value of a closed formula. For this reason, it is possible to give f2 to the SatAgg operator to get the aggregation of f1 (ltn.core.LTNObject) and f2 (torch.Tensor).

    >>> x = ltn.Variable('x', torch.tensor([[0.1, 0.03],
     ...                                     [2.3, 4.3]]))
    @@ -1466,21 +1466,21 @@ 

    Members
    Parameters
    closed_formulas : tuple of ltn.core.LTNObject and/or torch.Tensor
        Tuple of closed formulas (LTNObject and/or tensors) for which the aggregation has to be computed.

    Returns
    torch.Tensor
        The result of the SatAgg aggregation.

    Raises
    TypeError
        Raises when the type of the input parameter is not correct.
    ValueError
        Raises when the truth values of the formulas/tensors given in input are not in the range [0., 1.]. Raises when the truth values of the formulas/tensors given in input are not scalars, namely some formulas are not closed formulas.

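    Since SatAgg returns a scalar torch.Tensor, it can be plugged directly into a loss function; a hedged sketch of the usual training step (the names sat_agg, predicate_model, f1, f2, f3 are illustrative and echo the earlier examples on this page):

    >>> sat_agg = ltn.fuzzy_ops.SatAgg()
    >>> optimizer = torch.optim.Adam(predicate_model.parameters(), lr=0.001)
    >>> # one training step: maximize the satisfaction of the whole knowledge base
    >>> optimizer.zero_grad()
    >>> loss = 1. - sat_agg(f1, f2, f3)   # f1, f2, f3: truth values of closed formulas
    >>> loss.backward()
    >>> optimizer.step()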
diff --git a/setup.py b/setup.py
index d96d752..f640c2f 100644
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
 setup(
     name='LTNtorch',
-    version='1.0.1',
+    version='1.0.2',
     packages=find_packages(include=['ltn']),
     install_requires=[
         "numpy",