From d1bd98169cc2121f8cdd25ff99901e4589923c95 Mon Sep 17 00:00:00 2001
From: bmxitalia

class ltn.core.LTNObject(value, var_labels)
Bases: object
Class representing a generic LTN object. In LTNtorch, LTN objects are constants, variables, and outputs of predicates, formulas, functions, connectives, and quantifiers.
Parameters
- value (torch.Tensor): The grounding (value) of the LTN object.
- var_labels (list of str): The labels of the free variables contained in the LTN object.
Attributes
- value (torch.Tensor): See value parameter.
- free_vars (list of str): See var_labels parameter.
- shape (torch.Size): The shape of the grounding of the LTN object.
Notes
In LTNtorch, the groundings of the LTN objects (symbols) are represented using PyTorch tensors, namely torch.Tensor instances.
LTNObject is used by LTNtorch internally. The user should not create LTNObject instances on their own, unless strictly necessary.

class ltn.core.Constant(value, trainable=False)
Parameters
- value (torch.Tensor): The grounding of the LTN constant. It can be a tensor of any order.
- trainable (bool, default=False): Flag indicating whether the LTN constant is trainable (embedding) or not.

class ltn.core.Variable(var_label, individuals, add_batch_dim=True)
Parameters
- var_label (str): Name of the variable.
- individuals (torch.Tensor): Sequence of individuals (tensors) that becomes the grounding of the LTN variable.
- add_batch_dim (bool, default=True): Flag indicating whether a batch dimension (first dimension) has to be added to the value of the variable or not. If True, a dimension will be added only if the value attribute of the LTN variable has one single dimension. In all the other cases, the first dimension will be considered as the batch dimension, so no dimension will be added.
Raises
- TypeError: Raises when the types of the input parameters are not correct.
- ValueError: Raises when the value of the var_label parameter is not correct.

class ltn.core.Predicate(model=None, func=None)
Bases: torch.nn.modules.module.Module
Class representing an LTN predicate. An LTN predicate is grounded as a mathematical function (either pre-defined or learnable) that maps from some n-ary domain of individuals to a real number in [0,1] (fuzzy), which can be interpreted as a truth value.
@@ -345,7 +345,7 @@
Parameters
- model (torch.nn.Module, default=None): PyTorch model that becomes the grounding of the LTN predicate.
- func (function, default=None): Function that becomes the grounding of the LTN predicate.
Raises
- TypeError: Raises when the types of the input parameters are incorrect.
- ValueError: Raises when the values of the input parameters are incorrect.
For more information about LTN broadcasting, see ltn.core.diag().
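As a quick illustration of the constants and variables described above, here is a minimal sketch. It assumes the usual top-level aliases ltn.Constant and ltn.Variable re-export the classes documented here, and the tensor values are made up for the example:
>>> import ltn
>>> import torch
>>> c = ltn.Constant(torch.tensor([3.4, 5.6]))                      # fixed grounding
>>> e = ltn.Constant(torch.randn(5), trainable=True)                # learnable embedding
>>> x = ltn.Variable('x', torch.tensor([[0.1, 0.2], [0.3, 0.4]]))   # two individuals with two features each
>>> # the first dimension of a variable's grounding is the batch dimension
>>> x.value.shape
torch.Size([2, 2])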
Examples
Unary predicate defined using a torch.nn.Sequential.
>>> import ltn
>>> import torch
>>> predicate_model = torch.nn.Sequential(
@@ -401,7 +401,7 @@
Predicate(model=LambdaModel())
Binary predicate defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary predicate.
>>> class PredicateModel(torch.nn.Module):
... def __init__(self):
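For completeness, a self-contained sketch of a binary predicate in this style; the class name, layer sizes, and sigmoid output are illustrative choices, not the library's own example:
>>> import ltn
>>> import torch
>>> class BinaryPredicateModel(torch.nn.Module):
...     def __init__(self):
...         super().__init__()
...         # the two inputs are concatenated, hence 2 + 2 = 4 input features (illustrative)
...         self.layer = torch.nn.Linear(4, 1)
...     def forward(self, x, y):
...         # merge the two inputs of the binary predicate, as noted above
...         z = torch.cat([x, y], dim=1)
...         # sigmoid keeps the output in [0., 1.], as required for a predicate
...         return torch.sigmoid(self.layer(z))
>>> P = ltn.Predicate(model=BinaryPredicateModel())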
@@ -524,7 +524,7 @@
Attributes
- model (torch.nn.Module or function): The grounding of the LTN predicate.
@@ -537,7 +537,7 @@
Parameters
- inputs (tuple of ltn.core.LTNObject): Tuple of LTN objects for which the predicate has to be computed.
@@ -550,9 +550,9 @@
Raises
- TypeError: Raises when the types of the inputs are incorrect.
- ValueError: Raises when the values of the output are not in the range [0., 1.].
@@ -564,7 +564,7 @@
class ltn.core.Function(model=None, func=None)
Bases: torch.nn.modules.module.Module
Class representing LTN functions.
An LTN function is grounded as a mathematical function (either pre-defined or learnable)
that maps from some n-ary domain of individuals to a tensor (individual) in the Real field.
@@ -575,7 +575,7 @@
Parameters
- model (torch.nn.Module, default=None): PyTorch model that becomes the grounding of the LTN function.
- func (function, default=None): Function that becomes the grounding of the LTN function.
@@ -583,9 +583,9 @@
Raises
- TypeError: Raises when the types of the input parameters are incorrect.
- ValueError: Raises when the values of the input parameters are incorrect.
@@ -603,7 +603,7 @@
For more information about LTN broadcasting, see ltn.core.diag().
Examples
Unary function defined using a torch.nn.Sequential.
>>> import ltn
>>> import torch
>>> function_model = torch.nn.Sequential(
@@ -631,7 +631,7 @@
Function(model=LambdaModel())
Binary function defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary function.
>>> class FunctionModel(torch.nn.Module):
... def __init__(self):
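Likewise, a self-contained sketch of a binary function in this style (names and sizes are illustrative); unlike a predicate, a function returns a tensor (individual) rather than a truth value, so no sigmoid is applied:
>>> import ltn
>>> import torch
>>> class BinaryFunctionModel(torch.nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.layer = torch.nn.Linear(4, 3)   # illustrative sizes
...     def forward(self, x, y):
...         # merge the two inputs of the binary function, as noted above
...         z = torch.cat([x, y], dim=1)
...         return self.layer(z)
>>> f = ltn.Function(model=BinaryFunctionModel())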
@@ -766,7 +766,7 @@
Attributes
- model (torch.nn.Module or function): The grounding of the LTN function.
@@ -779,7 +779,7 @@
Parameters
- inputs (tuple of ltn.core.LTNObject): Tuple of LTN objects for which the function has to be computed.
@@ -792,7 +792,7 @@
Raises
@@ -809,21 +809,21 @@
Parameters
- vars (tuple of ltn.core.Variable): Tuple of LTN variables for which the diagonal quantification has to be set.
Returns
- list of ltn.core.Variable: List of the same LTN variables given in input, prepared for the use of diagonal quantification.
Raises
- TypeError: Raises when the types of the input parameters are incorrect.
- ValueError: Raises when the values of the input parameters are incorrect.
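A hedged usage sketch of diagonal quantification: after ltn.core.diag, a predicate applied to the two variables is evaluated only on their corresponding pairs of individuals instead of on all combinations (the predicate and tensors below are illustrative):
>>> import ltn
>>> import torch
>>> x = ltn.Variable('x', torch.randn(3, 2))    # 3 individuals
>>> y = ltn.Variable('y', torch.randn(3, 2))    # 3 corresponding individuals
>>> P = ltn.Predicate(func=lambda a, b: torch.sigmoid(torch.sum(a * b, dim=1)))
>>> x, y = ltn.core.diag(x, y)                  # set diagonal quantification on the two variables
>>> out = P(x, y)                               # evaluated on the 3 corresponding pairs only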
@@ -894,19 +894,19 @@
Parameters
- vars (tuple of ltn.core.Variable): Tuple of LTN variables for which the diagonal quantification setting has to be removed.
Returns
- list: List of the same LTN variables given in input, with the diagonal quantification setting removed.
Raises
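Continuing the sketch above, the operation documented here undoes that setting so that later formulas fall back to full LTN broadcasting; I am assuming it is exposed as ltn.core.undiag, by analogy with ltn.core.diag:
>>> x, y = ltn.core.undiag(x, y)    # back to broadcasting over all 3 x 3 combinations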
@@ -967,7 +967,7 @@
class ltn.core.Connective(connective_op)
Bases: object
Class representing an LTN connective.
An LTN connective is grounded as a fuzzy connective operator.
In LTNtorch, the inputs of a connective are automatically broadcasted before the computation of the connective,
@@ -982,7 +982,7 @@
Raises
@@ -1009,7 +1009,7 @@
Parameters
- operands (tuple of ltn.core.LTNObject): Tuple of LTN objects representing the operands to which the fuzzy connective operator has to be applied.
@@ -1023,9 +1023,9 @@
Raises
- TypeError: Raises when the types of the input parameters are incorrect.
- ValueError: Raises when the values of the input parameters are incorrect. Raises when the truth values of the operands given in input are not in the range [0., 1.].
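As a usage sketch of the class above: a Connective wraps a fuzzy operator from ltn.fuzzy_ops and is then applied to the truth values of LTN objects. The operator name AndProd (the Goguen conjunction documented in fuzzy_ops below) is an assumption on my part, and the predicates are illustrative:
>>> import ltn
>>> import torch
>>> And = ltn.Connective(ltn.fuzzy_ops.AndProd())
>>> x = ltn.Variable('x', torch.randn(4, 2))
>>> P = ltn.Predicate(func=lambda a: torch.sigmoid(torch.sum(a, dim=1)))
>>> Q = ltn.Predicate(func=lambda a: torch.sigmoid(torch.mean(a, dim=1)))
>>> out = And(P(x), Q(x))    # truth values of P(x) AND Q(x), still in [0., 1.]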
@@ -1102,7 +1102,7 @@
class ltn.core.Quantifier(agg_op, quantifier)
Bases: object
Class representing an LTN quantifier.
An LTN quantifier is grounded as a fuzzy aggregation operator. See quantification in LTN
for more information about quantification.
@@ -1111,15 +1111,15 @@
Attributes
- agg_op (ltn.fuzzy_ops.AggregationOperator): The fuzzy aggregation operator that becomes the grounding of the LTN quantifier.
- quantifier (str): String indicating the quantification that has to be performed (‘e’ for ∃, or ‘f’ for ∀).
Raises
- TypeError: Raises when the type of the agg_op parameter is incorrect.
- ValueError: Raises when the value of the quantifier parameter is incorrect.
@@ -1149,11 +1149,11 @@
Parameters
- vars (list of ltn.core.Variable): List of LTN variables on which the quantification has to be performed.
- formula (ltn.core.LTNObject): Formula on which the quantification has to be performed.
- cond_vars (list of ltn.core.Variable, default=None): List of LTN variables that appear in the guarded quantification condition.
- cond_fn (function, default=None): Function representing the guarded quantification condition.
@@ -1161,9 +1161,9 @@
Raises
- TypeError: Raises when the types of the input parameters are incorrect.
- ValueError: Raises when the values of the input parameters are incorrect. Raises when the truth values of the formula given in input are not in the range [0., 1.].
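A hedged sketch of quantification, including the guarded form via cond_vars and cond_fn described above. The top-level ltn.Quantifier alias and my reading of the cond_fn contract (one argument per variable in cond_vars, returning a boolean tensor over its individuals) are assumptions; AggregPMeanError appears in the SatAgg signature further down:
>>> import ltn
>>> import torch
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(p=2), quantifier='f')
>>> x = ltn.Variable('x', torch.randn(5, 2))
>>> P = ltn.Predicate(func=lambda a: torch.sigmoid(torch.sum(a, dim=1)))
>>> out_all = Forall(x, P(x))                              # plain universal quantification
>>> out_guarded = Forall(x, P(x),
...                      cond_vars=[x],
...                      cond_fn=lambda v: v.value[:, 0] > 0)   # quantify only where the condition holds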
@@ -1316,7 +1316,7 @@
Attributes
- agg_op (ltn.fuzzy_ops.AggregationOperator): See agg_op parameter.
- quantifier (str): See quantifier parameter.
diff --git a/docs/fuzzy_ops.html b/docs/fuzzy_ops.html
index fd24c28..bcda920 100644
--- a/docs/fuzzy_ops.html
+++ b/docs/fuzzy_ops.html
@@ -178,14 +178,14 @@
class ltn.fuzzy_ops.ConnectiveOperator
Bases: object
Abstract class for connective operators.
Every connective operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
Raises
- NotImplementedError: Raised when __call__() is not implemented in the sub-class.
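Since every connective operator must subclass ConnectiveOperator and implement __call__(), here is a minimal hedged sketch of a custom unary operator; the standard-negation formula 1 - x is used purely as an illustration:
>>> import torch
>>> import ltn
>>> class MyNot(ltn.fuzzy_ops.ConnectiveOperator):
...     def __call__(self, x):
...         # fuzzy negation: a truth value x in [0., 1.] maps to 1. - x
...         return 1. - x
>>> neg = MyNot()(torch.tensor([0.0, 0.3, 1.0]))   # negated truth values: 1.0, 0.7, 0.0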
@@ -202,7 +202,7 @@
Raises
- NotImplementedError: Raised when __call__() is not implemented in the sub-class.
@@ -219,7 +219,7 @@
Raises
- NotImplementedError: Raised when __call__() is not implemented in the sub-class.
@@ -255,13 +255,13 @@
Parameters
- x (torch.Tensor): Operand on which the operator has to be applied.
Returns
- torch.Tensor: The standard fuzzy negation of the given operand.
@@ -299,13 +299,13 @@
Parameters
- x (torch.Tensor): Operand on which the operator has to be applied.
Returns
- torch.Tensor: The Godel fuzzy negation of the given operand.
@@ -354,15 +354,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Godel fuzzy conjunction of the two operands.
@@ -380,13 +380,13 @@
Parameters
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy conjunction could have vanishing gradients if not used in its stable version.
Examples
Note that:
@@ -421,17 +421,17 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: The Goguen fuzzy conjunction of the two operands.
@@ -441,7 +441,7 @@
Attributes
@@ -488,15 +488,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Lukasiewicz fuzzy conjunction of the two operands.
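For orientation, the three conjunctions documented above are the standard fuzzy t-norms; a plain PyTorch sketch of the textbook formulas (this is not the library's code, just the definitions the class names refer to):
>>> import torch
>>> x = torch.tensor([0.2, 0.9])
>>> y = torch.tensor([0.5, 0.8])
>>> torch.minimum(x, y)                 # Godel conjunction: min(x, y)                -> 0.2, 0.8
>>> x * y                               # Goguen conjunction: x * y                   -> 0.1, 0.72
>>> torch.clamp(x + y - 1., min=0.)     # Lukasiewicz conjunction: max(0, x + y - 1)  -> 0.0, 0.7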
@@ -545,15 +545,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Godel fuzzy disjunction of the two operands.
@@ -571,13 +571,13 @@
Parameters
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy disjunction could have vanishing gradients if not used in its stable version.
Examples
Note that:
@@ -612,17 +612,17 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: The Goguen fuzzy disjunction of the two operands.
@@ -632,7 +632,7 @@
Attributes
@@ -679,15 +679,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Lukasiewicz fuzzy disjunction of the two operands.
@@ -736,15 +736,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Kleene Dienes fuzzy implication of the two operands.
@@ -793,15 +793,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The Godel fuzzy implication of the two operands.
@@ -819,13 +819,13 @@
Parameters
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
Notes
The Reichenbach fuzzy implication could have vanishing gradients if not used in its stable version.
Examples
Note that:
@@ -860,17 +860,17 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: The Reichenbach fuzzy implication of the two operands.
@@ -880,7 +880,7 @@
Attributes
@@ -896,13 +896,13 @@
Parameters
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy implication could have vanishing gradients if not used in its stable version.
Examples
Note that:
@@ -937,17 +937,17 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: The Goguen fuzzy implication of the two operands.
@@ -957,7 +957,7 @@
Attributes
@@ -1024,15 +1024,15 @@
Parameters
- x (torch.Tensor): First operand on which the operator has to be applied.
- y (torch.Tensor): Second operand on which the operator has to be applied.
Returns
- torch.Tensor: The fuzzy equivalence of the two operands.
@@ -1054,14 +1054,14 @@
class ltn.fuzzy_ops.AggregationOperator
Bases: object
Abstract class for aggregation operators.
Every aggregation operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
Raises
- NotImplementedError: Raised when __call__() is not implemented in the sub-class.
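Analogously, a custom aggregator subclasses AggregationOperator and implements __call__(); a minimal hedged sketch using a plain mean (the built-in aggregators additionally handle the boolean mask used for guarded quantification):
>>> import torch
>>> import ltn
>>> class MyMean(ltn.fuzzy_ops.AggregationOperator):
...     def __call__(self, xs, dim=None, keepdim=False, mask=None):
...         # this sketch ignores the mask and simply averages the truth values
...         if dim is None:
...             return torch.mean(xs)
...         return torch.mean(xs, dim=dim, keepdim=keepdim)
>>> truth = torch.tensor([[0.1, 0.9], [0.4, 0.6]])
>>> agg = MyMean()(truth, dim=(0, 1))    # aggregates over both dimensions -> 0.5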
@@ -1098,27 +1098,27 @@
Parameters
- xs (torch.Tensor): Grounding of formula on which the aggregation has to be performed.
- dim (tuple of int, default=None): Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim (bool, default=False): Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask (torch.Tensor, default=None): Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
Returns
- torch.Tensor: Min fuzzy aggregation of the formula.
Raises
- ValueError: Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.
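To make the mask semantics above concrete (False means exclusion, True means inclusion), a plain PyTorch sketch of excluding masked-out truth values from a min aggregation; this mirrors the documented behaviour but is not the library's implementation:
>>> import torch
>>> xs = torch.tensor([0.2, 0.9, 0.5])
>>> mask = torch.tensor([True, False, True])   # exclude the second truth value
>>> torch.min(xs[mask])                        # min over the included values only -> 0.2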
@@ -1159,27 +1159,27 @@
Parameters
- xs (torch.Tensor): Grounding of formula on which the aggregation has to be performed.
- dim (tuple of int, default=None): Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim (bool, default=False): Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask (torch.Tensor, default=None): Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
Returns
- torch.Tensor: Mean fuzzy aggregation of the formula.
Raises
- ValueError: Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.
@@ -1199,9 +1199,9 @@
Parameters
- p (int, default=2): Value of hyper-parameter p of the pMean fuzzy aggregation operator.
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
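For reference, pMean is the generalized (power) mean, which to my understanding is what the p hyper-parameter controls here; larger p pushes the aggregation toward the maximum, which suits the existential quantifier:
\(\operatorname{pMean}_p(x_1, \dots, x_n) = \left(\frac{1}{n}\sum_{i=1}^{n} x_i^p\right)^{1/p}\)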
@@ -1235,31 +1235,31 @@
Parameters
- xs (torch.Tensor): Grounding of formula on which the aggregation has to be performed.
- dim (tuple of int, default=None): Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim (bool, default=False): Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask (torch.Tensor, default=None): Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- p (int, default=None): Value of hyper-parameter p of the pMean fuzzy aggregation operator.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: pMean fuzzy aggregation of the formula.
Raises
- ValueError: Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.
@@ -1271,9 +1271,9 @@
Attributes
@@ -1289,9 +1289,9 @@
Parameters
- p (int, default=2): Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
- stable (bool, default=True): Flag indicating whether to use the stable version of the operator or not.
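Correspondingly, pMeanError is (by the usual LTN definition, which appears to be what is meant here) the dual of pMean computed on the errors 1 - x_i; larger p weighs the largest errors more heavily, which suits the universal quantifier:
\(\operatorname{pMeanError}_p(x_1, \dots, x_n) = 1 - \left(\frac{1}{n}\sum_{i=1}^{n} (1 - x_i)^p\right)^{1/p}\)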
@@ -1324,31 +1324,31 @@
Parameters
- xs (torch.Tensor): Grounding of formula on which the aggregation has to be performed.
- dim (tuple of int, default=None): Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim (bool, default=False): Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask (torch.Tensor, default=None): Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- p (int, default=None): Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
- stable (bool, default=None): Flag indicating whether to use the stable version of the operator or not.
Returns
- torch.Tensor: pMeanError fuzzy aggregation of the formula.
Raises
- ValueError: Raises when the grounding of the formula (‘xs’) and the mask do not have the same shape. Raises when the ‘mask’ is not boolean.
@@ -1360,9 +1360,9 @@
Attributes
@@ -1372,7 +1372,7 @@
class ltn.fuzzy_ops.SatAgg(agg_op=AggregPMeanError(p=2, stable=True))
Bases: object
SatAgg aggregation operator.
\(\operatorname{SatAgg}_{\phi \in \mathcal{K}} \mathcal{G}_{\theta} (\phi)\)
It aggregates the truth values of the closed formulas given in input, namely the formulas
@@ -1387,7 +1387,7 @@
Raises
@@ -1400,11 +1400,11 @@
Examples
SatAgg can be used to aggregate the truth values of formulas contained in a knowledge base. Note that:
- SatAgg takes as input a tuple of ltn.core.LTNObject and/or torch.Tensor;
- when some torch.Tensor are given to SatAgg, they have to be scalars in [0., 1.] since SatAgg is designed to work with closed formulas;
- in this example, our knowledge base is composed of closed formulas f1, f2, and f3;
- SatAgg applies the pMeanError aggregation operator to the truth values of these formulas. The result is a new truth value which can be interpreted as a satisfaction level of the entire knowledge base;
- the result of SatAgg is a torch.Tensor since it has been designed for learning in PyTorch. The idea is to put the result of the operator directly inside the loss function of the LTN. See this tutorial for a detailed example.
>>> import ltn
>>> import torch
@@ -1435,11 +1435,11 @@
(ltn.core.LTNObject) have been given to the SatAgg operator.
In this example, we show that SatAgg can take as input also torch.Tensor containing the result of some closed formulas, namely scalars in [0., 1.]. Note that:
- f2 is just a torch.Tensor;
- since f2 contains a scalar in [0., 1.], its value can be interpreted as a truth value of a closed formula. For this reason, it is possible to give f2 to the SatAgg operator to get the aggregation of f1 (ltn.core.LTNObject) and f2 (torch.Tensor).
>>> x = ltn.Variable('x', torch.tensor([[0.1, 0.03],
... [2.3, 4.3]]))
@@ -1466,21 +1466,21 @@
Parameters
- closed_formulas (tuple of ltn.core.LTNObject and/or torch.Tensor): Tuple of closed formulas (LTNObject and/or tensors) for which the aggregation has to be computed.
Returns
- torch.Tensor: The result of the SatAgg aggregation.
Raises
- TypeError: Raises when the type of the input parameter is not correct.
- ValueError: Raises when the truth values of the formulas/tensors given in input are not in the range [0., 1.]. Raises when the truth values of the formulas/tensors given in input are not scalars, namely some formulas are not closed formulas.
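As suggested above, the result of SatAgg goes directly into the loss; a hedged sketch of a single training step (the model, variables, and optimizer are illustrative, and the top-level aliases are assumed to re-export the documented classes):
>>> import ltn
>>> import torch
>>> model = torch.nn.Sequential(torch.nn.Linear(2, 1), torch.nn.Sigmoid())
>>> P = ltn.Predicate(model=model)
>>> x = ltn.Variable('x', torch.randn(10, 2))
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(p=2), quantifier='f')
>>> sat_agg = ltn.fuzzy_ops.SatAgg()
>>> optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
>>> f1 = Forall(x, P(x))            # a closed formula of the knowledge base
>>> loss = 1. - sat_agg(f1)         # maximize satisfaction by minimizing the loss
>>> loss.backward()
>>> optimizer.step()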
diff --git a/setup.py b/setup.py
index d96d752..f640c2f 100644
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
setup(
name='LTNtorch',
- version='1.0.1',
+ version='1.0.2',
packages=find_packages(include=['ltn']),
install_requires=[
"numpy",