Scanned the following path:
diff --git a/docs/snyk/master/argocd-iac-namespace-install.html b/docs/snyk/master/argocd-iac-namespace-install.html
index 020a13bf79f07..a78881186e589 100644
--- a/docs/snyk/master/argocd-iac-namespace-install.html
+++ b/docs/snyk/master/argocd-iac-namespace-install.html
@@ -456,7 +456,7 @@
Snyk test report
-
September 15th 2024, 12:21:06 am (UTC+00:00)
+
September 22nd 2024, 12:21:16 am (UTC+00:00)
Scanned the following path:
diff --git a/docs/snyk/master/argocd-test.html b/docs/snyk/master/argocd-test.html
index 33e1e7ab43d33..0c91d6f1cb159 100644
--- a/docs/snyk/master/argocd-test.html
+++ b/docs/snyk/master/argocd-test.html
@@ -7,7 +7,7 @@
Snyk test report
-
+
@@ -456,7 +456,7 @@
Snyk test report
-
September 15th 2024, 12:18:53 am (UTC+00:00)
+
September 22nd 2024, 12:18:54 am (UTC+00:00)
Scanned the following paths:
@@ -467,8 +467,8 @@
Snyk test report
-
3 known vulnerabilities
-
5 vulnerable dependency paths
+
2 known vulnerabilities
+
4 vulnerable dependency paths
2132 dependencies
@@ -561,7 +561,7 @@
Detailed paths
Overview
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when including multiple regular expression parameters in a single segment, which will produce the regular expression /^\/([^\/]+?)-([^\/]+?)\/?$/, if two parameters within a single segment are separated by a character other than a / or .. Poor performance will block the event loop and can lead to a DoS.
Note:
- Version 0.1.10 is patched to mitigate this but is also vulnerable if custom regular expressions are used. Due to the existence of this attack vector, the Snyk security team have decided to err on the side of caution in considering the very widely-used v0 branch vulnerable, while the 8.0.0 release has completely eliminated the vulnerable functionality.
+ While the 8.0.0 release has completely eliminated the vulnerable functionality, prior versions that have received the patch to mitigate backtracking may still be vulnerable if custom regular expressions are used. So it is strongly recommended for regular expression input to be controlled to avoid malicious performance degradation in those versions. This behavior is enforced as of version 7.1.0 via the strict option, which returns an error if a dangerous regular expression is detected.
Workaround
This vulnerability can be avoided by using a custom regular expression for parameters after the first in a segment, which excludes - and /.
PoC
@@ -629,11 +629,13 @@
Details
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
Remediation
-
Upgrade path-to-regexp to version 8.0.0 or higher.
+
Upgrade path-to-regexp to version 0.1.10, 1.9.0, 3.3.0, 6.3.0, 8.0.0 or higher.
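The workaround described above (a custom parameter pattern that excludes - and /) can be sketched in TypeScript as follows. This is a minimal illustration that assumes the v6-style pathToRegexp named export; the route and parameter names are invented for the example rather than taken from the report.

    import { pathToRegexp } from "path-to-regexp";

    // Two parameters in one segment with the default pattern: the shape the
    // advisory describes as vulnerable to backtracking.
    const vulnerable = pathToRegexp("/:from-:to");

    // Workaround: constrain the second parameter with a custom pattern that
    // excludes both "-" and "/", removing the ambiguity the engine backtracks over.
    const hardened = pathToRegexp("/:from-:to([^-/]+)");

    console.log(vulnerable.test("/madrid-barcelona")); // true
    console.log(hardened.test("/madrid-barcelona"));   // still true, without the ReDoS exposure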
dompurify is a DOM-only XSS sanitizer for HTML, MathML and SVG.
-
Affected versions of this package are vulnerable to Template Injection in purify.js, due to inconsistencies in the parsing of XML and HTML tags. Executable code can be injected in HTML inside XML CDATA blocks.
-
PoC
-
<![CDATA[ ><img src onerror=alert(1)> ]]>
-
-
Remediation
-
Upgrade dompurify to version 2.4.9, 3.0.11 or higher.
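As a quick self-check against the PoC above, the following is a minimal Node sketch using the server-side dompurify + jsdom setup documented by the library. Note that only the bare payload string is used here; the XML/HTML parsing context that the original issue depends on is not reproduced.

    import { JSDOM } from "jsdom";
    import createDOMPurify from "dompurify";

    const { window } = new JSDOM("");
    const DOMPurify = createDOMPurify(window);

    // PoC payload quoted above; on fixed versions (>= 2.4.9 / 3.0.11) the CDATA
    // content must not come back as executable HTML.
    const payload = "<![CDATA[ ><img src onerror=alert(1)> ]]>";
    const clean = DOMPurify.sanitize(payload);

    console.log(clean.includes("onerror") ? "still dangerous" : "sanitized");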
Note: Versions mentioned in the description apply only to the upstream curl package and not the curl package as distributed by Ubuntu.
- See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
-
When curl is told to use the Certificate Status Request TLS extension, often referred to as OCSP stapling, to verify that the server certificate is valid, it might fail to detect some OCSP problems and instead wrongly consider the response as fine. If the returned status reports an error other than 'revoked' (for example, 'unauthorized'), it is not treated as a bad certificate.
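For context on the TLS extension being described, here is an illustrative Node sketch (not curl code; the host name is a placeholder): a client asks the server to staple an OCSP response into the handshake via the Certificate Status Request extension, and the lesson of this bug is that anything other than a definite "good" status must be treated as a verification failure, not only an explicit "revoked".

    import tls from "node:tls";

    // Ask the server to staple an OCSP response into the TLS handshake.
    const socket = tls.connect(
      { host: "example.com", port: 443, servername: "example.com", requestOCSP: true },
      () => socket.end()
    );

    socket.on("OCSPResponse", (response: Buffer) => {
      // The stapled response arrives as DER; a careful client fails verification
      // for any status that is not a definite "good".
      console.log("stapled OCSP response:", response.length, "bytes");
    });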
dompurify is a DOM-only XSS sanitizer for HTML, MathML and SVG.
+
Affected versions of this package are vulnerable to Prototype Pollution due to improper user input sanitization through the depth-checking mechanism. An attacker can exploit this vulnerability by using special nesting techniques to create a malicious HTML file.
+
Details
+
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
+
There are two main ways in which the pollution of prototypes occurs:
+
+
Unsafe Object recursive merge
+
+
Property definition by path
+
+
+
Unsafe Object recursive merge
+
The logic of a vulnerable recursive merge function follows this high-level model:
+
+ merge(target, source)
+   foreach property of source
+     if property exists and is an object on both the target and the source
+       merge(target[property], source[property])
+     else
+       target[property] = source[property]
+
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks if the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the object defined by the attacker. Properties are then copied onto the Object prototype.
+
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
+
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
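As a concrete illustration of the recursive-merge scenario described above, here is a small self-contained TypeScript sketch. naiveMerge is a deliberately unsafe stand-in written for this example; it is not code from dompurify, lodash, Hoek, or any other library named in this report.

    // Deliberately naive deep merge, mirroring the high-level model above.
    function naiveMerge(target: any, source: any): any {
      for (const key of Object.keys(source)) {
        const bothObjects =
          typeof target[key] === "object" && target[key] !== null &&
          typeof source[key] === "object" && source[key] !== null;
        if (bothObjects) {
          naiveMerge(target[key], source[key]);
        } else {
          target[key] = source[key];
        }
      }
      return target;
    }

    // JSON.parse produces an own "__proto__" key, so the merge recurses into
    // Object.prototype and copies the attacker's property onto it.
    const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
    naiveMerge({}, payload);

    console.log(({} as any).isAdmin); // true once the prototype is polluted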
+
Property definition by path
+
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
+
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
+
Types of attacks
+
There are a few methods by which Prototype Pollution can be manipulated:
+
+
+
+
Type
+
Origin
+
Short description
+
+
+
+
Denial of service (DoS)
+
Client
+
This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail.
+
+
+
Remote Code Execution
+
Client
+
Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code.
+
+
+
Property Injection
+
Client
+
The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges.
+
+
+
Affected environments
+
The following environments are susceptible to a Prototype Pollution attack:
+
+
Application server
+
+
Web server
+
+
Web browser
+
+
+
How to prevent
+
+
Freeze the prototype: use Object.freeze(Object.prototype).
+
+
Require schema validation of JSON input.
+
+
Avoid using unsafe recursive merge functions.
+
+
Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when including multiple regular expression parameters in a single segment, which will produce the regular expression /^\/([^\/]+?)-([^\/]+?)\/?$/, if two parameters within a single segment are separated by a character other than a / or .. Poor performance will block the event loop and can lead to a DoS.
Note:
- Version 0.1.10 is patched to mitigate this but is also vulnerable if custom regular expressions are used. Due to the existence of this attack vector, the Snyk security team have decided to err on the side of caution in considering the very widely-used v0 branch vulnerable, while the 8.0.0 release has completely eliminated the vulnerable functionality.
+ While the 8.0.0 release has completely eliminated the vulnerable functionality, prior versions that have received the patch to mitigate backtracking may still be vulnerable if custom regular expressions are used. So it is strongly recommended for regular expression input to be controlled to avoid malicious performance degradation in those versions. This behavior is enforced as of version 7.1.0 via the strict option, which returns an error if a dangerous regular expression is detected.
Workaround
This vulnerability can be avoided by using a custom regular expression for parameters after the first in a segment, which excludes - and /.
PoC
@@ -3292,11 +3446,13 @@
Details
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
Remediation
-
Upgrade path-to-regexp to version 8.0.0 or higher.
+
Upgrade path-to-regexp to version 0.1.10, 1.9.0, 3.3.0, 6.3.0, 8.0.0 or higher.
Note: Versions mentioned in the description apply only to the upstream gcc-12 package and not the gcc-12 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
+
DISPUTED: A failure in the -fstack-protector feature in GCC-based toolchains
+ that target AArch64 allows an attacker to exploit an existing buffer
+ overflow in dynamically-sized local variables in your application
+ without this being detected. This stack-protector failure only applies
+ to C99-style dynamically-sized local variables or those created using
+ alloca(). The stack-protector operates as intended for statically-sized
+ local variables.
+
The default behavior when the stack-protector
+ detects an overflow is to terminate your application, resulting in
+ controlled loss of availability. An attacker who can exploit a buffer
+ overflow without triggering the stack-protector might be able to change
+ program flow control to cause an uncontrolled loss of availability or to
+ go further and affect confidentiality or integrity. NOTE: The GCC project argues that this is a missed hardening bug and not a vulnerability by itself.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
An issue was discovered in libexpat before 2.6.3. dtdCopy in xmlparse.c can have an integer overflow for nDefaultAtts on 32-bit platforms (where UINT_MAX equals SIZE_MAX).
Remediation
-
There is no fixed version for Ubuntu:22.04 expat.
+
Upgrade Ubuntu:22.04 expat to version 2.4.7-1ubuntu0.4 or higher.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
An issue was discovered in libexpat before 2.6.3. nextScaffoldPart in xmlparse.c can have an integer overflow for m_groupSize on 32-bit platforms (where UINT_MAX equals SIZE_MAX).
Remediation
-
There is no fixed version for Ubuntu:22.04 expat.
+
Upgrade Ubuntu:22.04 expat to version 2.4.7-1ubuntu0.4 or higher.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
When curl is told to use the Certificate Status Request TLS extension, often referred to as OCSP stapling, to verify that the server certificate is valid, it might fail to detect some OCSP problems and instead wrongly consider the response as fine. If the returned status reports an error other than 'revoked' (for example, 'unauthorized'), it is not treated as a bad certificate.
Remediation
-
There is no fixed version for Ubuntu:22.04 curl.
+
Upgrade Ubuntu:22.04 curl to version 7.81.0-1ubuntu1.18 or higher.
dompurify is a DOM-only XSS sanitizer for HTML, MathML and SVG.
+
Affected versions of this package are vulnerable to Prototype Pollution due to improper user input sanitization through the depth-checking mechanism. An attacker can exploit this vulnerability by using special nesting techniques to create a malicious HTML file.
+
Details
+
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
+
There are two main ways in which the pollution of prototypes occurs:
+
+
Unsafe Object recursive merge
+
+
Property definition by path
+
+
+
Unsafe Object recursive merge
+
The logic of a vulnerable recursive merge function follows this high-level model:
+
+ merge(target, source)
+   foreach property of source
+     if property exists and is an object on both the target and the source
+       merge(target[property], source[property])
+     else
+       target[property] = source[property]
+
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks if the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the object defined by the attacker. Properties are then copied onto the Object prototype.
+
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
+
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
+
Property definition by path
+
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
+
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
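To make the path-based variant concrete, here is a short TypeScript sketch. setByPath is a hypothetical helper with the theFunction(object, path, value) shape described above; it is not the API of any particular library.

    // Hypothetical setter of the shape theFunction(object, path, value).
    function setByPath(obj: any, path: string, value: unknown): void {
      const parts = path.split(".");
      let cursor = obj;
      for (const part of parts.slice(0, -1)) {
        if (typeof cursor[part] !== "object" || cursor[part] === null) {
          cursor[part] = {};
        }
        cursor = cursor[part];
      }
      cursor[parts[parts.length - 1]] = value;
    }

    // An attacker-controlled path walks up to Object.prototype and defines a
    // property that every plain object then inherits.
    setByPath({}, "__proto__.myValue", "polluted");
    console.log(({} as any).myValue); // "polluted" on a vulnerable setter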
+
Types of attacks
+
There are a few methods by which Prototype Pollution can be manipulated:
+
+
+
+
Type
+
Origin
+
Short description
+
+
+
+
Denial of service (DoS)
+
Client
+
This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail.
+
+
+
Remote Code Execution
+
Client
+
Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code.
+
+
+
Property Injection
+
Client
+
The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges.
+
+
+
Affected environments
+
The following environments are susceptible to a Prototype Pollution attack:
+
+
Application server
+
+
Web server
+
+
Web browser
+
+
+
How to prevent
+
+
Freeze the prototype: use Object.freeze(Object.prototype).
+
+
Require schema validation of JSON input.
+
+
Avoid using unsafe recursive merge functions.
+
+
Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when including multiple regular expression parameters in a single segment, which will produce the regular expression /^\/([^\/]+?)-([^\/]+?)\/?$/, if two parameters within a single segment are separated by a character other than a / or .. Poor performance will block the event loop and can lead to a DoS.
Note:
- Version 0.1.10 is patched to mitigate this but is also vulnerable if custom regular expressions are used. Due to the existence of this attack vector, the Snyk security team have decided to err on the side of caution in considering the very widely-used v0 branch vulnerable, while the 8.0.0 release has completely eliminated the vulnerable functionality.
+ While the 8.0.0 release has completely eliminated the vulnerable functionality, prior versions that have received the patch to mitigate backtracking may still be vulnerable if custom regular expressions are used. So it is strongly recommended for regular expression input to be controlled to avoid malicious performance degradation in those versions. This behavior is enforced as of version 7.1.0 via the strict option, which returns an error if a dangerous regular expression is detected.
Workaround
This vulnerability can be avoided by using a custom regular expression for parameters after the first in a segment, which excludes - and /.
PoC
@@ -3292,11 +3446,13 @@
Details
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
Remediation
-
Upgrade path-to-regexp to version 8.0.0 or higher.
+
Upgrade path-to-regexp to version 0.1.10, 1.9.0, 3.3.0, 6.3.0, 8.0.0 or higher.
Note: Versions mentioned in the description apply only to the upstream gcc-12 package and not the gcc-12 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
+
DISPUTED: A failure in the -fstack-protector feature in GCC-based toolchains
+ that target AArch64 allows an attacker to exploit an existing buffer
+ overflow in dynamically-sized local variables in your application
+ without this being detected. This stack-protector failure only applies
+ to C99-style dynamically-sized local variables or those created using
+ alloca(). The stack-protector operates as intended for statically-sized
+ local variables.
+
The default behavior when the stack-protector
+ detects an overflow is to terminate your application, resulting in
+ controlled loss of availability. An attacker who can exploit a buffer
+ overflow without triggering the stack-protector might be able to change
+ program flow control to cause an uncontrolled loss of availability or to
+ go further and affect confidentiality or integrity. NOTE: The GCC project argues that this is a missed hardening bug and not a vulnerability by itself.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
An issue was discovered in libexpat before 2.6.3. dtdCopy in xmlparse.c can have an integer overflow for nDefaultAtts on 32-bit platforms (where UINT_MAX equals SIZE_MAX).
Remediation
-
There is no fixed version for Ubuntu:22.04 expat.
+
Upgrade Ubuntu:22.04 expat to version 2.4.7-1ubuntu0.4 or higher.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
An issue was discovered in libexpat before 2.6.3. nextScaffoldPart in xmlparse.c can have an integer overflow for m_groupSize on 32-bit platforms (where UINT_MAX equals SIZE_MAX).
Remediation
-
There is no fixed version for Ubuntu:22.04 expat.
+
Upgrade Ubuntu:22.04 expat to version 2.4.7-1ubuntu0.4 or higher.
See How to fix? for Ubuntu:22.04 relevant fixed versions and status.
When curl is told to use the Certificate Status Request TLS extension, often referred to as OCSP stapling, to verify that the server certificate is valid, it might fail to detect some OCSP problems and instead wrongly consider the response as fine. If the returned status reports an error other than 'revoked' (for example, 'unauthorized'), it is not treated as a bad certificate.
Remediation
-
There is no fixed version for Ubuntu:22.04 curl.
+
Upgrade Ubuntu:22.04 curl to version 7.81.0-1ubuntu1.18 or higher.
dompurify is a DOM-only XSS sanitizer for HTML, MathML and SVG.
+
Affected versions of this package are vulnerable to Prototype Pollution due to improper user input sanitization through the depth-checking mechanism. An attacker can exploit this vulnerability by using special nesting techniques to create a malicious HTML file.
+
Details
+
Prototype Pollution is a vulnerability affecting JavaScript. Prototype Pollution refers to the ability to inject properties into existing JavaScript language construct prototypes, such as objects. JavaScript allows all Object attributes to be altered, including their magical attributes such as __proto__, constructor and prototype. An attacker manipulates these attributes to overwrite, or pollute, a JavaScript application object prototype of the base object by injecting other values. Properties on the Object.prototype are then inherited by all the JavaScript objects through the prototype chain. When that happens, this leads to either denial of service by triggering JavaScript exceptions, or it tampers with the application source code to force the code path that the attacker injects, thereby leading to remote code execution.
+
There are two main ways in which the pollution of prototypes occurs:
+
+
Unsafe Object recursive merge
+
+
Property definition by path
+
+
+
Unsafe Object recursive merge
+
The logic of a vulnerable recursive merge function follows this high-level model:
+
+ merge(target, source)
+   foreach property of source
+     if property exists and is an object on both the target and the source
+       merge(target[property], source[property])
+     else
+       target[property] = source[property]
+
When the source object contains a property named __proto__ defined with Object.defineProperty(), the condition that checks if the property exists and is an object on both the target and the source passes, and the merge recurses with the target being the prototype of Object and the source being the object defined by the attacker. Properties are then copied onto the Object prototype.
+
Clone operations are a special sub-class of unsafe recursive merges, which occur when a recursive merge is conducted on an empty object: merge({},source).
+
lodash and Hoek are examples of libraries susceptible to recursive merge attacks.
+
Property definition by path
+
There are a few JavaScript libraries that use an API to define property values on an object based on a given path. The function that is generally affected contains this signature: theFunction(object, path, value)
+
If the attacker can control the value of “path”, they can set this value to __proto__.myValue. myValue is then assigned to the prototype of the class of the object.
+
Types of attacks
+
There are a few methods by which Prototype Pollution can be manipulated:
+
+
+
+
Type
+
Origin
+
Short description
+
+
+
+
Denial of service (DoS)
+
Client
+
This is the most likely attack. DoS occurs when Object holds generic functions that are implicitly called for various operations (for example, toString and valueOf). The attacker pollutes Object.prototype.someattr and alters its state to an unexpected value such as Int or Object. In this case, the code fails and is likely to cause a denial of service. For example: if an attacker pollutes Object.prototype.toString by defining it as an integer, if the codebase at any point was reliant on someobject.toString() it would fail.
+
+
+
Remote Code Execution
+
Client
+
Remote code execution is generally only possible in cases where the codebase evaluates a specific attribute of an object, and then executes that evaluation. For example: eval(someobject.someattr). In this case, if the attacker pollutes Object.prototype.someattr they are likely to be able to leverage this in order to execute code.
+
+
+
Property Injection
+
Client
+
The attacker pollutes properties that the codebase relies on for their informative value, including security properties such as cookies or tokens. For example: if a codebase checks privileges for someuser.isAdmin, then when the attacker pollutes Object.prototype.isAdmin and sets it to equal true, they can then achieve admin privileges.
+
+
+
Affected environments
+
The following environments are susceptible to a Prototype Pollution attack:
+
+
Application server
+
+
Web server
+
+
Web browser
+
+
+
How to prevent
+
+
Freeze the prototype: use Object.freeze(Object.prototype).
+
+
Require schema validation of JSON input.
+
+
Avoid using unsafe recursive merge functions.
+
+
Consider using objects without prototypes (for example, Object.create(null)), breaking the prototype chain and preventing pollution.
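The prevention techniques listed above can be demonstrated in a few lines of TypeScript; this is a sketch of the general hardening ideas, not a fix for the dompurify issue itself.

    // 1. Freeze the prototype so pollution attempts are rejected.
    Object.freeze(Object.prototype);
    try {
      (Object.prototype as any).injected = true; // throws in strict mode once frozen
    } catch {
      console.log("pollution attempt rejected");
    }
    console.log(({} as any).injected); // undefined

    // 2. Keep attacker-influenced key/value data in prototype-less objects.
    const bag: Record<string, unknown> = Object.create(null);
    bag["__proto__"] = "just an ordinary key";   // stored as a plain own property
    console.log(Object.getPrototypeOf(bag));     // null, so there is no chain to pollute

Freezing Object.prototype is a blunt instrument that can break libraries which legitimately extend it, so schema validation of untrusted JSON is usually the first line of defense.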
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when including multiple regular expression parameters in a single segment, which will produce the regular expression /^\/([^\/]+?)-([^\/]+?)\/?$/, if two parameters within a single segment are separated by a character other than a / or .. Poor performance will block the event loop and can lead to a DoS.
Note:
- Version 0.1.10 is patched to mitigate this but is also vulnerable if custom regular expressions are used. Due to the existence of this attack vector, the Snyk security team have decided to err on the side of caution in considering the very widely-used v0 branch vulnerable, while the 8.0.0 release has completely eliminated the vulnerable functionality.
+ While the 8.0.0 release has completely eliminated the vulnerable functionality, prior versions that have received the patch to mitigate backtracking may still be vulnerable if custom regular expressions are used. So it is strongly recommended for regular expression input to be controlled to avoid malicious performance degradation in those versions. This behavior is enforced as of version 7.1.0 via the strict option, which returns an error if a dangerous regular expression is detected.
Workaround
This vulnerability can be avoided by using a custom regular expression for parameters after the first in a segment, which excludes - and /.
PoC
@@ -629,11 +783,13 @@
Details
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
Remediation
-
Upgrade path-to-regexp to version 8.0.0 or higher.
+
Upgrade path-to-regexp to version 0.1.10, 1.9.0, 3.3.0, 6.3.0, 8.0.0 or higher.
Scanned the following paths:
diff --git a/docs/snyk/v2.12.3/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html b/docs/snyk/v2.12.3/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html
index c072d3a72b314..bed01faa336f7 100644
--- a/docs/snyk/v2.12.3/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html
+++ b/docs/snyk/v2.12.3/public.ecr.aws_docker_library_haproxy_2.6.17-alpine.html
@@ -456,7 +456,7 @@
Snyk test report
-
September 15th 2024, 12:21:38 am (UTC+00:00)
+
September 22nd 2024, 12:24:09 am (UTC+00:00)
Scanned the following path:
diff --git a/docs/snyk/v2.12.3/public.ecr.aws_docker_library_redis_7.0.15-alpine.html b/docs/snyk/v2.12.3/public.ecr.aws_docker_library_redis_7.0.15-alpine.html
index da43180e99e7c..dbc79e2e50588 100644
--- a/docs/snyk/v2.12.3/public.ecr.aws_docker_library_redis_7.0.15-alpine.html
+++ b/docs/snyk/v2.12.3/public.ecr.aws_docker_library_redis_7.0.15-alpine.html
@@ -456,7 +456,7 @@
Snyk test report
-
September 15th 2024, 12:21:42 am (UTC+00:00)
+
September 22nd 2024, 12:24:12 am (UTC+00:00)
Scanned the following paths:
diff --git a/docs/snyk/v2.12.3/quay.io_argoproj_argocd_v2.12.3.html b/docs/snyk/v2.12.3/quay.io_argoproj_argocd_v2.12.3.html
index ec4329b79d670..0bd0879c74d78 100644
--- a/docs/snyk/v2.12.3/quay.io_argoproj_argocd_v2.12.3.html
+++ b/docs/snyk/v2.12.3/quay.io_argoproj_argocd_v2.12.3.html
@@ -456,7 +456,7 @@
Snyk test report
-
September 15th 2024, 12:21:58 am (UTC+00:00)
+
September 22nd 2024, 12:24:27 am (UTC+00:00)
Scanned the following paths:
@@ -1654,7 +1654,7 @@
NVD Description
See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
When curl is told to use the Certificate Status Request TLS extension, often referred to as OCSP stapling, to verify that the server certificate is valid, it might fail to detect some OCSP problems and instead wrongly consider the response as fine. If the returned status reports an error other than 'revoked' (for example, 'unauthorized'), it is not treated as a bad certificate.
Remediation
-
There is no fixed version for Ubuntu:24.04 curl.
+
Upgrade Ubuntu:24.04 curl to version 8.5.0-2ubuntu10.4 or higher.
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
CPU limits can prevent containers from consuming valuable compute time for no benefit (e.g. inefficient code) that might lead to unnecessary costs. It is advisable to also configure CPU requests to ensure application stability.
+
+
Remediation
+
Add `resources.limits.cpu` field with required CPU limit value
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
UID of the container processes could clash with host's UIDs and lead to unintentional authorization bypass
+
+
Remediation
+
Set the `securityContext.runAsUser` value to 10,000 or higher. SecurityContext can be set at both the `pod` and `container` level. If both are set, the container level takes precedence.
Affected versions of this package are vulnerable to Regular Expression Denial of Service (ReDoS) when including multiple regular expression parameters in a single segment, which will produce the regular expression /^\/([^\/]+?)-([^\/]+?)\/?$/, if two parameters within a single segment are separated by a character other than a / or .. Poor performance will block the event loop and can lead to a DoS.
+
Note:
+ While the 8.0.0 release has completely eliminated the vulnerable functionality, prior versions that have received the patch to mitigate backtracking may still be vulnerable if custom regular expressions are used. So it is strongly recommended for regular expression input to be controlled to avoid malicious performance degradation in those versions. This behavior is enforced as of version 7.1.0 via the strict option, which returns an error if a dangerous regular expression is detected.
+
Workaround
+
This vulnerability can be avoided by using a custom regular expression for parameters after the first in a segment, which excludes - and /.
+
PoC
+
/a${'-a'.repeat(8_000)}/a
+
+
Details
+
Denial of Service (DoS) describes a family of attacks, all aimed at making a system inaccessible to its original and legitimate users. There are many types of DoS attacks, ranging from trying to clog the network pipes to the system by generating a large volume of traffic from many machines (a Distributed Denial of Service - DDoS - attack) to sending crafted requests that cause a system to crash or take a disproportional amount of time to process.
+
The Regular expression Denial of Service (ReDoS) is a type of Denial of Service attack. Regular expressions are incredibly powerful, but they aren't very intuitive and can ultimately end up making it easy for attackers to take your site down.
+
Let’s take the following regular expression as an example:
+
regex = /A(B|C+)+D/
+
+
This regular expression accomplishes the following:
+
+
A The string must start with the letter 'A'
+
(B|C+)+ The string must then follow the letter A with either the letter 'B' or some number of occurrences of the letter 'C' (the + matches one or more times). The + at the end of this section states that we can look for one or more matches of this section.
+
D Finally, we ensure this section of the string ends with a 'D'
+
+
The expression would match inputs such as ABBD, ABCCCCD, ABCBCCCD and ACCCCCD
+
In most cases, it doesn't take very long for a regex engine to find a match:
+
$ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCD")'
+ 0.04s user 0.01s system 95% cpu 0.052 total
+
+ $ time node -e '/A(B|C+)+D/.test("ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX")'
+ 1.79s user 0.02s system 99% cpu 1.812 total
+
+
The entire process of testing it against a 30 characters long string takes around ~52ms. But when given an invalid string, it takes nearly two seconds to complete the test, over ten times as long as it took to test a valid string. The dramatic difference is due to the way regular expressions get evaluated.
+
Most Regex engines will work very similarly (with minor differences). The engine will match the first possible way to accept the current character and proceed to the next one. If it then fails to match the next one, it will backtrack and see if there was another way to digest the previous character. If it goes too far down the rabbit hole only to find out the string doesn’t match in the end, and if many characters have multiple valid regex paths, the number of backtracking steps can become very large, resulting in what is known as catastrophic backtracking.
+
Let's look at how our expression runs into this problem, using a shorter string: "ACCCX". While it seems fairly straightforward, there are still four different ways that the engine could match those three C's:
+
+
CCC
+
CC+C
+
C+CC
+
C+C+C.
+
+
The engine has to try each of those combinations to see if any of them potentially match against the expression. When you combine that with the other steps the engine must take, we can use RegEx 101 debugger to see the engine has to take a total of 38 steps before it can determine the string doesn't match.
+
From there, the number of steps the engine must use to validate a string just continues to grow.
+
+
+
+
String
+
Number of C's
+
Number of steps
+
+
+
+
ACCCX
+
3
+
38
+
+
+
ACCCCX
+
4
+
71
+
+
+
ACCCCCX
+
5
+
136
+
+
+
ACCCCCCCCCCCCCCX
+
14
+
65,553
+
+
+
By the time the string includes 14 C's, the engine has to take over 65,000 steps just to see if the string is valid. These extreme situations can cause them to work very slowly (exponentially related to input size, as shown above), allowing an attacker to exploit this and can cause the service to excessively consume CPU, resulting in a Denial of Service.
+
Remediation
+
Upgrade path-to-regexp to version 0.1.10, 1.9.0, 3.3.0, 6.3.0, 8.0.0 or higher.
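To see the growth described above in the context of this specific advisory, the PoC path can be replayed directly against the quoted expression. This is a rough sketch using process.hrtime.bigint for timing; absolute numbers will vary by engine and machine, and the point is the super-linear growth rather than any particular value.

    // The expression the advisory quotes for two parameters in one segment.
    const routeRegex = /^\/([^\/]+?)-([^\/]+?)\/?$/;

    for (const repeats of [1_000, 2_000, 4_000, 8_000]) {
      const maliciousPath = `/a${"-a".repeat(repeats)}/a`; // PoC input from above
      const start = process.hrtime.bigint();
      routeRegex.test(maliciousPath);
      const ms = Number(process.hrtime.bigint() - start) / 1e6;
      console.log(`${repeats} repeats: ${ms.toFixed(0)} ms`); // grows super-linearly
    }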
github.com/Azure/azure-sdk-for-go/sdk/azidentity is a module that provides Microsoft Entra ID (formerly Azure Active Directory) token authentication support across the Azure SDK. It includes a set of TokenCredential implementations, which can be used with Azure SDK clients supporting token authentication.
+
Affected versions of this package are vulnerable to Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition') in the authentication process. An attacker can elevate privileges by exploiting race conditions during the token validation steps. This is only exploitable if the application is configured to use multiple threads or processes for handling authentication requests.
+
Notes:
+
+
An attacker who successfully exploited the vulnerability could elevate privileges and read any file on the file system with SYSTEM access permissions;
+
+
An attacker who successfully exploits this vulnerability can only obtain read access to the system files. The attacker cannot perform write or delete operations on the files;
+
+
The vulnerability exists in the following credential types: DefaultAzureCredential and ManagedIdentityCredential;
+
+
The vulnerability exists in the following credential types:
+
+
+
ManagedIdentityApplication (.NET)
+
ManagedIdentityApplication (Java)
+
ManagedIdentityApplication (Node.js)
+
Remediation
+
Upgrade github.com/Azure/azure-sdk-for-go/sdk/azidentity to version 1.6.0 or higher.
Affected versions of this package are vulnerable to Insertion of Sensitive Information into Log File in the form of gRPC metadata. If the metadata contains sensitive information, an attacker can expose it.
+
Remediation
+
Upgrade google.golang.org/grpc/metadata to version 1.64.1 or higher.
Note: Versions mentioned in the description apply only to the upstream openssl package and not the openssl package as distributed by Alpine.
+ See How to fix? for Alpine:3.20 relevant fixed versions and status.
+
Issue summary: Applications performing certificate name checks (e.g., TLS
+ clients checking server certificates) may attempt to read an invalid memory
+ address resulting in abnormal termination of the application process.
+
Impact summary: Abnormal termination of an application can cause a denial of
+ service.
+
Applications performing certificate name checks (e.g., TLS clients checking
+ server certificates) may attempt to read an invalid memory address when
+ comparing the expected name with an otherName subject alternative name of an
+ X.509 certificate. This may result in an exception that terminates the
+ application program.
+
Note that basic certificate chain validation (signatures, dates, ...) is not
+ affected, the denial of service can occur only when the application also
+ specifies an expected DNS name, Email address or IP address.
+
TLS servers rarely solicit client certificates, and even when they do, they
+ generally don't perform a name check against a reference identifier (expected
+ identity), but rather extract the presented identity after checking the
+ certificate chain. So TLS servers are generally not affected and the severity
+ of the issue is Moderate.
+
The FIPS modules in 3.3, 3.2, 3.1 and 3.0 are not affected by this issue.
+
Remediation
+
Upgrade Alpine:3.20 openssl to version 3.3.2-r0 or higher.
Note: Versions mentioned in the description apply only to the upstream busybox package and not the busybox package as distributed by Alpine.
+ See How to fix? for Alpine:3.20 relevant fixed versions and status.
+
A use-after-free vulnerability in BusyBox v.1.36.1 allows attackers to cause a denial of service via a crafted awk pattern in the awk.c evaluate function.
+
Remediation
+
Upgrade Alpine:3.20 busybox to version 1.36.1-r29 or higher.
Note: Versions mentioned in the description apply only to the upstream busybox package and not the busybox package as distributed by Alpine.
+ See How to fix? for Alpine:3.20 relevant fixed versions and status.
+
A use-after-free vulnerability was discovered in BusyBox v.1.36.1 via a crafted awk pattern in the awk.c copyvar function.
+
Remediation
+
Upgrade Alpine:3.20 busybox to version 1.36.1-r29 or higher.
Note: Versions mentioned in the description apply only to the upstream openssl package and not the openssl package as distributed by Alpine.
+ See How to fix? for Alpine:3.20 relevant fixed versions and status.
+
Issue summary: Calling the OpenSSL API function SSL_select_next_proto with an
+ empty supported client protocols buffer may cause a crash or memory contents to
+ be sent to the peer.
+
Impact summary: A buffer overread can have a range of potential consequences
+ such as unexpected application behaviour or a crash. In particular this issue
+ could result in up to 255 bytes of arbitrary private data from memory being sent
+ to the peer leading to a loss of confidentiality. However, only applications
+ that directly call the SSL_select_next_proto function with a 0 length list of
+ supported client protocols are affected by this issue. This would normally never
+ be a valid scenario and is typically not under attacker control but may occur by
+ accident in the case of a configuration or programming error in the calling
+ application.
+
The OpenSSL API function SSL_select_next_proto is typically used by TLS
+ applications that support ALPN (Application Layer Protocol Negotiation) or NPN
+ (Next Protocol Negotiation). NPN is older, was never standardised and
+ is deprecated in favour of ALPN. We believe that ALPN is significantly more
+ widely deployed than NPN. The SSL_select_next_proto function accepts a list of
+ protocols from the server and a list of protocols from the client and returns
+ the first protocol that appears in the server list that also appears in the
+ client list. In the case of no overlap between the two lists it returns the
+ first item in the client list. In either case it will signal whether an overlap
+ between the two lists was found. In the case where SSL_select_next_proto is
+ called with a zero length client list it fails to notice this condition and
+ returns the memory immediately following the client list pointer (and reports
+ that there was no overlap in the lists).
+
This function is typically called from a server side application callback for
+ ALPN or a client side application callback for NPN. In the case of ALPN the list
+ of protocols supplied by the client is guaranteed by libssl to never be zero in
+ length. The list of server protocols comes from the application and should never
+ normally be expected to be of zero length. In this case if the
+ SSL_select_next_proto function has been called as expected (with the list
+ supplied by the client passed in the client/client_len parameters), then the
+ application will not be vulnerable to this issue. If the application has
+ accidentally been configured with a zero length server list, and has
+ accidentally passed that zero length server list in the client/client_len
+ parameters, and has additionally failed to correctly handle a "no overlap"
+ response (which would normally result in a handshake failure in ALPN) then it
+ will be vulnerable to this problem.
+
In the case of NPN, the protocol permits the client to opportunistically select
+ a protocol when there is no overlap. OpenSSL returns the first client protocol
+ in the no overlap case in support of this. The list of client protocols comes
+ from the application and should never normally be expected to be of zero length.
+ However if the SSL_select_next_proto function is accidentally called with a
+ client_len of 0 then an invalid memory pointer will be returned instead. If the
+ application uses this output as the opportunistic protocol then the loss of
+ confidentiality will occur.
+
This issue has been assessed as Low severity because applications are most
+ likely to be vulnerable if they are using NPN instead of ALPN - but NPN is not
+ widely used. It also requires an application configuration or programming error.
+ Finally, this issue would not typically be under attacker control making active
+ exploitation unlikely.
+
The FIPS modules in 3.3, 3.2, 3.1 and 3.0 are not affected by this issue.
+
Due to the low severity of this issue we are not issuing new releases of
+ OpenSSL at this time. The fix will be included in the next releases when they
+ become available.
+
Remediation
+
Upgrade Alpine:3.20 openssl to version 3.3.1-r1 or higher.
Note: Versions mentioned in the description apply only to the upstream openssl package and not the openssl package as distributed by Alpine.
+ See How to fix? for Alpine:3.20 relevant fixed versions and status.
+
Issue summary: Applications performing certificate name checks (e.g., TLS
+ clients checking server certificates) may attempt to read an invalid memory
+ address resulting in abnormal termination of the application process.
+
Impact summary: Abnormal termination of an application can cause a denial of
+ service.
+
Applications performing certificate name checks (e.g., TLS clients checking
+ server certificates) may attempt to read an invalid memory address when
+ comparing the expected name with an otherName subject alternative name of an
+ X.509 certificate. This may result in an exception that terminates the
+ application program.
+
Note that basic certificate chain validation (signatures, dates, ...) is not
+ affected, the denial of service can occur only when the application also
+ specifies an expected DNS name, Email address or IP address.
+
TLS servers rarely solicit client certificates, and even when they do, they
+ generally don't perform a name check against a reference identifier (expected
+ identity), but rather extract the presented identity after checking the
+ certificate chain. So TLS servers are generally not affected and the severity
+ of the issue is Moderate.
+
The FIPS modules in 3.3, 3.2, 3.1 and 3.0 are not affected by this issue.
+
Remediation
+
Upgrade Alpine:3.20 openssl to version 3.3.2-r0 or higher.
Note: Versions mentioned in the description apply only to the upstream openssl package and not the openssl package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
Validating the order of the public keys in the Diffie-Hellman Key Agreement Protocol, when an approved safe prime is used, allows remote attackers (from the client side) to trigger unnecessarily expensive server-side DHE modular-exponentiation calculations. The client may cause asymmetric resource consumption. The basic attack scenario is that the client must claim that it can only communicate with DHE, and the server must be configured to allow DHE and validate the order of the public key.
+
Remediation
+
There is no fixed version for Ubuntu:24.04 openssl.
Note: Versions mentioned in the description apply only to the upstream libgcrypt20 package and not the libgcrypt20 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
A timing-based side-channel flaw was found in libgcrypt's RSA implementation. This issue may allow a remote attacker to initiate a Bleichenbacher-style attack, which can lead to the decryption of RSA ciphertexts.
+
Remediation
+
There is no fixed version for Ubuntu:24.04 libgcrypt20.
Note: Versions mentioned in the description apply only to the upstream krb5 package and not the krb5 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
Kerberos 5 (aka krb5) 1.21.2 contains a memory leak vulnerability in /krb5/src/kdc/ndr.c.
Note: Versions mentioned in the description apply only to the upstream patch package and not the patch package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
An Invalid Pointer vulnerability exists in GNU patch 2.7 via the another_hunk function, which causes a Denial of Service.
Note: Versions mentioned in the description apply only to the upstream patch package and not the patch package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
A double free exists in the another_hunk function in pch.c in GNU patch through 2.7.6.
Note: Versions mentioned in the description apply only to the upstream krb5 package and not the krb5 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
Kerberos 5 (aka krb5) 1.21.2 contains a memory leak in /krb5/src/lib/rpc/pmap_rmt.c.
Note: Versions mentioned in the description apply only to the upstream krb5 package and not the krb5 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
Kerberos 5 (aka krb5) 1.21.2 contains a memory leak vulnerability in /krb5/src/lib/gssapi/krb5/k5sealv3.c.
Note: Versions mentioned in the description apply only to the upstream gnupg2 package and not the gnupg2 package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
GnuPG can be made to spin on a relatively small input by (for example) crafting a public key with thousands of signatures attached, compressed down to just a few KB.
Note: Versions mentioned in the description apply only to the upstream glibc package and not the glibc package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
sha256crypt and sha512crypt through 0.6 allow attackers to cause a denial of service (CPU consumption) because the algorithm's runtime is proportional to the square of the length of the password.
Note: Versions mentioned in the description apply only to the upstream git package and not the git package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
GIT version 2.15.1 and earlier contains an Input Validation Error vulnerability in the client that can result in problems ranging from corrupted terminal configuration to RCE. This attack appears to be exploitable when the user interacts with a malicious git server (or has their traffic modified in a MITM attack).
Note: Versions mentioned in the description apply only to the upstream coreutils package and not the coreutils package as distributed by Ubuntu.
+ See How to fix? for Ubuntu:24.04 relevant fixed versions and status.
+
chroot in GNU coreutils, when used with --userspec, allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer.
+
Remediation
+
There is no fixed version for Ubuntu:24.04 coreutils.