net/mlx5: fix RSS and queue action validation
The mlx5 PMD supports a configuration where the Rx queues managed
by DPDK are not set up. Externally allocated RQs can be used instead
by mapping them to DPDK Rx queue indexes with the
rte_pmd_mlx5_external_rx_queue_id_map() API. In this case, the mlx5
PMD allows creating flow rules which reference such external RQs.
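
For illustration, a minimal sketch of this usage follows. The function
name, port id, hardware RQ id, and the external queue index 65000 are
hypothetical, and setup and error handling are omitted; the DPDK index
passed to rte_pmd_mlx5_external_rx_queue_id_map() must fall within the
range the PMD reserves for external Rx queues, and flow rules then
reference that index:

#include <rte_flow.h>
#include <rte_pmd_mlx5.h>

/* Map an externally allocated RQ to a DPDK Rx queue index and steer
 * traffic to it, without any DPDK-managed Rx queue being set up. */
static struct rte_flow *
steer_to_external_rq(uint16_t port_id, uint32_t hw_rq_id,
		     struct rte_flow_error *err)
{
	/* Illustrative index inside the PMD's reserved external-queue range. */
	uint16_t dpdk_idx = 65000;
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = dpdk_idx };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Map the external RQ, then create a rule that references its index. */
	if (rte_pmd_mlx5_external_rx_queue_id_map(port_id, dpdk_idx, hw_rq_id) < 0)
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}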

HWS validation of unmasked RSS and QUEUE flow actions in actions
templates worked by constructing a "mock" action which was then
checked. This procedure incorrectly assumed that queue index 0 can be
used as an "always valid queue", which does not hold in the scenario
mentioned above, because queue 0 is not set up.

This patch fixes that by removing the "mock" actions, since there is
no real data available for validation. In the unmasked case, RSS and
QUEUE validation now checks only the flow attributes.
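
A hedged sketch of the template-creation case this fix targets, assuming
the asynchronous flow (template) API has been enabled beforehand with
rte_flow_configure(); the function name and port id are illustrative and
error handling is omitted. The QUEUE action below is "unmasked" because
the mask's queue index is 0, so the actual queue is supplied per rule
later; previously, validation substituted queue 0 for it, which failed
when queue 0 was not set up:

#include <rte_flow.h>

/* Create an actions template with an unmasked QUEUE action. */
static struct rte_flow_actions_template *
create_unmasked_queue_template(uint16_t port_id, struct rte_flow_error *err)
{
	const struct rte_flow_actions_template_attr attr = { .ingress = 1 };
	const struct rte_flow_action_queue queue_spec = { .index = 0 };
	const struct rte_flow_action_queue queue_mask = { .index = 0 }; /* unmasked */
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_spec },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_mask },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_actions_template_create(port_id, &attr, actions, masks, err);
}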

Fixes: d6dc072 ("net/mlx5: validate flow actions in table creation")

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
sodar authored and raslandarawsheh committed Jul 21, 2024
1 parent 4292545 commit 978350f
1 changed file: drivers/net/mlx5/mlx5_flow_hw.c (14 additions, 23 deletions)
@@ -6806,8 +6806,6 @@ mlx5_hw_validate_action_mark(struct rte_eth_dev *dev,
 					      &attr, error);
 }
 
-#define MLX5_FLOW_DEFAULT_INGRESS_QUEUE 0
-
 static int
 mlx5_hw_validate_action_queue(struct rte_eth_dev *dev,
 			      const struct rte_flow_action *template_action,
@@ -6817,22 +6815,22 @@ mlx5_hw_validate_action_queue(struct rte_eth_dev *dev,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_action_queue *queue_mask = template_mask->conf;
-	const struct rte_flow_action *action =
-		queue_mask && queue_mask->index ? template_action :
-		&(const struct rte_flow_action) {
-			.type = RTE_FLOW_ACTION_TYPE_QUEUE,
-			.conf = &(const struct rte_flow_action_queue) {
-				.index = MLX5_FLOW_DEFAULT_INGRESS_QUEUE
-			}
-		};
 	const struct rte_flow_attr attr = {
 		.ingress = template_attr->ingress,
 		.egress = template_attr->egress,
 		.transfer = template_attr->transfer
 	};
+	bool masked = queue_mask != NULL && queue_mask->index;
 
-	return mlx5_flow_validate_action_queue(action, action_flags,
-					       dev, &attr, error);
+	if (template_attr->egress || template_attr->transfer)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
+					  "QUEUE action supported for ingress only");
+	if (masked)
+		return mlx5_flow_validate_action_queue(template_action, action_flags, dev,
+						       &attr, error);
+	else
+		return 0;
 }
 
 static int
@@ -6844,22 +6842,15 @@ mlx5_hw_validate_action_rss(struct rte_eth_dev *dev,
 			    struct rte_flow_error *error)
 {
 	const struct rte_flow_action_rss *mask = template_mask->conf;
-	const struct rte_flow_action *action = mask ? template_action :
-		&(const struct rte_flow_action) {
-			.type = RTE_FLOW_ACTION_TYPE_RSS,
-			.conf = &(const struct rte_flow_action_rss) {
-				.queue_num = 1,
-				.queue = (uint16_t [1]) {
-					MLX5_FLOW_DEFAULT_INGRESS_QUEUE
-				}
-			}
-		};
 
 	if (template_attr->egress || template_attr->transfer)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
 					  "RSS action supported for ingress only");
-	return mlx5_validate_action_rss(dev, action, error);
+	if (mask != NULL)
+		return mlx5_validate_action_rss(dev, template_action, error);
+	else
+		return 0;
 }
 
 static int
