Ingress nginx TCP service endpoint 400 Bad Request #12171

Closed
creeram opened this issue Oct 14, 2024 · 9 comments
Labels: kind/support, needs-priority, needs-triage

Comments

@creeram

creeram commented Oct 14, 2024

What happened:

telnet <IP address> 18332         
Trying IP address...
Connected to IP address
Escape character is '^]'.
HTTP/1.1 400 Bad Request
Content-Type: text/html
Connection: close
Date: Mon, 14 Oct 2024 10:57:12 GMT
Content-Length: 94

<HTML><HEAD>
<TITLE>400 Bad Request</TITLE>
</HEAD><BODY>
<H1>Bad Request</H1>
</BODY></HTML>
Connection closed by foreign host.

What you expected to happen:

telnet <IP address> 18332
Trying IP address...
Connected to IP address.
Escape character is '^]'.

NGINX Ingress controller version : 1.11.3

Kubernetes version : v1.29.6

Environment:

  • Cloud provider or hardware configuration: OVH Cloud
  • Install tools: OVH managed k8s cluster

Deployed as a DaemonSet; below are the details of the nodes:

NAME                               STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP       OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
apps-node-pool-node-52f59c         Ready    <none>   115d   v1.29.6   10.10.240.21    ******    Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
apps-node-pool-node-5d4284         Ready    <none>   116d   v1.29.6   10.10.242.127  *******     Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
apps-node-pool-node-964fdf         Ready    <none>   33d    v1.29.6   10.10.242.48   ******   Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
apps-node-pool-node-b62e1b         Ready    <none>   116d   v1.29.6   10.10.240.186   *******    Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
apps-node-pool-node-f803d5         Ready    <none>   68d    v1.29.6   10.10.242.167   *******    Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
coder-node-pool-node-1ba810        Ready    <none>   77d    v1.29.6   10.10.241.223   ********   Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
coder-node-pool-node-564556        Ready    <none>   77d    v1.29.6   10.10.240.187   ********   Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
coder-node-pool-node-b3bf10        Ready    <none>   121d   v1.29.6   10.10.243.36    ********      Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
monitoring-node-pool-node-8d91f2   Ready    <none>   92d    v1.29.6   10.10.243.142  ********     Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
monitoring-node-pool-node-c8dcae   Ready    <none>   92d    v1.29.6   10.10.240.253   *******  Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18
monitoring-node-pool-node-d0bf38   Ready    <none>   92d    v1.29.6   10.10.240.232   *******    Ubuntu 22.04.4 LTS   5.15.0-113-generic   containerd://1.7.18

  • How was the ingress-nginx-controller installed:

Deployed using the Helm chart.

Helm chart values:

tcp:
  "18332": "crypto-nodes/bitcoin:18332"
controller:
  kind: DaemonSet
  admissionWebhooks:
    enabled: false
  extraArgs:
    default-ssl-certificate: "default/<certificate-name>"
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "***********"
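
A quick way to cross-check that the controller actually picked up these values is to look at the generated tcp-services ConfigMap and the rendered nginx configuration. A minimal sketch, assuming the release and namespace names shown in the manifests below; <controller-pod> is a placeholder for one of the DaemonSet pods:

# the TCP entry should appear in the tcp-services ConfigMap
kubectl -n ingress-nginx get configmap ingress-nginx-tcp -o yaml

# the rendered config should contain a stream listener for the port
kubectl -n ingress-nginx exec <controller-pod> -c controller -- grep -n 'listen 18332' /etc/nginx/nginx.conf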

Helm-created k8s manifests:

ConfigMaps:

apiVersion: v1
items:
- apiVersion: v1
  data:
    allow-snippet-annotations: "false"
    proxy-real-ip-cidr: *********
    real-ip-header: proxy_protocol
    use-proxy-protocol: "true"
  kind: ConfigMap
  metadata:
    annotations:
      meta.helm.sh/release-name: ingress-nginx
      meta.helm.sh/release-namespace: ingress-nginx
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      app.kubernetes.io/version: 1.11.3
      helm.sh/chart: ingress-nginx-4.11.3
    name: ingress-nginx-controller
    namespace: ingress-nginx
- apiVersion: v1
  data:
    "18332": crypto-nodes/bitcoin:18332
  kind: ConfigMap
  metadata:
    annotations:
      meta.helm.sh/release-name: ingress-nginx
      meta.helm.sh/release-namespace: ingress-nginx
    labels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
      app.kubernetes.io/version: 1.11.3
      helm.sh/chart: ingress-nginx-4.11.3
    name: ingress-nginx-tcp
    namespace: ingress-nginx
   

DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "10"
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  creationTimestamp: "2024-06-10T16:45:17Z"
  generation: 10
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.3
    helm.sh/chart: ingress-nginx-4.11.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.11.3
        helm.sh/chart: ingress-nginx-4.11.3
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp
        - --enable-metrics=false
        - --default-ssl-certificate=default/<certificate-name>
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry.k8s.io/ingress-nginx/controller:v1.11.3@sha256:d56f135b6462cfc476447cfe564b83a45e8bb7da2774963b00d12161112270b7
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 18332
          name: 18332-tcp
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: false
          runAsNonRoot: true
          runAsUser: 101
          seccompProfile:
            type: RuntimeDefault
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ingress-nginx
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate



Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    loadbalancer.ovhcloud.com/class: iolb
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
    service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: v2
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.3
    helm.sh/chart: ingress-nginx-4.11.3
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.3.165.43
  clusterIPs:
  - 10.3.165.43
  externalTrafficPolicy: Local
  healthCheckNodePort: 31663
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 31851
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 32666
    port: 443
    protocol: TCP
    targetPort: https
  - name: 18332-tcp
    nodePort: 31265
    port: 18332
    protocol: TCP
    targetPort: 18332-tcp
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer


@creeram creeram added the kind/bug Categorizes issue or PR as related to a bug. label Oct 14, 2024
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 14, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@longwuyuan
Contributor

@creeram If you can provide the information asked for in the issue template, readers don't have to ask you questions to get info that is actionable for analysis. Please help out and answer the questions as asked in the template, because the feature works just fine for everyone else, so your problem is likely caused by an environment factor.

/remove-kind bug
/kind support

And in case you have not opened those ports on the LoadBalancer, you obviously need to do that.

@k8s-ci-robot k8s-ci-robot added kind/support Categorizes issue or PR as a support question. and removed kind/bug Categorizes issue or PR as related to a bug. labels Oct 14, 2024
@creeram
Author

creeram commented Oct 14, 2024

@longwuyuan Why would ports need to be opened manually on a load balancer managed by the k8s cluster? Also, the issue is not that the ports are unreachable; instead, it's throwing a 400 Bad Request error.

@longwuyuan
Contributor

So there is no data to analyze. All that is known is that you sent a telnet packet and received an HTTP 400 response.

I say this because I opened a TCP port for postgres for testing on minikube, and it works just fine. So the controller is not broken for sure.

@creeram
Author

creeram commented Oct 14, 2024

I'm not sure if the issue is specific to OVH cloud, as the same configuration works fine on my local kind Kubernetes cluster with a MetalLB load balancer.

@longwuyuan
Contributor

If you post data that can be analyzed, I can comment now, and others will comment as soon as they see it, I guess. The feature is not broken for sure.

@creeram
Author

creeram commented Oct 17, 2024

@longwuyuan I was able to fix the issue by updating the values.yaml file with the config below.

Added :PROXY at the end of the TCP config entry.

tcp:
  "18332": "crypto-nodes/bitcoin:18332:PROXY"
controller:
  kind: DaemonSet
  admissionWebhooks:
    enabled: false
  extraArgs:
    default-ssl-certificate: "default/<certificate-name>"
  service:
    externalTrafficPolicy: "Local"
    annotations:
      service.beta.kubernetes.io/ovh-loadbalancer-proxy-protocol: "v2"
  config:
    use-proxy-protocol: "true"
    real-ip-header: "proxy_protocol"
    proxy-real-ip-cidr: "***********"
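
For reference, the value format in the tcp-services ConfigMap is <namespace>/<service name>:<service port>:[PROXY]:[PROXY]. The first optional PROXY enables PROXY protocol decoding on the inbound stream listener, and the second enables sending PROXY protocol to the backend. Since the OVH load balancer is annotated to send PROXY protocol v2, the decode flag keeps the PROXY header from being passed straight through to bitcoind's HTTP-based RPC port, which would explain the immediate 400 Bad Request. A minimal sketch for confirming the flag took effect; <controller-pod> is a placeholder:

# with the :PROXY (decode) flag, the rendered stream listener is expected to carry proxy_protocol
kubectl -n ingress-nginx exec <controller-pod> -c controller -- grep -n 'listen 18332' /etc/nginx/nginx.conf
# expected output along the lines of:  listen 18332 proxy_protocol;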

@longwuyuan
Contributor

longwuyuan commented Oct 17, 2024 via email

@longwuyuan
Contributor

And if the issue does not need support anymore, please close it or confirm that the issue is solved.

@creeram creeram closed this as completed Oct 18, 2024