Understanding Ingress in Kubernetes

Simply put, an Ingress is a collection of routing rules, and an Ingress controller is the component that implements those rules in a Kubernetes cluster. Ingress was not available in any Kubernetes release prior to 1.1, which I guess is the reason why OpenShift developed its own Router.

This is a not-so-nice diagram, but I think it gives you the big picture of what an Ingress and an Ingress controller are:

[sketch.png]

Users’ traffic comes in via the load balancer, which forwards it to its backends — normally the Ingress controller service’s NodePort on the Kubernetes nodes. The Ingress controller inspects each request to see whether it matches any Ingress rule, e.g. by host name, by path, or both. If a request matches, the Ingress controller routes it to the corresponding service; otherwise the request is routed to the default backend and gets a 404 Not Found response. As you can see in this example, the benefit of using Ingress is that only the Ingress controller service is exposed to the external world via NodePort. All other services are internal and reachable only via their cluster IP and port. That is not only more secure, but also makes it easy to manage which services are exposed.
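For instance, exposing only the Ingress controller on a NodePort might look like the following Service manifest. This is a sketch: the names, namespace, and port numbers here are illustrative, not taken from a real cluster.

```yaml
# Hypothetical Service exposing the NGINX Ingress controller pods on a
# NodePort; every other application Service can stay type ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # illustrative name
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: nginx-ingress-controller  # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080                # the port the load balancer forwards to
```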

Now let’s look into how the Ingress and the Ingress controller work together. I will take this one as an example. When I add an Ingress rule to forward any request for /foo to the service echoheaders-x in the default namespace, the NGINX Ingress controller configuration (/etc/nginx/nginx.conf) is automatically updated to fulfil this rule. This works because the Ingress controller closely watches the Ingress resources through the Kubernetes API server.
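The Ingress behind this rule can be sketched roughly as follows. The names match those in the generated NGINX configuration (Ingress echomap, service echoheaders-x on port 80, namespace default); the apiVersion reflects the Ingress API of that era and may differ on newer clusters.

```yaml
# Sketch of an Ingress routing /foo to the echoheaders-x service.
apiVersion: extensions/v1beta1   # networking.k8s.io/v1 on newer clusters
kind: Ingress
metadata:
  name: echomap
  namespace: default
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```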

This is part of the auto-generated NGINX configuration:

        location /foo {

            port_in_redirect off;

            set $proxy_upstream_name "default-echoheaders-x-80";

            set $namespace      "default";
            set $ingress_name   "echomap";
            set $service_name   "echoheaders-x";

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend
            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;
            proxy_set_header X-Forwarded-For        $the_real_ip;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         "off";
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://default-echoheaders-x-80;

            proxy_redirect                          off;
        }

        upstream upstream-default-backend {
            keepalive 32;
            # (endpoint pod IP:port elided)
            server max_fails=0 fail_timeout=0;
        }

        upstream default-echoheaders-x-80 {
            keepalive 32;
            # (endpoint pod IP:port elided)
            server max_fails=0 fail_timeout=0;
        }
An Ingress controller works much like a reverse proxy, so it can be built on NGINX or HAProxy, and it can also be a cloud service, e.g. the AWS ALB (Application Load Balancer). Oh, one more thing I think you need to pay attention to is when to enable the Proxy Protocol. If you want to keep track of the original clients’ IP addresses and your load balancer operates at L4, then you need to enable the Proxy Protocol on your load balancer (and configure the Ingress controller to accept it, since it would otherwise misparse the extra header).
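With the NGINX Ingress controller, for example, Proxy Protocol parsing is typically switched on through the controller’s ConfigMap via the `use-proxy-protocol` key. This is a sketch: the ConfigMap name and namespace depend on how the controller was deployed.

```yaml
# Hypothetical ConfigMap enabling Proxy Protocol parsing in the NGINX
# Ingress controller; the L4 load balancer in front must send Proxy
# Protocol headers as well, or NGINX will reject the connections.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # illustrative; must match the controller's --configmap flag
  namespace: kube-system
data:
  use-proxy-protocol: "true"
```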


