Cause of the incident

We were cutting a business application over to a new cluster. I had assumed it would be trivial: cut over at 11 o'clock, be home asleep by 12. After the traffic was cut over, verification showed that the web page opened normally, but logging in returned a 502, which left us baffled on the spot. In the relevant container's logs I found a high-frequency error: "Port 7000 cannot be connected". The business team told me this is a port of their Redis cluster: the front end and back end interact through Redis, and the cluster uses three more ports, 7001-7003.

I used the nc command to test connectivity to the Redis cluster by sending a keys * command: port 7000 returned HTTP/1.1 400 Bad Request, while the other three ports returned Redis's -NOAUTH Authentication required.

$ nc 10.0.0.6 7000
keys *
HTTP/1.1 400 Bad Request
content-length: 0
connection: close

$ nc 10.0.0.6 7003
keys *
-NOAUTH Authentication required

My first guess was that port 7000 was being answered by some other application, or at least not by Redis. However, packet captures on the host showed no traffic to port 7000 at all. I then checked the container's nf_conntrack table: the entries for port 7000 contained only a session to the local machine (the sidecar on port 15001), while port 7003 had two sessions, one to the local machine and one to the target server.

$ grep 7000 /proc/net/nf_conntrack
ipv4     2 tcp      6 110 TIME_WAIT src=10.64.192.14 dst=10.0.0.6 sport=50498 dport=7000 src=127.0.0.1 dst=10.64.192.14 sport=15001 dport=50498 [ASSURED] mark=0 zone=0 use=2

$ grep 7003 /proc/net/nf_conntrack
ipv4     2 tcp      6 104 TIME_WAIT src=10.64.192.14 dst=10.0.0.6 sport=38952 dport=7003 src=127.0.0.1 dst=10.64.192.14 sport=15001 dport=38952 [ASSURED] mark=0 zone=0 use=2
ipv4     2 tcp      6 104 TIME_WAIT src=10.64.192.14 dst=10.0.0.6 sport=38954 dport=7003 src=10.0.0.6 dst=10.64.192.14 sport=7003 dport=38954 [ASSURED] mark=0 zone=0 use=2

From this I concluded that istio was not forwarding the port 7000 traffic, which hit a blind spot in my knowledge. Several of us sat sweating in an office air-conditioned to 26°C. In the end, after discussing with the business team, we turned off istio injection to restore service first. Afterwards I went back and studied how istio handles this traffic, and the problem was eventually understood and solved. This post records the relevant information for future reference.

Supplementary background knowledge

The modes of the istio Sidecar

The istio Sidecar's outbound traffic policy has two modes; a sketch of switching between them follows this list:

  • ALLOW_ANY : the istio proxy allows calls to unknown services; blacklist mode.
  • REGISTRY_ONLY : the istio proxy blocks any host that has no HTTP service or ServiceEntry defined in the mesh; whitelist mode.
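
A hedged sketch of switching the mesh-wide mode (the exact mechanism depends on how istio was installed; here I assume the mesh config lives in the istio configmap in istio-system, which matches the configmap excerpt shown later in this post):

# Check the current outbound policy, then edit it (ALLOW_ANY -> REGISTRY_ONLY).
$ kubectl -n istio-system get cm istio -o yaml | grep -w mode
$ kubectl -n istio-system edit cm istio   # change outboundTrafficPolicy.mode to REGISTRY_ONLY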

istio-proxy (Envoy) configuration structure

The proxy configuration of istio-proxy (Envoy) is roughly composed of the following parts:

  • Cluster : in Envoy, a Cluster is a service cluster. A Cluster contains one or more endpoints, each of which can serve requests; Envoy distributes requests among these endpoints according to the load-balancing algorithm. Clusters come in two kinds, inbound and outbound: the former corresponds to the services on the node where Envoy runs, while the latter makes up the vast majority and corresponds to services external to that node. You can view the inbound and outbound clusters with istioctl, as shown in the sketch after this list.
  • Listeners : Envoy uses listeners to receive and process downstream requests. A listener can be associated with a Cluster directly, or it can reference routing rules (Routes) via RDS and then apply finer-grained handling based on the destination of each request.
  • Routes : Envoy's routing rules. In the default routes pushed by istio, a routing rule is created for each port (service), and requests are matched and dispatched by host; the target of a route is the cluster of some other service.
  • Endpoint : the backend instances behind a cluster. You can view the endpoints of inbound and outbound clusters with istioctl proxy-config endpoint.
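
A hedged sketch of inspecting each of these objects with istioctl (the pod name is the one that appears later in this post; substitute your own):

# Dump the Envoy configuration pushed to a Sidecar.
$ istioctl proxy-config clusters  test-8c4c9dcb9-kpm8n.dev-self   # inbound/outbound clusters
$ istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self   # listeners
$ istioctl proxy-config routes    test-8c4c9dcb9-kpm8n.dev-self   # routing rules
$ istioctl proxy-config endpoints test-8c4c9dcb9-kpm8n.dev-self   # endpoints behind each cluster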

Service discovery types

The main cluster service discovery types are listed below, with a quick way to check them sketched after the list:

  • ORIGINAL_DST : when forwarding a request, Envoy uses the original destination IP address carried by the downstream request.
  • EDS : Envoy obtains all available Endpoints of the Cluster via EDS and distributes downstream requests among them according to the load-balancing algorithm (Round Robin by default). istio automatically creates proxy configuration for every service in the cluster; the listener information is derived from the service, and the corresponding cluster is marked as the EDS type.
  • STATIC : the default value; all proxyable host Endpoints are listed in the cluster. When the list is empty, no forwarding is performed.
  • LOGICAL_DNS : Envoy resolves hosts via DNS and adds newly returned addresses, but a host is not discarded when DNS stops returning it.
  • STRICT_DNS : Envoy keeps watching DNS, and every matching A record is considered a valid endpoint.
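
A hedged way to check which discovery type istio has assigned to each cluster of a pod (pod name illustrative; clusters whose JSON omits the type field use the STATIC default):

$ istioctl proxy-config clusters test-8c4c9dcb9-kpm8n.dev-self -o json | grep '"type"'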

Two special clusters

BlackHoleCluster : the black-hole cluster; traffic matched to this cluster is not forwarded.

{
    "name": "BlackHoleCluster",
    "type": "STATIC",
    "connectTimeout": "10s"
}

The type is STATIC, but no endpoints are listed, so the traffic is not forwarded.

PassthroughCluster : the passthrough cluster; the destination IP of traffic matched to this cluster is left unchanged.

{
    "name": "PassthroughCluster",
    "type": "ORIGINAL_DST",
    "connectTimeout": "10s",
    "lbPolicy": "CLUSTER_PROVIDED",
    "circuitBreakers": {
       "thresholds": [
          {
              "maxConnections": 4294967295,
               "maxPendingRequests": 4294967295,
               "maxRequests": 4294967295,
               "maxRetries": 4294967295
            }
        ]
    }
}

The type is ORIGINAL_DST, so the traffic is forwarded to its original destination as-is.

A special listener

There is a special listener called virtualOutbound , which works as follows:

  • virtualOutbound : every Sidecar has a listener bound to 0.0.0.0:15001, and many virtual listeners are associated with it. iptables first redirects all outbound traffic to this listener. The listener has a field useOriginalDst set to true, which means a request is handed to the virtual listener that best matches its original destination; if no matching virtual listener is found, the request is sent straight to the PassthroughCluster, which connects to the packet's original destination.

The precise meaning of the useOriginalDst field: if a connection has been redirected by iptables, the destination address the proxy receives it on may differ from the original destination address. When this flag is set to true, the listener hands the redirected traffic over to the listener associated with the original destination address; if no listener is associated with the original destination address, the traffic is processed by the listener that received it. The default is false.
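
A hedged sketch of dumping this listener yourself (pod name illustrative; 15001 is the port virtualOutbound binds to):

$ istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self --port 15001 -o json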

The traffic processing flow of virtualOutbound is shown in the figure:

This is part of the configuration of virtualOutbound:

{
   "name": "envoy.tcp_proxy",
   "typedConfig": {
      "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
      "statPrefix": "PassthroughCluster",
      "cluster": "PassthroughCluster"
     }
}
……………
"useOriginalDst": true

istio's outbound traffic processing

After traffic management is enabled, the forwarding path for a pod accessing external resources is shown in the figure:

After istio injection, istio-proxy listens on port 15001, and all outbound traffic generated by non-istio-proxy user processes is redirected to 15001 by iptables rules.

# Ports listened on inside a Sidecar-injected pod
$ ss -tulnp
State       Recv-Q Send-Q                                 Local Address:Port                                                Peer Address:Port
LISTEN      0      128                                                *:80                                                             *:*
LISTEN      0      128                                                *:15090                                                          *:*
LISTEN      0      128                                        127.0.0.1:15000                                                          *:*
LISTEN      0      128                                                *:15001                                                          *:*
LISTEN      0      128                                                *:15006                                                          *:*
LISTEN      0      128                                             [::]:15020                                                       [::]:*

# iptables rules inside the Pod
$ iptables-save
# Generated by iptables-save v1.4.21 on Fri Sep 17 13:47:09 2021
*nat
:PREROUTING ACCEPT [129886:7793160]
:INPUT ACCEPT [181806:10908360]
:OUTPUT ACCEPT [53409:3257359]
:POSTROUTING ACCEPT [53472:3261139]
:istio_INBOUND - [0:0]
:istio_IN_REDIRECT - [0:0]
:istio_OUTPUT - [0:0]
:istio_REDIRECT - [0:0]
-A PREROUTING -p tcp -j istio_INBOUND
-A OUTPUT -p tcp -j istio_OUTPUT
-A istio_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A istio_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A istio_INBOUND -p tcp -j istio_IN_REDIRECT
-A istio_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A istio_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A istio_OUTPUT ! -d 127.0.0.1/32 -o lo -j istio_IN_REDIRECT
-A istio_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A istio_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A istio_OUTPUT -d 127.0.0.1/32 -j RETURN
-A istio_OUTPUT -j istio_REDIRECT
-A istio_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Fri Sep 17 13:47:09 2021

After istio-proxy receives the traffic, the general processing steps are as follows:

  • In ALLOW_ANY mode, traffic that does not match any listener is forwarded by the proxy as-is.
  • Traffic that hits a listener associated with a cluster of type ORIGINAL_DST is forwarded to the original destination IP of the request.
  • Traffic matched to BlackHoleCluster is not forwarded.
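
A hedged way to see which of these paths traffic is actually taking is to read the Envoy stats from the admin port (15000, visible in the ss output above); this assumes curl is available in the istio-proxy container:

# Counters for the passthrough and black-hole clusters (pod name illustrative).
$ kubectl -n dev-self exec test-8c4c9dcb9-kpm8n -c istio-proxy -- \
    curl -s localhost:15000/stats | grep -E 'PassthroughCluster|BlackHoleCluster'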

The matching steps of proxied traffic are roughly as follows:

Question: the listeners istio creates for Services are bound to 0.0.0.0, and the same port is reused by many Services in the cluster. How, then, does istio tell the traffic apart?

The key lies in the route. A route is composed of virtual_host entries, which are generated from the Service definitions. When a Service inside the cluster is accessed, the route can be matched precisely by the Service's domain name or virtual IP, so there is no ambiguity.

$ kubectl get svc -A | grep 8001
NodePort    10.233.34.158   <none>        8001:30333/TCP                                                                                                                               8d
NodePort    10.233.9.105    <none>        8001:31717/TCP                                                                                                                               8d
NodePort    10.233.60.59    <none>        8001:31135/TCP                                                                                                                               2d16h
NodePort    10.233.18.212   <none>        8001:32407/TCP                                                                                                                               8d
NodePort    10.233.15.5     <none>        8001:30079/TCP                                                                                                                               8d
NodePort    10.233.59.21    <none>        8001:31103/TCP                                                                                                                               8d
NodePort    10.233.17.123   <none>        8001:31786/TCP                                                                                                                               8d
NodePort    10.233.9.196    <none>        8001:32662/TCP                                                                                                                               8d
NodePort    10.233.62.85    <none>        8001:32104/TCP                                                                                                                               8d
ClusterIP     10.233.49.245   <none>        8000/TCP,8001/TCP,8443/TCP,8444/TCP
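
A hedged sketch of dumping the route configuration for this port (pod name illustrative; --name selects the route named after the port):

$ istioctl proxy-config routes test-8c4c9dcb9-kpm8n.dev-self --name 8001 -o json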

These are some of the virtual_host entries under the route for port 8001:

    {
        "name": "8001",
        "virtualHosts": [
            {
                "name": "merchant-center.open.svc.cluster.local:8001",
                "domains": [
                    "merchant-center.open.svc.cluster.local",
                    "merchant-center.open.svc.cluster.local:8001",
                    "merchant-center.open",
                    "merchant-center.open:8001",
                    "merchant-center.open.svc.cluster",
                    "merchant-center.open.svc.cluster:8001",
                    "merchant-center.open.svc",
                    "merchant-center.open.svc:8001",
                    "10.233.60.59",
                    "10.233.60.59:8001"
                ],
                "routes": [
                    {
                        "name": "default",
                        "match": {
                            "prefix": "/"
                        },
                        "route": {
                            "cluster": "outbound|8001||merchant-center.open.svc.cluster.local",
                            "timeout": "0s",
                            "retryPolicy": {
                                "retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
                                "numRetries": 2,
                                "retryHostPredicate": [
                                    {
                                        "name": "envoy.retry_host_predicates.previous_hosts"
                                    }
                                ],
                                "hostSelectionRetryMaxAttempts": "5",
                                "retriableStatusCodes": [
                                    503
                                ]
                            },
                            "maxGrpcTimeout": "0s"
                        },
…………………
{
                "name": "cashier-busi-svc.pay.svc.cluster.local:8001",
                "domains": [
                    "cashier-busi-svc.pay.svc.cluster.local",
                    "cashier-busi-svc.pay.svc.cluster.local:8001",
                    "cashier-busi-svc.pay",
                    "cashier-busi-svc.pay:8001",
                    "cashier-busi-svc.pay.svc.cluster",
                    "cashier-busi-svc.pay.svc.cluster:8001",
                    "cashier-busi-svc.pay.svc",
                    "cashier-busi-svc.pay.svc:8001",
                    "10.233.17.123",
                    "10.233.17.123:8001"
                ],
…………………
            {
                "name": "center-job.manager.svc.cluster.local:8001",
                "domains": [
                    "center-job.manager.svc.cluster.local",
                    "center-job.manager.svc.cluster.local:8001",
                    "center-job.manager",
                    "center-job.manager:8001",
                    "center-job.manager.svc.cluster",
                    "center-job.manager.svc.cluster:8001",
                    "center-job.manager.svc",
                    "center-job.manager.svc:8001",
                    "10.233.34.158",
                    "10.233.34.158:8001"
                ],
……………

Problem analysis

Based on the above, I filtered the Services in the cluster and finally found one that uses port 7000:

(Figure: the svc that uses port 7000)

istio will automatically generate a 0.0.0.0:7000 listener for 10.233.0.115:7000:

ADDRESS      PORT     TYPE
0.0.0.0         7000      TCP
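
A hedged sketch of locating this listener with istioctl (pod name illustrative; add -o json to get the full configuration discussed next):

$ istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self --port 7000
$ istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self --port 7000 -o json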

Looking at the detailed configuration: in this listener, TCP traffic is sent to BlackHoleCluster, i.e. not forwarded. Traffic destined for 10.0.x.x:7000 is therefore matched by the 0.0.0.0_7000 listener, and because it is plain TCP (nc uses TCP by default), the proxy does not forward it. This is consistent with the observation at the beginning that no traffic ever left the pod.

{
    "name": "0.0.0.0_7000",
    "address": {
        "socketAddress": {
            "address": "0.0.0.0",
            "portValue": 7000
        }
    },
    "filterChains": [
        {
            "filterChainMatch": {
                "prefixRanges": [
                    {
                        "addressPrefix": "10.64.x.x",
                        "prefixLen": 32
                    }
                ]
            },
            "filters": [
                {
                    "name": "envoy.tcp_proxy",
                    "typedConfig": {
                        "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
                        "statPrefix": "BlackHoleCluster",
                        "cluster": "BlackHoleCluster"
                    }
                }
            ]
        }
    ]
}

As for why 7001-7003 worked: istio-proxy runs in ALLOW_ANY mode by default, so traffic that matches no listener is passed through directly. You can verify this from the istio configmap:

$ kubectl get cm istio -n istio-system -o yaml | grep -i -w -a3 "mode"
    # REGISTRY_ONLY - restrict outbound traffic to services defined in the service registry as well
    #   as those defined through ServiceEntries
    outboundTrafficPolicy:
      mode: ALLOW_ANY
    localityLbSetting:
      enabled: true
    # The namespace to treat as the administrative root namespace for istio
--
      drainDuration: 45s
      parentShutdownDuration: 1m0s
      #
      # The mode used to redirect inbound connections to Envoy. This setting
      # has no effect on outbound traffic: iptables REDIRECT is always used for
      # outbound connections.
      # If "REDIRECT", use iptables REDIRECT to NAT and redirect to Envoy.
      # The "REDIRECT" mode loses source addresses during redirection.
      # If "TPROXY", use iptables TPROXY to redirect to Envoy.
      # The "TPROXY" mode preserves both the source and destination IP
      # addresses and ports, so that they can be used for advanced filtering
      # and manipulation.
      # The "TPROXY" mode also configures the Sidecar to run with the
      # CAP_NET_ADMIN capability, which is required to use TPROXY.
      #interceptionMode: REDIRECT
      #

Solutions

Finally, let's solve the problem mentioned at the beginning. There are three solutions in total.

Method 1: Service Entry

ServiceEntry is one of istio's important resource objects. Its purpose is to register external resources into istio's service mesh, so that traffic to external resources can be controlled with the same fine granularity as mesh-internal traffic. You can think of it simply as a whitelist: istio generates listeners based on the contents of the ServiceEntry.

We add the following configuration in the namespace dev-self-pc-ct:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: rediscluster
  namespace: dev-self
spec:
  hosts:
  - redis
  addresses:
  - 10.0.x.x/32
  ports:
  - number: 7000
    name: redis-7000
    protocol: tcp
  - number: 7001
    name: redis-7001
    protocol: tcp
  - number: 7002
    name: redis-7002
    protocol: tcp
  - number: 7003
    name: redis-7003
    protocol: tcp
  resolution: NONE
  location: MESH_EXTERNAL
EOF

View listener:

$ ./istioctl proxy-config listeners test-8c4c9dcb9-kpm8n.dev-self --address 10.0.x.x -o json

[
    {
        "name": "10.0.x.x_7000",
        "address": {
            "socketAddress": {
                "address": "10.0.x.x",
                "portValue": 7000
            }
        },
        "filterChains": [
            {
                "filters": [
                    {
                        "name": "mixer",
                        "typedConfig": {
                            "@type": "type.googleapis.com/istio.mixer.v1.config.client.TcpClientConfig",
                            "transport": {
                                "networkFailPolicy": {
                                    "policy": "FAIL_CLOSE",
                                    "baseRetryWait": "0.080s",
                                    "maxRetryWait": "1s"
                                },
                                "checkCluster": "outbound|9091||istio-policy.istio-system.svc.cluster.local",
                                "reportCluster": "outbound|9091||istio-telemetry.istio-system.svc. cluster.local",
                                "reportBatchMaxEntries": 100,
                                "reportBatchMaxTime": "1s"
                            },
                            "mixerAttributes": {
                                "attributes": {
                                    "context.proxy_version": {
                                        "stringValue": "1.4.8"
                                    },
                                    "context.reporter.kind": {
                                        "stringValue": "outbound"
                                    },
                                    "context.reporter.uid": {
                                        "stringValue": "kubernetes://test-8c4c9dcb9-kpm8n.dev-self"
                                    },
                                    "destination.service.host": {
                                        "stringValue": "redis"
                                    },
                                    "destination.service.name": {
                                        "stringValue": "redis"
                                    },
                                    "destination.service.namespace": {
                                        "stringValue": "dev-self "
                                    },
                                    "source.namespace": {
                                        "stringValue": "dev-self "
                                    },
                                    "source.uid": {
                                        "stringValue": "kubernetes://test-8c4c9dcb9-kpm8n.dev-self"
                                    }
                                }
                            },
                            "disableCheckCalls": true
                        }
                    },
                    {
                        "name": "envoy.tcp_proxy",
                        "typedConfig": {
                            "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
                            "statPrefix": "outbound|7000||redis",
                            "cluster": "outbound|7000||redis"
                        }
                    }
                ]
            }
        ],
        "deprecatedV1": {
            "bindToPort": false
        },
        "listenerFiltersTimeout": "0.100s",
        "continueOnListenerFiltersTimeout": true,
        "trafficDirection": "OUTBOUND"
    },
    ......
]

We can see that TCP traffic hitting the listener "10.0.x.x_7000" is associated with the outbound|7000||redis cluster, whose type is ORIGINAL_DST : the destination address of the original packet is preserved and no Service is associated.

Therefore, the destination address of traffic to 10.0.x.x:7000 is no longer changed.
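
A hedged way to dump this cluster's configuration (pod name illustrative; --fqdn filters by the ServiceEntry host):

$ istioctl proxy-config clusters test-8c4c9dcb9-kpm8n.dev-self --fqdn redis -o json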

{
    "name": "outbound|7000||redis",
    "type": "ORIGINAL_DST",
    "connectTimeout": "10s",
    "lbPolicy": "CLUSTER_PROVIDED",
    "circuitBreakers": {
    "thresholds": [
       {
          "maxConnections": 4294967295,
          "maxPendingRequests": 4294967295,
          "maxRequests": 4294967295,
          "maxRetries": 4294967295
       }
     ]
  }
}

Test access again:

$ nc 10.0.0.6 7000
keys *
-NOAUTH Authentication required.

$ nc 10.0.0.7 7000
keys *
-NOAUTH Authentication required.

$ nc 10.0.0.8 7000
keys *
-NOAUTH Authentication required.

Traffic is now forwarded normally.

Method 2: Change the global.proxy.includeIPRanges or global.proxy.excludeIPRanges configuration options

  • global.proxy.includeIPRanges : the IP ranges that should be proxied.
  • global.proxy.excludeIPRanges : the IP ranges that should not be proxied.

The net effect is that the pod's iptables rules include or exclude the corresponding addresses: traffic to excluded addresses is not redirected to istio-proxy but is sent out directly.

We can update the istio-sidecar-injector configuration with kubectl apply, or set the traffic.sidecar.istio.io/includeOutboundIPRanges (or excludeOutboundIPRanges) annotation under spec.template.metadata.annotations to achieve the same effect.

  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: '2021-06-09T21:59:10+08:00'
        kubesphere.io/restartedAt: '2021-09-13T17:07:03.082Z'
        logging.kubesphere.io/logsidecar-config: '{}'
        sidecar.istio.io/componentLogLevel: 'ext_authz:trace,filter:debug'
        sidecar.istio.io/inject: 'true'
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.0.1.0/24
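
Equivalently, a hedged one-liner that adds the annotation to an existing Deployment (the Deployment name is illustrative); note that patching the pod template triggers a rolling restart:

$ kubectl -n dev-self patch deployment test --type merge -p \
    '{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/excludeOutboundIPRanges":"10.0.x.x/24"}}}}}'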

The pod's iptables rules will then let traffic destined for 10.0.x.x/24 through untouched, so it is forwarded normally:

# Generated by iptables-save v1.4.21 on Fri Sep 17 14:26:10 2021
*nat
:PREROUTING ACCEPT [131058:7863480]
:INPUT ACCEPT [183446:11006760]
:OUTPUT ACCEPT [53889:3286544]
:POSTROUTING ACCEPT [53953:3290384]
:istio_INBOUND - [0:0]
:istio_IN_REDIRECT - [0:0]
:istio_OUTPUT - [0:0]
:istio_REDIRECT - [0:0]
-A PREROUTING -p tcp -j istio_INBOUND
-A OUTPUT -p tcp -j istio_OUTPUT
-A istio_INBOUND -p tcp -m tcp --dport 22 -j RETURN
-A istio_INBOUND -p tcp -m tcp --dport 15020 -j RETURN
-A istio_INBOUND -p tcp -j istio_IN_REDIRECT
-A istio_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006
-A istio_OUTPUT -s 127.0.0.6/32 -o lo -j RETURN
-A istio_OUTPUT ! -d 127.0.0.1/32 -o lo -j istio_IN_REDIRECT
-A istio_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A istio_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A istio_OUTPUT -d 127.0.0.1/32 -j RETURN
-A istio_OUTPUT -d  10.0.0.0/24  -j RETURN
-A istio_OUTPUT -j istio_REDIRECT
-A istio_REDIRECT -p tcp -j REDIRECT --to-ports 15001
COMMIT
# Completed on Fri Sep 17 14:26:10 2021

Method 3: Create a Service for the external service

This method exploits the fact that istio automatically generates listeners for every svc in the cluster: we manually create a Service and Endpoints inside the cluster for the external service:

apiVersion: v1
kind: Endpoints
metadata:
  name: rediscluster
  labels:
    name: rediscluster
    app: redis-jf
    user: jf
  namespace: dev-self
subsets:
  - addresses:
      - ip: 10.0.x.x
      - ip: 10.0.x.x
      - ip: 10.0.x.x
    ports:
      - name: tcp-7000
        port: 7000
      - name: tcp-7001
        port: 7001
      - name: tcp-7002
        port: 7002
      - name: tcp-7003
        port: 7003
---
apiVersion: v1
kind: Service
metadata:
  name: rediscluster
  namespace: dev-self
spec:
  ports:
  - name: tcp-7000
    protocol: TCP
    port: 7000
    targetPort: 7000
  - name: tcp-7001
    protocol: TCP
    port: 7001
    targetPort: 7001
  - name: tcp-7002
    protocol: TCP
    port: 7002
    targetPort: 7002
  - name: tcp-7003
    protocol: TCP
    port: 7003
    targetPort: 7003
  selector:
    name: rediscluster
    app: redis-jf
    user: jf
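
A hedged sketch of applying and checking this (the file name is illustrative):

$ kubectl -n dev-self apply -f rediscluster.yaml
$ kubectl -n dev-self describe svc rediscluster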

After applying the above configuration, istio will automatically generate a service_ip+port listener. Service information is as follows:

Selector:          app=redis-jf,name=rediscluster,user=jf
Type:              ClusterIP
IP:                10.233.40.115
Port:              tcp-7000  7000/TCP
TargetPort:        7000/TCP
Endpoints:         <none>
Port:              tcp-7001  7001/TCP
TargetPort:        7001/TCP
Endpoints:         <none>
Port:              tcp-7002  7002/TCP
TargetPort:        7002/TCP
Endpoints:         <none>
Port:              tcp-7003  7003/TCP
TargetPort:        7003/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>

Part of the Listener information is as follows:

{
   "name": "10.233.59.159_7000",
   "address": {
      "socketAddress": {
         "address": "10.233.59.159",
         "portValue": 7000
         }
     },
      "filterChains": [
          {
             "filters": [
                 {
                   "name": "mixer",
                   "typedConfig": {
                     "@type": "type.googleapis.com/istio.mixer.v1.config.client.TcpClientConfig",
                     "transport": {
                       "networkFailPolicy": {
                          "policy": "FAIL_CLOSE",
                          "baseRetryWait": "0.080s",
                          "maxRetryWait": "1s"
                        },
                     "checkCluster": "outbound|9091||istio-policy.istio-system.svc.cluster.local",
                     "reportCluster": "outbound|9091||istio-telemetry.istio-system.svc.cluster.local",
                     "reportBatchMaxEntries": 100,
                     "reportBatchMaxTime": "1s"
                   },
                   "mixerAttributes": {
                      "attributes": {
                         "context.proxy_version": {
                         "stringValue": "1.4.8"
                     },
......

The listener points to a cluster:

{
  "name": "envoy.tcp_proxy",
  "typedConfig": {
      "@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
      "statPrefix": "outbound|7000||redis",
      "cluster": "outbound|7000||redis"
    }
}

The corresponding service information is as follows:

You can see that the endpoint is the external server address we specified earlier:

Perform an access test:

It can be accessed normally.

Summary

Finally, let's compare these three methods.

  • Method 1 : add a ServiceEntry to allow access to the external service. This lets you use the full feature set of the istio service mesh to call services inside or outside the cluster, and it is the officially recommended approach.
  • Method 2 : bypass the istio Sidecar proxy so your services can reach any external service directly. However, configuring the proxy this way requires knowledge of the cluster provider's network configuration, you lose monitoring of external service access, and istio features cannot be applied to that traffic.
  • Method 3 : compared with the other two, this is a bit more complex to configure, and the external service must be accessed through the Service, which means existing applications may need to be modified. Whether it is feasible depends on your situation.

Method 1 behaves like a whitelist: it not only grants access to the external service, but also lets that service be treated like an in-cluster service (istio's traffic-management features apply to it). In addition, even if a workload is compromised, the whitelist makes it impossible (or at least much harder) for an intruder to send traffic back to their own machine, which further improves security.

Method 2 bypasses the istio Sidecar proxy entirely, so services can reach any external address directly, but it requires knowledge of the cluster provider's network configuration, and you lose both monitoring of the external traffic and the ability to apply istio features to it.

Method 3 can also manage external traffic with istio's traffic-control features, but in practice it brings more complex configuration and may require application changes.

Therefore, method 1 is strongly recommended. One final reminder: setting includeOutboundIPRanges to empty is problematic. It effectively configures all services to bypass the Sidecar; the Sidecar then does nothing at all, and istio without its Sidecar has no soul.
