Commit
feat: add FAQ about network stack gvisor
1 parent 7b96ce2 · commit 9efafca
Showing 4 changed files with 155 additions and 0 deletions.
@@ -0,0 +1,75 @@
---
sidebar_position: 6
---

# Cannot access Service IP or Service name, but can access Pod IP?

## Answer:

The cluster's kube-proxy may be running in ipvs mode; in that case `kubevpn` needs the `--netstack gvisor` option.

For example:

- connect mode
  - `kubevpn connect --netstack gvisor`
- proxy mode
  - `kubevpn proxy deployment/authors --netstack gvisor`
- clone mode
  - `kubevpn clone deployment/authors --netstack gvisor`
- dev mode
  - `kubevpn dev deployment/authors --netstack gvisor`
## Why:

When kube-proxy runs in ipvs mode, the iptables SNAT rules that `kubevpn` relies on to reach Service IPs do not take effect, so Service IPs cannot be accessed. (Access to Pod IPs is not affected.)
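To confirm whether a cluster is affected, check which mode kube-proxy is actually running in. A quick check might look like the following (assuming a cluster where kube-proxy is configured via the `kube-proxy` ConfigMap, as kubeadm does; the ConfigMap name can differ on other installers):

```shell
# Read the proxy mode from the kube-proxy configuration
# (an empty value usually means the default, iptables)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode

# Or ask a running kube-proxy directly from a cluster node via its metrics port
curl http://localhost:10249/proxyMode
```

If this reports `ipvs`, use the `--netstack gvisor` option shown above.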
## Solution:

`kubevpn` uses [gVisor](https://github.com/google/gvisor) to access Service IPs, instead of relying on iptables.

## Reference:

https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-modes
The kube-proxy starts up in different modes, which are determined by its configuration.
On Linux nodes, the available modes for kube-proxy are:

### iptables

A mode where the kube-proxy configures packet forwarding rules using iptables.

In this mode, kube-proxy configures packet forwarding rules using the iptables API of the kernel netfilter subsystem.
For each endpoint, it installs iptables rules which, by default, select a backend Pod at random.
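As an illustration, on a node of a cluster running in iptables mode, the Service dispatch rules that kube-proxy maintains can be listed from the `nat` table (root access on the node is assumed; `KUBE-SERVICES` is the chain kube-proxy conventionally creates):

```shell
# Peek at the Service dispatch chain kube-proxy maintains in the nat table
sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20
```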
### ipvs

A mode where the kube-proxy configures packet forwarding rules using ipvs.

In ipvs mode, kube-proxy uses the kernel IPVS and iptables APIs to create rules to redirect traffic from Service IPs to endpoint IPs.

The IPVS proxy mode is based on a netfilter hook function similar to the iptables mode, but uses a hash table as the underlying data structure and works in the kernel space. That means kube-proxy in IPVS mode redirects traffic with lower latency than kube-proxy in iptables mode, with much better performance when synchronizing proxy rules. Compared to the iptables proxy mode, IPVS mode also supports a higher throughput of network traffic.
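Similarly, on a node of an ipvs-mode cluster, the virtual servers that kube-proxy programs into IPVS can be inspected with `ipvsadm` (assuming the tool is installed on the node):

```shell
# Show the IPVS virtual services (Service IPs) and their real servers (endpoint IPs)
sudo ipvsadm -Ln
```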
### nftables

A mode where the kube-proxy configures packet forwarding rules using nftables.

In this mode, kube-proxy configures packet forwarding rules using the nftables API of the kernel netfilter subsystem.
For each endpoint, it installs nftables rules which, by default, select a backend Pod at random.

The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables. The nftables proxy mode is able to process changes to service endpoints faster and more efficiently than the iptables mode, and is also able to more efficiently process packets in the kernel (though this only becomes noticeable in clusters with tens of thousands of services).
## Architecture

`gVisor` is used to access the k8s `service-cidr` and `pod-cidr` networks.

Traffic for the `inner-cidr` network is still sent to the `tun` device.
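A rough way to observe this split on a Linux host after connecting (the device name and exact routes depend on the platform and cluster, so treat this as an illustrative check rather than exact output):

```shell
# Connect using the gVisor userspace network stack
kubevpn connect --netstack gvisor

# Inspect the local routing table: cluster-bound traffic is steered to the tun
# device created by kubevpn; the diagram below shows how service-cidr/pod-cidr
# traffic is then handled by gVisor while inner-cidr traffic stays on the tun path.
ip route
```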
![connect_network_stack_gvisor.svg](img/connect_network_stack_gvisor.svg)
@@ -0,0 +1,72 @@
---
sidebar_position: 6
---

# Cannot access Service IP or Service name, but can access Pod IP?

## Answer:

The cluster's kube-proxy may be running in ipvs mode; in that case `kubevpn` needs the `--netstack gvisor` option.

For example:

- connect mode
  - `kubevpn connect --netstack gvisor`
- proxy mode
  - `kubevpn proxy deployment/authors --netstack gvisor`
- clone mode
  - `kubevpn clone deployment/authors --netstack gvisor`
- dev mode
  - `kubevpn dev deployment/authors --netstack gvisor`

## Why:

In ipvs mode, the iptables SNAT rules do not take effect, but `kubevpn` relies on iptables SNAT to reach Service IPs, so Service IPs cannot be accessed when kube-proxy has ipvs mode enabled. (Access to Pod IPs is not affected.)

## Solution:

`kubevpn` uses [gVisor](https://github.com/google/gvisor) to access Service IPs, instead of relying on iptables.

## Reference:

https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-modes

The kube-proxy starts up in different modes, which are determined by its configuration.
On Linux nodes, the available modes for kube-proxy are:

### iptables

A mode where the kube-proxy configures packet forwarding rules using iptables.

In this mode, kube-proxy configures packet forwarding rules using the iptables API of the kernel netfilter subsystem.
For each endpoint, it installs iptables rules which, by default, select a backend Pod at random.

### ipvs

A mode where the kube-proxy configures packet forwarding rules using ipvs.

In ipvs mode, kube-proxy uses the kernel IPVS and iptables APIs to create rules to redirect traffic from Service IPs to endpoint IPs.

The IPVS proxy mode is based on a netfilter hook function similar to the iptables mode, but uses a hash table as the underlying data structure and works in the kernel space. That means kube-proxy in IPVS mode redirects traffic with lower latency than kube-proxy in iptables mode, with much better performance when synchronizing proxy rules. Compared to the iptables proxy mode, IPVS mode also supports a higher throughput of network traffic.

### nftables

A mode where the kube-proxy configures packet forwarding rules using nftables.

In this mode, kube-proxy configures packet forwarding rules using the nftables API of the kernel netfilter subsystem.
For each endpoint, it installs nftables rules which, by default, select a backend Pod at random.

The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables. The nftables proxy mode is able to process changes to service endpoints faster and more efficiently than the iptables mode, and is also able to more efficiently process packets in the kernel (though this only becomes noticeable in clusters with tens of thousands of services).

## Architecture

`gVisor` is used to access the k8s `service-cidr` and `pod-cidr` networks.

Traffic for the `inner-cidr` network is still sent to the `tun` device.

![connect_network_stack_gvisor.svg](img/connect_network_stack_gvisor.svg)
4 changes: 4 additions & 0 deletions
...docusaurus-plugin-content-docs/current/faq/img/connect_network_stack_gvisor.svg