@@ -67,38 +67,44 @@ the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:

- - **Configure underlay network to route Pod IPs**
-   - Harder to setup from scratch.
-   - Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
-   - Need to make the Pod IPs routable by programming routers, switches, etc.
-   - This can be done in a few different ways:
-     - Implement in the "Routes" interface of a Cloud Provider module.
-     - Manually configure static routing external to Kubernetes.
-   - Generally highest performance.
- - **Use a network plugin**
-   - Easier to setup
-   - Pod IPs are made accessible through route distribution or encapsulation.
-   - Examples:
+ - **Using an overlay network**
+   - An overlay network obscures the underlying network architecture from the
+     pod network through traffic encapsulation (e.g. vxlan; see the sketch after this list).
+   - Encapsulation reduces performance, though exactly how much depends on your solution.
+ - **Without an overlay network**
+   - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
+   - This does not require the encapsulation provided by an overlay, and so can achieve
+     better performance.
+
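A minimal sketch of what "encapsulation" means here, assuming Linux nodes with
iproute2; the interface name and VXLAN ID are made-up examples, and overlay
plugins such as Flannel create and manage equivalent devices for you:

```shell
# Illustrative only: a VXLAN device wraps pod traffic in UDP packets
# (port 4789), so the underlying network only sees node-to-node traffic.
# An overlay plugin would also program forwarding entries and routes on top.
ip link add vxlan0 type vxlan id 42 dstport 4789 dev eth0
ip link set vxlan0 up
```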
+ Which method you choose depends on your environment and requirements. There are various ways
+ to implement one of the above options:
+
+ - **Use a network plugin which is called by Kubernetes**
+   - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
+   - There are a number of solutions which provide plugins for Kubernetes (a sample plugin configuration is sketched after this list):
    - [Flannel](https://github.com/coreos/flannel)
    - [Calico](https://github.com/projectcalico/calico-containers)
    - [Weave](http://weave.works/)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
-   - Does not require "Routes" portion of Cloud Provider module.
-   - Reduced performance (exactly how much depends on your solution).
-   - More information on network plugins can be found [here](/docs/admin/networking#how-to-achieve-this).
+     - [More can be found here](/docs/admin/networking#how-to-achieve-this)
+   - You can also write your own.
+ - **Compile support directly into Kubernetes**
+   - This can be done by implementing the "Routes" interface of a Cloud Provider module.
+   - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
+ - **Configure the network external to Kubernetes**
+   - This can be done by manually running commands (see the sketch after this list), or through a set of externally maintained scripts.
+   - You have to implement this yourself, but it can give you an extra degree of flexibility.
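Two hedged sketches of the options above. First, a sample configuration for the
standard CNI `bridge` plugin; the file path is the conventional CNI config
directory, but the network name and subnet are made-up examples:

```shell
# Hypothetical CNI config: each pod gets a veth attached to the cni0 bridge,
# with pod IPs handed out by the host-local IPAM plugin from this node's CIDR.
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
EOF
```

Second, configuring the network external to Kubernetes can be as simple as
adding static routes by hand; the addresses here are again examples:

```shell
# On every node (or on a shared router), route each peer node's pod CIDR
# to that node's own IP. Repeat for every node in the cluster.
ip route add 10.244.2.0/24 via 10.240.0.3
```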

- You need to select an address range for the Pod IPs.
+ You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs.

- Various approaches:
  - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
    Kubernetes cluster from that space, which leaves room for several clusters.
    Each node gets a further subdivision of this space.
  - AWS: use one VPC for the whole organization, carve off a chunk for each
    cluster, or use a different VPC for different clusters.
-   - IPv6 is not supported yet.
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
-   from which smaller CIDRs are automatically allocated to each node (if nodes
-   are dynamically added).
+   from which smaller CIDRs are automatically allocated to each node (as sketched after this list).
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
  node supports 254 pods per machine and is a common choice. If IPs are
  scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
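For example, automatic per-node allocation from a single large CIDR looks
roughly like the following sketch, assuming `10.244.0.0/16` is a free private
range in your environment (any non-conflicting range works):

```shell
# Have Kubernetes carve a /24 (254 pod IPs) out of the cluster CIDR for each
# node as it registers. Flag availability can vary by Kubernetes version.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --node-cidr-mask-size=24
```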
@@ -126,8 +132,9 @@ Also, you need to pick a static IP for the master node.
Kubernetes enables the definition of fine-grained network policy between Pods
using the [NetworkPolicy](/docs/user-guide/networkpolicy) resource.

- For clusters which choose to enable NetworkPolicy, the Calico
- [policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
+ Not all networking providers support the Kubernetes NetworkPolicy features.
+ For clusters which choose to enable NetworkPolicy, the
+ [Calico policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
can enforce the NetworkPolicy API on top of native cloud-provider networking,
Flannel, or Calico networking.

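To illustrate, a minimal NetworkPolicy sketch, assuming a networking provider
that actually enforces policy; the labels and names are made-up examples, and
the beta API also requires isolation to be enabled on the namespace via an
annotation (see the NetworkPolicy doc linked above):

```shell
# Allow only pods labeled role=frontend to reach pods labeled app=db.
kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
EOF
```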