@@ -57,6 +57,7 @@ on how flags are set on various components.
### Network
+ #### Network Connectivity
Kubernetes has a distinctive [networking model](/docs/admin/networking).

Kubernetes allocates an IP address to each pod. When creating a cluster, you
@@ -66,34 +67,44 @@ the node is added. A process in one pod should be able to communicate with
another pod using the IP of the second pod. This connectivity can be
accomplished in two ways:

- - Configure network to route Pod IPs
-   - Harder to setup from scratch.
-   - Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
-   - Need to make the Pod IPs routable by programming routers, switches, etc.
-   - Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
-   - Generally highest performance.
- - Create an Overlay network
-   - Easier to setup
-   - Traffic is encapsulated, so per-pod IPs are routable.
-   - Examples:
+ - **Using an overlay network**
+   - An overlay network obscures the underlying network architecture from the
+     pod network through traffic encapsulation (e.g., VXLAN).
+   - Encapsulation reduces performance, though exactly how much depends on your solution.
+ - **Without an overlay network**
+   - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
+   - This does not require the encapsulation provided by an overlay, and so can achieve
+     better performance.
+
+ Which method you choose depends on your environment and requirements. There are various ways
+ to implement one of the above options:
+
+ - **Use a network plugin which is called by Kubernetes**
+   - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface (see the example configuration after this list).
+   - There are a number of solutions which provide plugins for Kubernetes:
    - [Flannel](https://github.com/coreos/flannel)
+     - [Calico](https://github.com/projectcalico/calico-containers)
    - [Weave](http://weave.works/)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
-   - Does not require "Routes" portion of Cloud Provider module.
-   - Reduced performance (exactly how much depends on your solution).
+     - [More found here](/docs/admin/networking#how-to-achieve-this)
+   - You can also write your own.
+ - **Compile support directly into Kubernetes**
+   - This can be done by implementing the "Routes" interface of a Cloud Provider module.
+   - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
+ - **Configure the network external to Kubernetes**
+   - This can be done by manually running commands, or through a set of externally maintained scripts.
+   - You have to implement this yourself, but it can give you an extra degree of flexibility.
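
To make the plugin route concrete, below is a minimal sketch of what a CNI configuration placed on a node can look like, using the reference `bridge` and `host-local` IPAM plugins. The file name, network name, and subnet are illustrative placeholders rather than values this guide prescribes; the plugin you choose (Flannel, Calico, Weave, etc.) ships its own configuration, and the exact fields vary with the CNI version.

```shell
# Illustrative placeholder only: real deployments use the configuration shipped
# by the chosen network plugin. /etc/cni/net.d is the directory the kubelet
# scans for CNI configuration by default.
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
EOF
```

The kubelet must also be told to use CNI (typically via its `--network-plugin=cni` flag on the releases this guide targets) so that it picks up this configuration.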
- You need to select an address range for the Pod IPs.
+ You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs.
- Various approaches:
- GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
Kubernetes cluster from that space, which leaves room for several clusters.
Each node gets a further subdivision of this space.
- AWS: use one VPC for the whole organization, carve off a chunk for each
cluster, or use different VPCs for different clusters.
- - IPv6 is not supported yet.
- Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
- from which smaller CIDRs are automatically allocated to each node (if nodes
- are dynamically added).
+ from which smaller CIDRs are automatically allocated to each node.
- You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
node supports 254 pods per machine and is a common choice. If IPs are
scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
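
As a rough illustration of this arithmetic and of the automatic per-node allocation described above, the sketch below assumes a hypothetical cluster of up to 256 nodes with up to 254 pods each, and lets the controller manager carve a `/24` out of one large cluster CIDR for each node. The `10.244.0.0/16` range is an example, not a requirement, and only the networking-related flags are shown; check the flag names against your kube-controller-manager version.

```shell
# 256 nodes * 254 pods/node = 65,024 pod IPs, which fits in a /16 such as
# 10.244.0.0/16. Split into per-node /24 subnets, that /16 yields exactly 256
# node subnets. With --allocate-node-cidrs, the controller manager assigns
# each node its subnet automatically as the node registers.
kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16    # other required controller-manager flags omitted
```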
@@ -116,6 +127,17 @@ Also, you need to pick a static IP for master node.
- Open any firewalls to allow access to the apiserver ports 80 and/or 443.
- Enable the IPv4 forwarding sysctl, `net.ipv4.ip_forward = 1`.
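
For example, on a typical Linux node the forwarding sysctl can be turned on immediately and persisted across reboots as follows (the persistence mechanism varies slightly by distribution):

```shell
# Enable IPv4 forwarding for the running kernel.
sysctl -w net.ipv4.ip_forward=1
# Persist the setting across reboots on systems that read /etc/sysctl.conf.
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
```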
+ #### Network Policy
+
+ Kubernetes enables the definition of fine-grained network policy between Pods
+ using the [NetworkPolicy](/docs/user-guide/networkpolicy) resource.
+
+ Not all networking providers support the Kubernetes NetworkPolicy features.
+ For clusters which choose to enable NetworkPolicy, the
+ [Calico policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
+ can enforce the NetworkPolicy API on top of native cloud-provider networking,
+ Flannel, or Calico networking.
+
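
To give a sense of what such a policy looks like, here is a minimal sketch of a NetworkPolicy that admits traffic to pods labeled `role: db` only from pods labeled `role: frontend` in the same namespace. The labels, namespace, and port are hypothetical, and the exact API group/version depends on your cluster; see the NetworkPolicy user guide linked above for the authoritative schema.

```yaml
# Illustrative only: the label values and port are placeholders, and the
# apiVersion shown is the beta API described in the linked user guide;
# adjust it to whatever your cluster version serves.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
```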
### Cluster Naming
You should pick a name for your cluster. Pick a short name for each cluster