Commit 0a9d587

Merge pull request kubernetes#1174 from caseydavenport/update-scratch-networking
Update from-scratch networking section, include NetworkPolicy
2 parents 79496bb + b0ce198 commit 0a9d587

1 file changed: +38 −16 lines changed

docs/getting-started-guides/scratch.md

@@ -57,6 +57,7 @@ on how flags are set on various components.
 
 ### Network
 
+#### Network Connectivity
 Kubernetes has a distinctive [networking model](/docs/admin/networking).
 
 Kubernetes allocates an IP address to each pod. When creating a cluster, you
@@ -66,34 +67,44 @@ the node is added. A process in one pod should be able to communicate with
 another pod using the IP of the second pod. This connectivity can be
 accomplished in two ways:
 
-- Configure network to route Pod IPs
-  - Harder to setup from scratch.
-  - Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
-  - Need to make the Pod IPs routable by programming routers, switches, etc.
-  - Can be configured external to Kubernetes, or can implement in the "Routes" interface of a Cloud Provider module.
-  - Generally highest performance.
-- Create an Overlay network
-  - Easier to setup
-  - Traffic is encapsulated, so per-pod IPs are routable.
-  - Examples:
+- **Using an overlay network**
+  - An overlay network obscures the underlying network architecture from the
+    pod network through traffic encapsulation (e.g. vxlan).
+  - Encapsulation reduces performance, though exactly how much depends on your solution.
+- **Without an overlay network**
+  - Configure the underlying network fabric (switches, routers, etc.) to be aware of pod IP addresses.
+  - This does not require the encapsulation provided by an overlay, and so can achieve
+    better performance.
+
+Which method you choose depends on your environment and requirements. There are various ways
+to implement one of the above options:
+
+- **Use a network plugin which is called by Kubernetes**
+  - Kubernetes supports the [CNI](https://github.com/containernetworking/cni) network plugin interface.
+  - There are a number of solutions which provide plugins for Kubernetes:
     - [Flannel](https://github.com/coreos/flannel)
+    - [Calico](https://github.com/projectcalico/calico-containers)
     - [Weave](http://weave.works/)
     - [Open vSwitch (OVS)](http://openvswitch.org/)
-  - Does not require "Routes" portion of Cloud Provider module.
-  - Reduced performance (exactly how much depends on your solution).
+    - [More found here](/docs/admin/networking#how-to-achieve-this)
+  - You can also write your own.
+- **Compile support directly into Kubernetes**
+  - This can be done by implementing the "Routes" interface of a Cloud Provider module.
+  - The Google Compute Engine ([GCE](/docs/getting-started-guides/gce)) and [AWS](/docs/getting-started-guides/aws) guides use this approach.
+- **Configure the network external to Kubernetes**
+  - This can be done by manually running commands, or through a set of externally maintained scripts.
+  - You have to implement this yourself, but it can give you an extra degree of flexibility.
 
-You need to select an address range for the Pod IPs.
+You will need to select an address range for the Pod IPs. Note that IPv6 is not yet supported for Pod IPs.
 
 - Various approaches:
   - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` for each
     Kubernetes cluster from that space, which leaves room for several clusters.
     Each node gets a further subdivision of this space.
   - AWS: use one VPC for whole organization, carve off a chunk for each
     cluster, or use different VPC for different clusters.
-  - IPv6 is not supported yet.
   - Allocate one CIDR subnet for each node's PodIPs, or a single large CIDR
-    from which smaller CIDRs are automatically allocated to each node (if nodes
-    are dynamically added).
+    from which smaller CIDRs are automatically allocated to each node.
   - You need max-pods-per-node * max-number-of-nodes IPs in total. A `/24` per
     node supports 254 pods per machine and is a common choice. If IPs are
     scarce, a `/26` (62 pods per machine) or even a `/27` (30 pods) may be sufficient.
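The subnet-sizing figures in the hunk above can be sanity-checked with plain shell arithmetic. This is an illustrative sketch only; the 100-node cluster used for the total is an assumed example, not a figure from the guide:

```shell
# Usable pod IPs in one per-node subnet: 2^(32 - prefix_len),
# minus the network and broadcast addresses.
for prefix in 24 26 27; do
  pods=$(( (1 << (32 - prefix)) - 2 ))
  echo "/${prefix} supports ${pods} pods per node"
done

# Total pod IPs needed: max-pods-per-node * max-number-of-nodes
# (assuming 254 pods per node on a hypothetical 100-node cluster).
echo "total: $(( 254 * 100 )) pod IPs"
```

The output matches the figures quoted in the doc: 254 pods for a `/24`, 62 for a `/26`, and 30 for a `/27`.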
@@ -116,6 +127,17 @@ Also, you need to pick a static IP for master node.
 - Open any firewalls to allow access to the apiserver ports 80 and/or 443.
 - Enable ipv4 forwarding sysctl, `net.ipv4.ip_forward = 1`
 
+#### Network Policy
+
+Kubernetes enables the definition of fine-grained network policy between Pods
+using the [NetworkPolicy](/docs/user-guide/networkpolicy) resource.
+
+Not all networking providers support the Kubernetes NetworkPolicy features.
+For clusters which choose to enable NetworkPolicy, the
+[Calico policy controller addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/calico-policy-controller)
+can enforce the NetworkPolicy API on top of native cloud-provider networking,
+Flannel, or Calico networking.
+
 ### Cluster Naming
 
 You should pick a name for your cluster. Pick a short name for each cluster
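For readers of the Network Policy section added in this commit, a minimal NetworkPolicy manifest might look like the sketch below. It is illustrative only: the name and labels are invented, and it assumes the beta `extensions/v1beta1` API in use when this page was written (later releases moved the resource to `networking.k8s.io/v1`):

```yaml
# Hypothetical example: allow ingress to "db" pods only from "web" pods.
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db        # invented name
spec:
  podSelector:
    matchLabels:
      app: db                  # the policy applies to pods with this label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web         # only these pods may connect
```

As the section notes, such a policy is only enforced when the cluster's networking provider (for example, the Calico policy controller addon) supports NetworkPolicy.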

0 commit comments
