
Commit 1ec4295

Increase memory for vagrant slave nodes to 2048
With the current default of 1024 MB, I am not able to spawn all of the kube-system containers required for the e2e tests. I had problems with the heapster replicas, and it was obvious from the kube-scheduler logs that they were stuck in Pending exactly because of insufficient memory.

To reproduce:
1. KUBERNETES_PROVIDER=vagrant ./cluster/kube-up.sh
2. Run any e2e test

Change-Id: I963347e86c7129607f07ce1cea8cc0b536b09b72
1 parent 843d7cd commit 1ec4295

File tree

1 file changed (+1, -1)


Vagrantfile

Lines changed: 1 addition & 1 deletion
@@ -111,7 +111,7 @@ end
 # When doing Salt provisioning, we copy approximately 200MB of content in /tmp before anything else happens.
 # This causes problems if anything else was in /tmp or the other directories that are bound to tmpfs device (i.e /run, etc.)
 $vm_master_mem = (ENV['KUBERNETES_MASTER_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1280).to_i
-$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 1024).to_i
+$vm_node_mem = (ENV['KUBERNETES_NODE_MEMORY'] || ENV['KUBERNETES_MEMORY'] || 2048).to_i

 Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
   if Vagrant.has_plugin?("vagrant-proxyconf")
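
For reference, the per-node memory can still be overridden at bring-up through the environment variables read above; a minimal sketch, where the 3072 value is only illustrative:

# Illustrative override of the per-node VM memory: KUBERNETES_NODE_MEMORY wins over
# KUBERNETES_MEMORY, and the hard-coded 2048 is only the final fallback.
KUBERNETES_PROVIDER=vagrant KUBERNETES_NODE_MEMORY=3072 ./cluster/kube-up.sh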
