
Commit 9c602da

Author: Sukhesh Halemane

vagrant setup for mesos,marathon,docker and netplugin
1 parent ea15b43 commit 9c602da

File tree

6 files changed: +345, -0 lines changed


mgmtfn/mesos-docker/README.md

+85
@@ -0,0 +1,85 @@

# Netplugin with Mesos Marathon

This document explains how to use Netplugin with Mesos Marathon. Currently, netplugin supports the Docker containerizer with Mesos Marathon.

## Getting started with Vagrant VMs

### Prerequisites
- VirtualBox 5.0.2 or higher
- Vagrant 1.7.4 or higher
- Ansible 1.9.4 or higher

### Step 1: Bring up the Vagrant VMs

```
$ git clone https://github.com/contiv/netplugin
$ cd netplugin/mgmtfn/mesos-docker
$ vagrant up
```

This brings up a two-node Vagrant setup with Mesos, Marathon and Docker.
Bringing up the Vagrant VMs and provisioning them can take a few minutes, since the VM images and the Mesos/Marathon binaries need to be downloaded. Please be patient.
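
Once `vagrant up` completes, a quick sanity check with the standard Vagrant command below (plain Vagrant, nothing netplugin-specific) should show both `node1` and `node2` in the `running` state:

```
$ vagrant status
```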

### Step 2: Build netplugin binaries

```
$ vagrant ssh node1

# Inside the VM
$ cd /opt/gopath/src/github.com/contiv/netplugin
$ make host-build
```
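
If the build succeeds, the binaries should land in the synced `bin/` directory (this assumes `make host-build` installs into `$GOBIN`, which the Vagrantfile maps to `/opt/gopath/bin`); a quick way to check:

```
$ ls /opt/gopath/bin
```

You should see at least the `netplugin` and `netmaster` binaries, and typically the `netctl` CLI as well.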

### Step 3: Start netplugin

```
$ cd mgmtfn/mesos-docker
$ ./startPlugin.sh
```

This starts the netplugin and netmaster processes on both VMs in the setup.
It also creates a network called `contiv` in which the Marathon containers will be launched.
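
To verify that the network exists, you can list the networks that netmaster knows about; the exact subcommand can vary between netctl versions, but something along these lines should show `contiv` with the 10.1.1.0/24 subnet that `startPlugin.sh` configures:

```
$ netctl net ls
```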

### Step 4: Launch containers

The `docker.json` file in the mgmtfn/mesos-docker directory has an example Marathon app definition.

```
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "libmesos/ubuntu",
      "parameters": [ { "key": "net", "value": "contiv" } ]
    }
  },
  "id": "ubuntu",
  "instances": 2,
  "constraints": [ ["hostname", "UNIQUE", ""] ],
  "cpus": 1,
  "mem": 128,
  "uris": [],
  "cmd": "while sleep 10; do date -u +%T; done"
}
```

This example application definition launches two ubuntu containers with a constraint that the two containers be placed on different hosts.
Note the special `net` parameter used in this specification, `"parameters": [ { "key": "net", "value": "contiv" } ]`. This tells Docker to launch the application in the `contiv` network that we created in step 3.

You can launch this application using the following command:

```
$ ./launch.sh docker.json
```
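
`launch.sh` is just a thin wrapper around Marathon's REST API (see the script further down in this commit); posting the app definition directly is essentially equivalent:

```
$ curl -X POST -H "Content-Type: application/json" http://localhost:8080/v2/apps -d@docker.json
```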

Launching the containers can take a few minutes depending on how long it takes to pull the image.
Once they are launched, you should be able to see the containers using the usual docker commands:

```
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
2a68fed77d5a        libmesos/ubuntu     "/bin/sh -c 'while sl"   About an hour ago   Up About an hour                        mesos-cce1c91f-65fb-457d-99af-5fdd4af14f16-S1.da634e3c-1fde-479a-b100-c61a498bcbe7
```
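
To confirm that a container was actually attached to the `contiv` network, inspect it by ID (the ID below is taken from the `docker ps` output above); this should print `contiv` if the `net` parameter was applied:

```
$ docker inspect --format '{{ .HostConfig.NetworkMode }}' 2a68fed77d5a
```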

## Notes

1. The Mesos and Marathon ports are forwarded from the Vagrant VM to the host machine. You can access their web UIs at http://localhost:5050 and http://localhost:8080 respectively (see the example below).
2. The netmaster web UI is forwarded to port 9090 on the host machine (http://localhost:9090).
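
Since Marathon's port is forwarded, its REST API is also reachable from the host; for example, the standard `/v2/apps` endpoint lists the deployed applications and their status:

```
$ curl http://localhost:8080/v2/apps
```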

mgmtfn/mesos-docker/Vagrantfile

+169
@@ -0,0 +1,169 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'fileutils'

# netplugin_synced_gopath="/opt/golang"
gopath_folder="/opt/gopath"

ANSIBLE_GROUPS = {
  "master" => ["node1"],
  "nodes" => ["node2"],
  "all_groups:children" => ["master", "nodes"]
}

provision_common = <<SCRIPT
## setup the environment file. Export the env-vars passed as args to 'vagrant up'
echo Args passed: [[ $@ ]]

echo -n "$1" > /etc/hostname
hostname -F /etc/hostname

/sbin/ip addr add "$3/24" dev eth1
/sbin/ip link set eth1 up
/sbin/ip link set eth2 up

echo 'export GOPATH=#{gopath_folder}' > /etc/profile.d/envvar.sh
echo 'export GOBIN=$GOPATH/bin' >> /etc/profile.d/envvar.sh
echo 'export GOSRC=$GOPATH/src' >> /etc/profile.d/envvar.sh
echo 'export PATH=$PATH:/usr/local/go/bin:$GOBIN' >> /etc/profile.d/envvar.sh
echo "export http_proxy='$4'" >> /etc/profile.d/envvar.sh
echo "export https_proxy='$5'" >> /etc/profile.d/envvar.sh
echo "export no_proxy=192.168.2.10,192.168.2.11,127.0.0.1,localhost,netmaster" >> /etc/profile.d/envvar.sh
echo "export CLUSTER_NODE_IPS=192.168.2.10,192.168.2.11" >> /etc/profile.d/envvar.sh
echo "export USE_RELEASE=$6" >> /etc/profile.d/envvar.sh

source /etc/profile.d/envvar.sh

# setup docker cluster store
cp #{gopath_folder}/src/github.com/contiv/netplugin/scripts/docker.service /lib/systemd/system/docker.service

# setup docker remote api
cp #{gopath_folder}/src/github.com/contiv/netplugin/scripts/docker-tcp.socket /etc/systemd/system/docker-tcp.socket
systemctl enable docker-tcp.socket

mkdir /etc/systemd/system/docker.service.d
echo "[Service]" | sudo tee -a /etc/systemd/system/docker.service.d/http-proxy.conf
echo "Environment=\\\"no_proxy=192.168.2.10,192.168.2.11,127.0.0.1,localhost,netmaster\\\" \\\"http_proxy=$http_proxy\\\" \\\"https_proxy=$https_proxy\\\"" | sudo tee -a /etc/systemd/system/docker.service.d/http-proxy.conf
sudo systemctl daemon-reload
sudo systemctl stop docker
systemctl start docker-tcp.socket
sudo systemctl start docker

if [ $# -gt 6 ]; then
    shift; shift; shift; shift; shift; shift
    echo "export $@" >> /etc/profile.d/envvar.sh
fi

# remove duplicate docker key
rm /etc/docker/key.json

(service docker restart) || exit 1

(ovs-vsctl set-manager tcp:127.0.0.1:6640 && \
 ovs-vsctl set-manager ptcp:6640) || exit 1

docker load --input #{gopath_folder}/src/github.com/contiv/netplugin/scripts/dnscontainer.tar
SCRIPT

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "contiv/centos71-netplugin"
  config.vm.box_version = "0.3.1"

  num_nodes = 2
  if ENV['CONTIV_NODES'] && ENV['CONTIV_NODES'] != "" then
    num_nodes = ENV['CONTIV_NODES'].to_i
  end
  base_ip = "192.168.33."
  node_ips = num_nodes.times.collect { |n| base_ip + "#{n+10}" }
  node_names = num_nodes.times.collect { |n| "node#{n+1}" }
  node_peers = []

  num_nodes.times do |n|
    node_name = node_names[n]
    node_addr = node_ips[n]
    node_peers += ["#{node_name}=http://#{node_addr}:2380,#{node_name}=http://#{node_addr}:7001"]
    consul_join_flag = if n > 0 then "-join #{node_ips[0]}" else "" end
    consul_bootstrap_flag = "-bootstrap-expect=3"
    swarm_flag = "slave"
    if num_nodes < 3 then
      if n == 0 then
        consul_bootstrap_flag = "-bootstrap"
        swarm_flag = "master"
      else
        consul_bootstrap_flag = ""
        swarm_flag = "slave"
      end
    end
    config.vm.define node_name do |node|
      # node.vm.hostname = node_name
      # create an interface for etcd cluster
      node.vm.network :private_network, ip: node_addr, virtualbox__intnet: "true", auto_config: false
      # create an interface for bridged network
      node.vm.network :private_network, ip: "0.0.0.0", virtualbox__intnet: "true", auto_config: false
      node.vm.provider "virtualbox" do |v|
        # make all nics 'virtio' to take advantage of the builtin vlan tag
        # support, which otherwise needs to be enabled in the Intel drivers,
        # which are used by default by virtualbox
        v.customize ['modifyvm', :id, '--nictype1', 'virtio']
        v.customize ['modifyvm', :id, '--nictype2', 'virtio']
        v.customize ['modifyvm', :id, '--nictype3', 'virtio']
        v.customize ['modifyvm', :id, '--nicpromisc2', 'allow-all']
        v.customize ['modifyvm', :id, '--nicpromisc3', 'allow-all']
        v.customize ['modifyvm', :id, '--paravirtprovider', "kvm"]
      end

      # mount the host directories
      node.vm.synced_folder "../../bin", File.join(gopath_folder, "bin")
      if ENV["GOPATH"] && ENV['GOPATH'] != ""
        node.vm.synced_folder "../../../../../", File.join(gopath_folder, "src"), rsync: true
      else
        node.vm.synced_folder "../../", File.join(gopath_folder, "src/github.com/contiv/netplugin"), rsync: true
      end

      node.vm.provision "shell" do |s|
        s.inline = "echo '#{node_ips[0]} netmaster' >> /etc/hosts; echo '#{node_addr} #{node_name}' >> /etc/hosts"
      end
      node.vm.provision "shell" do |s|
        s.inline = provision_common
        s.args = [node_name, ENV["CONTIV_NODE_OS"] || "", node_addr, ENV["http_proxy"] || "", ENV["https_proxy"] || "", ENV["USE_RELEASE"] || "", *ENV['CONTIV_ENV']]
      end
      provision_node = <<SCRIPT
## start etcd with generated config
set -x
(nohup etcd --name #{node_name} --data-dir /tmp/etcd \
 --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 --advertise-client-urls http://#{node_addr}:2379,http://#{node_addr}:4001 \
 --initial-advertise-peer-urls http://#{node_addr}:2380,http://#{node_addr}:7001 \
 --listen-peer-urls http://#{node_addr}:2380 \
 --initial-cluster #{node_peers.join(",")} --initial-cluster-state new \
 0<&- &>/tmp/etcd.log &) || exit 1

## start consul
(nohup consul agent -server #{consul_join_flag} #{consul_bootstrap_flag} \
 -bind=#{node_addr} -data-dir /opt/consul 0<&- &>/tmp/consul.log &) || exit 1

SCRIPT
      node.vm.provision "shell", run: "always" do |s|
        s.inline = provision_node
      end

      if n == (num_nodes - 1) then
        node.vm.provision "ansible" do |ansible|
          ansible.playbook = "playbook.yml"
          ansible.groups = ANSIBLE_GROUPS
          ansible.limit = "all"
        end
      end
      # forward the mesos (5050), marathon (8080) and netmaster (9999 -> 9090) ports
      if n == 0 then
        node.vm.network "forwarded_port", guest: 5050, host: 5050
        node.vm.network "forwarded_port", guest: 8080, host: 8080
        node.vm.network "forwarded_port", guest: 9999, host: 9090
      end
    end
  end
end

mgmtfn/mesos-docker/docker.json

+18
@@ -0,0 +1,18 @@
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "libmesos/ubuntu",
      "parameters": [
        { "key": "net", "value": "contiv" }
      ]
    }
  },
  "id": "ubuntu",
  "instances": 2,
  "constraints": [ ["hostname", "UNIQUE", ""] ],
  "cpus": 1,
  "mem": 128,
  "uris": [],
  "cmd": "while sleep 10; do date -u +%T; done"
}

mgmtfn/mesos-docker/launch.sh

+12
@@ -0,0 +1,12 @@
#!/bin/bash

# Launch a marathon job

USAGE="Usage: $0 <marathon-json-file>"

if [ $# -ne 1 ]; then
    echo "$USAGE"
    exit 1
fi

curl -X POST -H "Content-Type: application/json" http://localhost:8080/v2/apps -d @"$1"

mgmtfn/mesos-docker/playbook.yml

+54
@@ -0,0 +1,54 @@
---
- hosts: master
  remote_user: vagrant
  become: yes
  become_method: sudo
  tasks:
    # - name: upgrade system (redhat)
    #   yum:
    #     update_cache: true
    #     name: '*'
    #     state: latest
    - name: install mesosphere yum repo
      yum: name=http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm state=present
    - name: install zookeeper yum repo
      yum: name=http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm state=present
    - name: install zookeeper
      yum: pkg=zookeeper,zookeeper-server state=latest
    - name: configure zookeeper ID
      become_user: zookeeper
      shell: zookeeper-server-initialize --force --myid=1
    - name: install mesos and marathon packages
      yum: pkg=device-mapper-event-libs,mesos-0.27.0,marathon-0.14.0 state=latest
    - name: configure containerizers
      lineinfile: dest=/etc/mesos-slave/containerizers create=yes line="docker,mesos"
    - name: start zookeeper
      service: name=zookeeper-server state=started enabled=yes
    - name: start mesos-master
      service: name=mesos-master state=started enabled=yes
    - name: start mesos-slave
      service: name=mesos-slave state=started enabled=yes
    - name: start marathon
      service: name=marathon state=started enabled=yes

- hosts: nodes
  remote_user: vagrant
  become: yes
  become_method: sudo
  tasks:
    # - name: upgrade system (redhat)
    #   yum:
    #     update_cache: true
    #     name: '*'
    #     state: latest
    - name: install mesosphere yum repo
      yum: name=http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm state=present
    - name: install mesos packages
      yum: pkg=device-mapper-event-libs,mesos-0.27.0 state=latest
    - name: configure containerizers
      lineinfile: dest=/etc/mesos-slave/containerizers create=yes line="docker,mesos"
    - name: set zookeeper master
      replace: dest=/etc/mesos/zk regexp="localhost" replace="192.168.33.10"
    - name: start mesos-slave
      service: name=mesos-slave state=started enabled=yes

mgmtfn/mesos-docker/startPlugin.sh

+7
@@ -0,0 +1,7 @@
#!/bin/bash
# Start netplugin on this setup

../../scripts/python/startPlugin.py -nodes 192.168.33.10,192.168.33.11

# Create a network to launch docker containers
netctl net create contiv -s 10.1.1.0/24
