calico install

Installation preparation

Three servers are used:
172.16.6.100
172.16.6.101
172.16.6.102

Download the calicoctl tool from:
https://github.com/projectcalico/calico-containers/releases
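
A minimal install sketch, assuming a Linux amd64 host and the v1.2.1 release used below (the exact asset URL is an assumption; pick the binary matching your platform from the releases page):

#!/bin/sh
# Assumed release asset; adjust version/platform as needed.
curl -L -o /usr/local/bin/calicoctl \
  https://github.com/projectcalico/calico-containers/releases/download/v1.2.1/calicoctl
chmod +x /usr/local/bin/calicoctl
calicoctl version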

Starting the calico/node containers

Node 1:
Startup script:

#!/bin/sh
IP=172.16.6.100
calicoctl node run \
--node-image=quay.io/calico/node:v1.2.1 \
--ip=$IP --name=calico-100

Node 2:
Startup script:

#!/bin/sh
IP=172.16.6.101
calicoctl node run \
--node-image=quay.io/calico/node:v1.2.1 \
--ip=$IP --name=calico-101

Node 3:
Startup script:

#!/bin/sh
IP=172.16.6.102
calicoctl node run \
--node-image=quay.io/calico/node:v1.2.1 \
--ip=$IP --name=calico-102
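
All three scripts assume calicoctl on each node can already reach a shared etcd datastore. A minimal sketch of that configuration, assuming etcd listens at http://172.16.6.100:2379 (an assumption; adjust to your cluster):

#!/bin/sh
# Assumed etcd endpoint; calicoctl 1.x reads /etc/calico/calicoctl.cfg by default.
mkdir -p /etc/calico
cat > /etc/calico/calicoctl.cfg <<'EOF'
apiVersion: v1
kind: calicoApiConfig
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://172.16.6.100:2379"
EOF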

Calico network configuration

From version 1.0 onward, calicoctl's commands differ from earlier releases. Everything calicoctl manages is now a resource (resource): IP pools, profiles, policies and so on are all resources. Resources are defined in YAML or JSON, created or applied with calicoctl create or calicoctl apply, and inspected with calicoctl get.
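
For example, existing resources can be listed like this (resource names as used by calicoctl 1.x):

calicoctl get ippool
calicoctl get profile
calicoctl get policy -o yaml

Once all three nodes are running, check BGP peering with calicoctl node status. Run on node 3 (172.16.6.102), the other two nodes show up as established mesh peers: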

# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+------------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+--------------+-------------------+-------+------------+-------------+
| 172.16.6.100 | node-to-node mesh | up    | 2017-07-17 | Established |
| 172.16.6.101 | node-to-node mesh | up    | 2017-07-17 | Established |
+--------------+-------------------+-------+------------+-------------+
IPv6 BGP status
No IPv6 peers found.
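
The container addresses seen later in this walkthrough (192.168.x.x) come from Calico's IPv4 pool, which can be inspected as well (assuming the default pool created by calicoctl node run was left in place):

calicoctl get ippool -o wide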

Creating the Docker networks

Create two Calico networks named apigate and normal:

docker network create --driver calico --ipam-driver calico-ipam apigate
docker network create --driver calico --ipam-driver calico-ipam normal
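
Note that global-scope networks with the calico libnetwork driver assume every Docker daemon is configured with a cluster store pointing at the same etcd. A sketch of that prerequisite, reusing the etcd address assumed earlier:

#!/bin/sh
# Assumed etcd address; apply on every node, then restart Docker.
cat > /etc/docker/daemon.json <<'EOF'
{
  "cluster-store": "etcd://172.16.6.100:2379"
}
EOF
systemctl restart docker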

List the networks:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e1ab5692360c        apigate             calico              global
ba591edc3f88        bridge              bridge              local
42410241dcf9        host                host                local
0a034b98a86b        none                null                local
1a13fb2765d9        normal              calico              global

Defining profiles

Create a profile.yaml file:

- apiVersion: v1
  kind: profile
  metadata:
    name: normal
    labels:
      role: normal
- apiVersion: v1
  kind: profile
  metadata:
    name: apigate
    labels:
      role: apigate

Import the resource definitions:
calicoctl create -f profile.yaml
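
Verify that both profiles exist; their role labels are what the policy selectors below match on:

calicoctl get profile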

Defining policies

Create a policy.yaml file:

- apiVersion: v1
  kind: policy
  metadata:
    name: normal
  spec:
    order: 0
    selector: role == 'normal'
    ingress:
    - action: allow
      source:
        selector: role == 'normal'
    egress:
    - action: allow
- apiVersion: v1
  kind: policy
  metadata:
    name: apigate
  spec:
    order: 0
    selector: role == 'apigate'
    ingress:
    - action: allow
    egress:
    - action: allow

Here both ingress and egress for the apigate network are allow, i.e. it is reachable by everything (the hosts as well as the containers on the Calico networks). The normal network, by contrast, only accepts ingress traffic from endpoints labelled role == 'normal'.
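
The policy file is presumably imported the same way as the profiles (the import is not shown above):

calicoctl create -f policy.yaml
calicoctl get policy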

Starting application containers

Start an apigate container on node 1:

docker run -d --name apigate-100 --net=apigate -e="CONSUL_CLUSTER=172.16.6.100" 172.16.6.100:5000/td/apigate:0.0.3.12

# docker inspect apigate-100 |grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "192.168.171.216",

The container's IP is 192.168.171.216.
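
The same address can also be read with a Go template instead of grepping (apigate here is the network created above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' apigate-100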

Start a tdesk container on node 2:

# docker run -d --name gtd_tdesk-101 --net=normal -e="CONSUL_CLUSTER=172.16.6.100" -v /app/resource:/opt/gorp/webnode/public 172.16.6.100:5000/td/gtd/tdesk:0.0.2.21
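
Calico's view of both endpoints can be checked from any node (workloadEndpoint is the calicoctl 1.x resource name for container interfaces):

calicoctl get workloadEndpoint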

Test connectivity between the containers:

# docker exec gtd_tdesk-101 ping 192.168.171.216
PING 192.168.171.216 (192.168.171.216) 56(84) bytes of data.
64 bytes from 192.168.171.216: icmp_seq=1 ttl=62 time=0.138 ms
64 bytes from 192.168.171.216: icmp_seq=2 ttl=62 time=0.088 ms
64 bytes from 192.168.171.216: icmp_seq=3 ttl=62 time=0.092 ms
64 bytes from 192.168.171.216: icmp_seq=4 ttl=62 time=0.087 ms

Test host-to-container connectivity:
from node 2's host, ping the container on node 1:

# ping 192.168.171.216
PING 192.168.171.216 (192.168.171.216) 56(84) bytes of data.
64 bytes from 192.168.171.216: icmp_seq=1 ttl=63 time=0.119 ms
64 bytes from 192.168.171.216: icmp_seq=2 ttl=63 time=0.082 ms
64 bytes from 192.168.171.216: icmp_seq=3 ttl=63 time=0.074 ms
64 bytes from 192.168.171.216: icmp_seq=4 ttl=63 time=0.079 ms
^C
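
Conversely, the normal policy only allows ingress from endpoints labelled role == 'normal', so a ping from the apigate container to the tdesk container's Calico address would be expected to fail. A quick check (look up the tdesk IP on node 2 first, then run the ping on node 1; the address placeholder below is not from this setup):

# on node 2: find the tdesk container's Calico IP
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' gtd_tdesk-101
# on node 1: expected to time out if the normal policy is enforced
docker exec apigate-100 ping -c 3 <tdesk-ip>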