Highly Available OpenStack (Queens) Cluster - 1. Cluster Environment

References:

  1. Install Guide: https://docs.openstack.org/install-guide/
  2. OpenStack High Availability Guide: https://docs.openstack.org/ha-guide/index.html
  3. Understanding Pacemaker: http://www.cnblogs.com/sammyliu/p/5025362.html

I. Environment

1. Components

| Component | Version | Remark |
| --------- | ------- | ------ |
| centos | 7.4 | controller: 4c12g (test environment; 8 GB of RAM was verified to be insufficient for all the services; disk space is not sized here either, but production environments generate a large volume of logs and place real demands on backend storage)<br>compute: 8c8g (test environment)<br>The yum repositories are set to the domestic aliyun mirrors: https://opsx.alibaba.com/mirror |
| openstack | queens | |
| ceph | v12.2.4 luminous | |

2. Topology (logical)

  1. The controller nodes run keystone, glance, horizon, the nova & neutron & cinder management components, ceph-mon & ceph-mgr (not OpenStack services), and the OpenStack base services;
  2. The compute nodes run nova-compute, neutron-linuxbridge-agent, cinder-volume (later verified: with a shared-storage backend, deploying cinder-volume on the controller nodes is recommended, where pacemaker can control its run mode; at the time of writing, this validation environment ran cinder-volume on the compute nodes), etc., plus kvm for compute virtualization, ceph-osd, and so on;
  3. Controller node networks:

    Management network: carries host OS management, the APIs, ceph-public, etc.; if the production environment allows it, give each logical network a dedicated physical network, split the API into admin/internal/public interfaces, and expose only the public interface to clients;

    External network: mainly for guest OS access to the internet/outside world and for floating IPs;

    Tenant (VM) tunnel network (tunnel and vlan networks can coexist, or pick one of the two): the network over which guest OSes talk to each other, using vxlan/gre, etc.;

    Tenant (VM) vlan network (tunnel and vlan networks can coexist, or pick one of the two): the network over which guest OSes talk to each other, using vlans;

  4. Compute node networks:

    Management network: carries host OS management, the APIs, ceph-public, etc.;

    Storage network: internal storage-cluster communication for data replication and synchronization, with no direct connection to the outside;

    Tenant (VM) tunnel network (tunnel and vlan networks can coexist, or pick one of the two): the network over which guest OSes talk to each other, using vxlan/gre, etc.;

    Tenant (VM) vlan network (tunnel and vlan networks can coexist, or pick one of the two): the network over which guest OSes talk to each other, using vlans;

  5. Self-service networks provide tenant networking; provider networks do not support private tenant networks and must rely on external infrastructure for L3 routing and value-added services (lbaas, fwaas, etc.);
  6. haproxy provides high availability at the front end (see the sketch after this list);
  7. Stateless services such as the xxx-api processes run active/active; stateful services such as the neutron-xxx-agents and cinder-volume are best run active/passive (with haproxy in front, successive requests from a client may be forwarded to different controller nodes, and a request load-balanced to a controller node holding none of the relevant state can fail); services with built-in clustering, such as rabbitmq and memcached, simply use their own mechanisms.
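A minimal haproxy.cfg fragment illustrating the two run modes in item 7 (a sketch only: the VIP 172.30.200.30 is a placeholder not defined in this section; the backend addresses follow the plan in section 3, and front-end port 5673 for rabbitmq is the one opened in the iptables section below):

# stateless API (glance-api here): active/active, requests balanced across all three nodes
listen glance_api_cluster
    bind 172.30.200.30:9292
    balance roundrobin
    server controller01 172.30.200.31:9292 check inter 2000 rise 2 fall 5
    server controller02 172.30.200.32:9292 check inter 2000 rise 2 fall 5
    server controller03 172.30.200.33:9292 check inter 2000 rise 2 fall 5

# stateful service fronted by haproxy: active/passive, the "backup" servers
# receive traffic only after the first server fails its health checks
listen rabbitmq_cluster
    bind 172.30.200.30:5673
    balance roundrobin
    server controller01 172.30.200.31:5672 check inter 2000 rise 2 fall 5
    server controller02 172.30.200.32:5672 backup check inter 2000 rise 2 fall 5
    server controller03 172.30.200.33:5672 backup check inter 2000 rise 2 fall 5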

3. Overall plan

Host: controller01

IP:
  eth0 (Management + API + Message + Storage Public Network): 172.30.200.31
  eth1 (External Network): 172.30.201.31
  eth2 (Tunnel Tenant Network): 10.0.0.31
  eth3 (VLAN Tenant Network)

Service:
  1. keystone
  2. glance-api, glance-registry
  3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy
  4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent
  5. cinder-api, cinder-scheduler
  6. dashboard
  7. ceph-mon, ceph-mgr
  8. mariadb, rabbitmq, memcached, etc.

Remark:
  1. Control node: keystone, glance, horizon, and the nova & neutron management components;
  2. Network node: VM networking, L2/L3, dhcp, routing, NAT, etc.;
  3. Storage node: scheduling, monitoring (ceph), and related components;
  4. OpenStack base services

Host: controller02

IP:
  eth0 (Management + API + Message + Storage Public Network): 172.30.200.32
  eth1 (External Network): 172.30.201.32
  eth2 (Tunnel Tenant Network): 10.0.0.32
  eth3 (VLAN Tenant Network)

Service:
  1. keystone
  2. glance-api, glance-registry
  3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy
  4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent
  5. cinder-api, cinder-scheduler
  6. dashboard
  7. ceph-mon, ceph-mgr
  8. mariadb, rabbitmq, memcached, etc.

Remark:
  1. Control node: keystone, glance, horizon, and the nova & neutron management components;
  2. Network node: VM networking, L2/L3, dhcp, routing, NAT, etc.;
  3. Storage node: scheduling, monitoring (ceph), and related components;
  4. OpenStack base services

Host: controller03

IP:
  eth0 (Management + API + Message + Storage Public Network): 172.30.200.33
  eth1 (External Network): 172.30.201.33
  eth2 (Tunnel Tenant Network): 10.0.0.33
  eth3 (VLAN Tenant Network)

Service:
  1. keystone
  2. glance-api, glance-registry
  3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy
  4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent
  5. cinder-api, cinder-scheduler
  6. dashboard
  7. ceph-mon, ceph-mgr
  8. mariadb, rabbitmq, memcached, etc.

Remark:
  1. Control node: keystone, glance, horizon, and the nova & neutron management components;
  2. Network node: VM networking, L2/L3, dhcp, routing, NAT, etc.;
  3. Storage node: scheduling, monitoring (ceph), and related components;
  4. OpenStack base services

Host: compute01

IP:
  eth0 (Management + Message + Storage Public Network): 172.30.200.41
  eth1 (Storage Cluster Network): 10.0.254.41
  eth2 (Tunnel Tenant Network): 10.0.0.41
  eth3 (VLAN Tenant Network)

Service:
  1. nova-compute
  2. neutron-linuxbridge-agent
  3. cinder-volume (with a shared-storage backend, deploying on the controller nodes is recommended)
  4. ceph-osd

Remark:
  1. Compute node: hypervisor (kvm);
  2. Network node: VM networking, etc.;
  3. Storage node: volume service and related components

Host: compute02

IP:
  eth0 (Management + Message + Storage Public Network): 172.30.200.42
  eth1 (Storage Cluster Network): 10.0.254.42
  eth2 (Tunnel Tenant Network): 10.0.0.42
  eth3 (VLAN Tenant Network)

Service:
  1. nova-compute
  2. neutron-linuxbridge-agent
  3. cinder-volume (with a shared-storage backend, deploying on the controller nodes is recommended)
  4. ceph-osd

Remark:
  1. Compute node: hypervisor (kvm);
  2. Network node: VM networking, etc.;
  3. Storage node: volume service and related components

Host: compute03

IP:
  eth0 (Management + Message + Storage Public Network): 172.30.200.43
  eth1 (Storage Cluster Network): 10.0.254.43
  eth2 (Tunnel Tenant Network): 10.0.0.43
  eth3 (VLAN Tenant Network)

Service:
  1. nova-compute
  2. neutron-linuxbridge-agent
  3. cinder-volume (with a shared-storage backend, deploying on the controller nodes is recommended)
  4. ceph-osd

Remark:
  1. Compute node: hypervisor (kvm);
  2. Network node: VM networking, etc.;
  3. Storage node: volume service and related components

II. Base Environment

1. Configure hosts

# Keep /etc/hosts identical on all nodes; controller01 shown as the example
[root@controller01 ~]# cat /etc/hosts
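The file content is not shown above; based on the management-network plan in section 3, it would look something like this sketch (adjust aliases and any VIP entry to your environment):

127.0.0.1       localhost localhost.localdomain
172.30.200.31   controller01
172.30.200.32   controller02
172.30.200.33   controller03
172.30.200.41   compute01
172.30.200.42   compute02
172.30.200.43   compute03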

2. Configure NTP

# Keep the clocks on all nodes synchronized; controller01 shown as the example
[root@controller01 ~]# yum install chrony -y

# Edit /etc/chrony.conf: set "172.20.0.252" as the clock source and add the 3 controller
# nodes as "backup" clock sources; allow hosts on the "172.30.200.0/24" network to
# synchronize their clocks from this node
[root@controller01 ~]# egrep -v "^$|^#" /etc/chrony.conf 
server 172.20.0.252 iburst
server controller01 iburst
server controller02 iburst
server controller03 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 172.30.200.0/24
logdir /var/log/chrony

# Enable at boot, then restart
[root@controller01 ~]# systemctl enable chronyd.service
[root@controller01 ~]# systemctl restart chronyd.service

# Check status
[root@controller01 ~]# systemctl status chronyd.service
[root@controller01 ~]# chronyc sources -v
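The remaining nodes get the same treatment (on the compute nodes it is enough to point the "server" entries at the controllers; "allow" is only needed on nodes that serve time). A convenience sketch, assuming passwordless ssh from controller01:

# after preparing each node's /etc/chrony.conf, enable and restart chronyd everywhere
for node in controller02 controller03 compute01 compute02 compute03; do
    ssh root@${node} "yum install chrony -y && systemctl enable chronyd && systemctl restart chronyd"
done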

3. Configure OpenStack packages

# Install the queens yum repository
[root@controller01 ~]# yum install centos-release-openstack-queens -y
[root@controller01 ~]# yum upgrade -y

# Install the openstack client
[root@controller01 ~]# yum install python-openstackclient -y

# openstack-selinux is needed when selinux is enabled; selinux is set to disabled in this setup
[root@controller01 ~]# yum install openstack-selinux -y
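A quick sanity check that the repository and client are in place (a sketch; versions will vary with the packaging):

# confirm the queens repo is enabled and the client runs
[root@controller01 ~]# yum repolist enabled | grep -i openstack
[root@controller01 ~]# openstack --version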

4. Configure iptables

# Configure iptables identically on all nodes in advance; controller01 shown as the example.
# The base environment already uses iptables in place of the firewalld that ships with
# centos 7.x, and selinux is disabled (the switch itself is sketched after this section);
[root@controller01 ~]# vim /etc/sysconfig/iptables

# mariadb
# tcp3306: service listening port;
# tcp&udp4567: tcp for data sync replication; multicast replication uses both tcp and udp;
# tcp4568: incremental state transfer;
# tcp4444: other state snapshot transfer;
# tcp9200: heartbeat check
-A INPUT -p tcp -m state --state NEW -m tcp --dport 3306 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4567:4568 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 4567 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT

# rabbitmq
# tcp4369: cluster neighbor discovery;
# tcp5671,5672: used by AMQP 0.9.1 and 1.0 clients;
# tcp5673: not a default rabbitmq port; used here as the haproxy front-end listening port,
#   to avoid the case where a backend service and haproxy on the same node cannot both
#   start; unnecessary if rabbitmq's own clustering is used;
# tcp15672: http api and rabbitmqadmin access, the latter only with the management plugin enabled;
# tcp25672: erlang distributed node/tool communication
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4369 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5671:5673 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15672:15673 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 25672 -j ACCEPT

# memcached
# tcp11211: service listening port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 11211 -j ACCEPT

# pcs
# tcp2224: pcsd web management port (create, view, and delete resources via the web UI);
#   the port value is set in /usr/lib/pcsd/ssl.rb;
# udp5404:5405: corosync cluster multicast communication ports
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2224 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 5404:5405 -j ACCEPT

# haproxy
# tcp1080: haproxy listening port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 1080 -j ACCEPT

# dashboard
# tcp80: dashboard listening port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

# keystone
# tcp35357: admin-api port;
# tcp5000: public/internal-api port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 35357 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT

# glance
# tcp9191: glance-registry port;
# tcp9292: glance-api port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9191 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT

# nova
# tcp8773: nova-ec2-api port;
# tcp8774: nova-compute-api port;
# tcp8775: nova-metadata-api port;
# tcp8778: placement-api port;
# tcp6080: vncproxy port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8773:8775 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8778 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT

# cinder
# tcp8776: cinder-api port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8776 -j ACCEPT

# neutron
# tcp9696: neutron-api port;
# udp4789: vxlan destination port
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9696 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 4789 -j ACCEPT

# ceph
# tcp6789: ceph-mon port;
# tcp6800~7300: ceph-osd ports
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6789 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6800:7300 -j ACCEPT

[root@controller01 ~]# service iptables restart
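The replacement of firewalld with iptables and the disabling of selinux assumed at the top of this section are not shown above; on centos 7.x the switch can be done roughly as follows (a sketch, run on every node before loading the rules above):

# stop and disable the stock firewalld, install the iptables-services package instead
systemctl stop firewalld && systemctl disable firewalld
yum install iptables-services -y
systemctl enable iptables.service

# selinux: permissive for the current session, disabled after the next reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config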