This walkthrough uses an Ubuntu 16.04 KVM virtual machine.
(1) Replace the software sources
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
New contents of /etc/apt/sources.list:
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
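The ten mirror entries above can equivalently be generated with a short loop instead of typing them out (a sketch; same Aliyun mirror and xenial suites as above):

```shell
# Emit the deb and deb-src lines for every xenial suite used above.
MIRROR=http://mirrors.aliyun.com/ubuntu/
for suite in xenial xenial-security xenial-updates xenial-proposed xenial-backports; do
  echo "deb ${MIRROR} ${suite} main restricted universe multiverse"
  echo "deb-src ${MIRROR} ${suite} main restricted universe multiverse"
done
```

Pipe the output through `sudo tee /etc/apt/sources.list` to install it.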
sudo apt-get update && sudo apt-get upgrade
(2) Hostname resolution
Edit /etc/hosts and add an entry for this node (its hostname is ceph, as the status output in step (8) shows):
...
192.168.122.124 ceph
(3) Install ceph-deploy
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
sudo apt-get install ceph-deploy
Available Ceph versions are listed at https://download.ceph.com/
(4) Deploy the monitor
mkdir ceph-cluster && cd ceph-cluster
sudo ceph-deploy new ceph
sudo ceph-deploy install ceph
sudo ceph-deploy mon create-initial
(ceph here is the node's hostname, as seen in the status output in step (8).)
sudo ceph-deploy admin ceph
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph config set mon auth_allow_insecure_global_id_reclaim false
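With the admin keyring distributed and readable, the local client should now be able to talk to the monitor; a quick sanity check (a sketch, run on the node):

```shell
# Should print HEALTH_OK; with insecure global_id reclaim disabled above,
# the corresponding health warning also disappears.
ceph health
ceph health detail   # more verbose output if anything is degraded
```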
(5) Deploy the manager
sudo ceph-deploy mgr create ceph
(6) Deploy the OSDs
Three spare virtual disks are attached here: /dev/vdb, /dev/vdc, and /dev/vdd, one OSD per disk.
sudo ceph-deploy disk zap ceph /dev/vdb
sudo ceph-deploy disk zap ceph /dev/vdc
sudo ceph-deploy disk zap ceph /dev/vdd
sudo ceph-deploy osd create --data /dev/vdb ceph
sudo ceph-deploy osd create --data /dev/vdc ceph
sudo ceph-deploy osd create --data /dev/vdd ceph
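Once the three OSDs are created they should all register as up; a quick verification (a sketch of typical checks, matching the 60 GiB raw capacity shown in step (8)):

```shell
ceph osd tree   # expect osd.0, osd.1, osd.2 listed with status "up"
ceph osd df     # per-OSD capacity; three 20 GiB disks, 60 GiB raw in total
```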
(7) Deploy radosgw
sudo apt-get install radosgw
sudo ceph-deploy rgw create ceph
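In Nautilus, radosgw serves S3/Swift over civetweb on port 7480 by default, so a plain HTTP request against the node should get an answer (a sketch; hostname ceph as elsewhere in this guide):

```shell
# Anonymous S3 request; expect an XML ListAllMyBucketsResult body
curl http://ceph:7480/
```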
(8) Check the cluster status
$ ceph -s
  cluster:
    id:     39d6281e-2563-4f00-9369-0122a03e24d0
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph (age 11m)
    mgr: ceph(active, since 1.42257s)
    osd: 3 osds: 3 up (since 4m), 3 in (since 4m)
    rgw: 1 daemon active (ceph)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:
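The cluster has no pools yet; a minimal round-trip through RADOS makes a good smoke test (a sketch; the pool name test and PG count 8 are illustrative choices):

```shell
ceph osd pool create test 8         # create a pool with 8 placement groups
echo 'hello ceph' > /tmp/in.txt
rados -p test put obj1 /tmp/in.txt  # store one object
rados -p test get obj1 /tmp/out.txt
diff /tmp/in.txt /tmp/out.txt && echo OK
rados -p test rm obj1               # clean up
```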



