
Server Configuration: LXC Management



Preface
I. Partitioning
    1. Create the partition
    2. Format the partition
II. LXD
    1. Install
    2. Initialize
    3. Create a container
    4. Configure SSH
    5. Upgrade Ubuntu
    6. Add GPUs
    7. Install the driver
    8. Install a desktop environment
    9. Build an image
    10. Shared storage volume
Summary


Preface

Since I broke the previous server, I have to configure LXC management on the server again from scratch.


I. Partitioning

1. Create the partition

Goal: carve out a 3 TB partition on which to host LXC.
First look at the disks:

sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors

This 3.7 TiB disk is the one to partition. Its partition table defaults to MBR, which cannot hold partitions larger than 2 TiB via fdisk, so use the parted tool to convert the table from MBR to GPT.
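The 2 TiB ceiling comes from MBR's 32-bit sector addresses: with 512-byte sectors, the largest addressable size is 2^32 × 512 bytes. A quick check of that arithmetic:

```shell
# MBR stores sector counts in 32-bit fields; with 512-byte sectors
# the largest addressable size is 2^32 * 512 bytes.
max_bytes=$(( (1 << 32) * 512 ))
echo "$max_bytes bytes"                 # 2199023255552 bytes
echo "$(( max_bytes / (1 << 40) )) TiB" # exactly 2 TiB
```

Hence the 3.7 TiB disk needs GPT, which uses 64-bit addressing.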

sudo parted /dev/sda #enter parted
mklabel gpt #set the partition table to GPT
mkpart logical 0 -1 #create one partition spanning the whole disk; existing data is lost here, so be careful
print #inspect the result

By default this creates a single partition, /dev/sda1, spanning the entire disk. That is not the 3 TB we want, so delete it and repartition:

rm 1 #delete partition number 1
unit GB #switch the default unit to GB
mkpart #start the interactive partition-creation dialog
Partition name?  []? pv1
File system type?  [ext2]? zfs
Start? 0
End? 3072GB
print #print the result when the partition is done
Model: ATA HGST HUS726T4TAL (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  3072GB  3072GB  zfs          pv1

Partitioning is done in parted; now verify it with fdisk:

sudo fdisk -l
Device     Start        End    Sectors  Size Type
/dev/sda1   2048 6000001023 5999998976  2.8T Linux filesystem

fdisk shows the matching partition, so the partitioning step is complete.
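As a cross-check, the sector count fdisk reports matches parted's 3072 GB figure (numbers taken from the output above; fdisk's "2.8T" is TiB):

```shell
# /dev/sda1: 5999998976 sectors of 512 bytes each
bytes=$(( 5999998976 * 512 ))
echo "$bytes bytes"   # 3071999475712, i.e. ~3072 GB
awk -v b="$bytes" 'BEGIN { printf "%.1f TiB\n", b / 1024^4 }'   # 2.8 TiB
```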

2. Format the partition

Format the newly created sda1 as ext4:

sudo mkfs -t ext4 /dev/sda1
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 749999872 4k blocks and 187506688 inodes
Filesystem UUID: a66ff254-e95f-4981-921e-a090e7aff4a1
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Look up the UUID of /dev/sda1:

sudo blkid
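To capture just the UUID non-interactively, blkid can print a single field (`sudo blkid -s UUID -o value /dev/sda1`). The same extraction can be done with sed; shown here against a canned output line using the UUID from this run (the exact fields blkid prints vary by device, so the line below is a simplified sketch):

```shell
# One line of blkid output, simulated with the values from this article:
line='/dev/sda1: UUID="a66ff254-e95f-4981-921e-a090e7aff4a1" TYPE="ext4" PARTLABEL="pv1"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"   # a66ff254-e95f-4981-921e-a090e7aff4a1
```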

Note sda1's UUID (it is unique to the filesystem). Prepare a directory to serve as the mount point in advance, /data1 in my case, then open the mount configuration file:

sudo vim /etc/fstab

Append at the end of the file:

UUID=a66ff254-e95f-4981-921e-a090e7aff4a1 /data1       ext4    defaults 0       0
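A malformed fstab can drop the next boot into emergency mode, so it is worth a sanity check before rebooting: on the live system, run `sudo mount -a` and confirm /data1 shows up in df. A minimal field check of the entry itself (the line below is the one added above):

```shell
# An fstab entry has six whitespace-separated fields:
# device, mount point, fs type, options, dump, pass.
entry='UUID=a66ff254-e95f-4981-921e-a090e7aff4a1 /data1 ext4 defaults 0 0'
nfields=$(printf '%s\n' "$entry" | awk '{ print NF }')
echo "$nfields fields"   # 6 fields
```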

Reboot, then check the mounts:

df
Filesystem      1K-blocks     Used  Available Use% Mounted on
udev             65774384        0   65774384   0% /dev
tmpfs            13166396    10880   13155516   1% /run
/dev/sdb2       921923300 46802332  828220132   6% /
tmpfs            65831968        0   65831968   0% /dev/shm
tmpfs                5120        4       5116   1% /run/lock
tmpfs            65831968        0   65831968   0% /sys/fs/cgroup
/dev/loop0         113536   113536          0 100% /snap/core/12725
tmpfs            13166392        0   13166392   0% /run/user/1000
/dev/sda1      2951859536    90140 2801753040   1% /data1

/dev/sda1 is mounted at /data1, confirming that automatic mounting at boot works. Finally, test unmounting and remounting:

sudo umount -l /dev/sda1
df
Filesystem     1K-blocks     Used Available Use% Mounted on
udev            65774384        0  65774384   0% /dev
tmpfs           13166396    10880  13155516   1% /run
/dev/sdb2      921923300 46802332 828220132   6% /
tmpfs           65831968        0  65831968   0% /dev/shm
tmpfs               5120        4      5116   1% /run/lock
tmpfs           65831968        0  65831968   0% /sys/fs/cgroup
/dev/loop0        113536   113536         0 100% /snap/core/12725
tmpfs           13166392        0  13166392   0% /run/user/1000

/dev/sda1 is gone from the list, so the unmount succeeded.

sudo mount /dev/sda1 /data1
df
Filesystem      1K-blocks     Used  Available Use% Mounted on
udev             65774384        0   65774384   0% /dev
tmpfs            13166396    10880   13155516   1% /run
/dev/sdb2       921923300 46802332  828220132   6% /
tmpfs            65831968        0   65831968   0% /dev/shm
tmpfs                5120        4       5116   1% /run/lock
tmpfs            65831968        0   65831968   0% /sys/fs/cgroup
/dev/loop0         113536   113536          0 100% /snap/core/12725
tmpfs            13166392        0   13166392   0% /run/user/1000
/dev/sda1      2951859536    90140 2801753040   1% /data1

It is mounted again; the partitioning work is complete.

II. LXD

1. Install

LXD is the successor to LXC; the package is named lxd, but the command line still uses lxc.
LXD can be installed either with apt or with snap.

I am more used to apt and tried it first, but the install kept failing because it could not reach snap. Evidently the apt package just wraps a snap install of lxd, so I installed directly with snap, taking the recommended 4.x series.

sudo snap install lxd

After installation, snap version shows the relevant version information:

ssme@ssme-server:~$ snap version
snap    2.54.3.2
snapd   2.54.3.2
series  16
ubuntu  20.04
kernel  5.4.0-100-generic

The process was not entirely smooth; I ran into two problems.
The first attempt failed with:

connect: connection refused

This is a plain connectivity problem; check that the network works. I got past it by configuring a proxy.
The second attempt failed with:

read: connection reset by peer

Literally, the peer reset the connection, which here meant the network configuration had changed underneath it. I had set the same proxy in three places: an export http_proxy in the shell, a line in ~/.bashrc, and an entry in /etc/environment (which snap reads). Even though all three pointed at the same address, after countless failures I stripped it back to a single one:

keep only the /etc/environment entry; clear the shell export with export http_proxy="" (or just open a new shell) and delete the proxy lines from ~/.bashrc

Retrying the install then succeeded. Painful. Even with it working, I still suspect this is a bug.
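Clearing the shell-level proxy looks like this (the proxy URL is a placeholder, not the one I actually used):

```shell
# Set a throwaway proxy, then clear it as described above.
export http_proxy="http://127.0.0.1:7890"
export http_proxy=""   # or: unset http_proxy https_proxy
echo "http_proxy='${http_proxy}'"   # http_proxy=''
```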

2. Initialize

First unmount the partition mounted earlier; it must not stay mounted here, since LXC has its own mounting mechanism:

sudo umount /dev/sda1

Create a ZFS storage pool on the block device /dev/sda1:

sudo lxc storage create zfs-pool zfs source=/dev/sda1

Check the result. No size was specified, so the pool takes the whole partition, about 2.7 TiB:

ssme@ssme-server:~$ sudo lxc storage info zfs-pool
info:
  description: ""
  driver: zfs
  name: zfs-pool
  space used: 1.56MiB
  total space: 2.69TiB
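The 2.69 TiB total is a bit below the raw partition size: 3072 GB (decimal) is about 2.79 TiB (binary), and ZFS additionally holds back a slice of the pool for its own bookkeeping (so-called slop space), so a slightly smaller usable total is expected. The unit conversion:

```shell
# 3072 GB (decimal, 10^9 bytes) expressed in TiB (binary, 2^40 bytes):
awk 'BEGIN { printf "%.2f TiB\n", 3072e9 / 1024^4 }'   # 2.79 TiB
```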

Now run lxd init. The second question asks whether to configure a new storage pool; we already created one on our chosen partition, so answer no (the default pool would otherwise land on a different disk):

ssme@ssme-server:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]: no
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
The requested network bridge "lxdbr0" already exists. Please choose another name.
What should the new bridge be called? [default=lxdbr0]: lxdbr0
The requested network bridge "lxdbr0" already exists. Please choose another name.
What should the new bridge be called? [default=lxdbr0]: lxdbb1
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
config: {}
networks:
- config:
    ipv4.address: auto
    ipv6.address: auto
  description: ""
  name: lxdbb1
  type: ""
  project: default
storage_pools: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbb1
      type: nic
  name: default
projects: []
cluster: null

Edit the default profile again to fix the bridge name (I mistyped it above, so I correct it here):

sudo lxc profile edit default
3. Create a container

Goal: create a container from the Ubuntu image on the Tsinghua (TUNA) mirror and connect to it over SSH.
Add the Tsinghua image mirror as a remote:

sudo lxc remote add tuna-images https://mirrors.tuna.tsinghua.edu.cn/lxc-images/ --protocol=simplestreams --public

List the images available from that remote:

sudo lxc image list tuna-images:

Create a container named alpha from the mirror's Ubuntu image. Here I hit an error; according to reports online it means the profile is misconfigured:

ssme@ssme-server:~$ sudo lxc launch tuna-images:ubuntu/18.04 alpha
Creating alpha
Error: Failed instance creation: Failed creating instance record: Failed initialising instance: Invalid devices: Failed detecting root disk device: No root device could be found

As the error demands, edit the profile and add a root entry specifying the root disk's location and maximum size:

sudo lxc profile edit default
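The exact edit is not shown above, but based on the error message the profile needs a root disk device pointing at the pool created earlier. A plausible sketch (the pool name comes from the step above; the size line is optional and its value here is purely an assumption):

```yaml
# Added under the profile's devices: section
devices:
  root:
    path: /
    pool: zfs-pool
    type: disk
    # size: 200GB   # optional cap on the container's root disk
```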


Creating the container alpha again now succeeds:

ssme@ssme-server:~$ sudo lxc launch tuna-images:ubuntu/18.04 alpha
Creating alpha
Starting alpha

4. Configure SSH

Enter the container:

sudo lxc exec alpha bash

We land in a root shell; the container also ships with a user named ubuntu.

Set passwords for both:

passwd root
passwd ubuntu

Install SSH:

apt install ssh

To reach the container over SSH we need the container's and the host's IP addresses. Since no bridged NIC was configured, the container cannot be reached from outside the host (its IP does not answer pings externally), so instead we expose it through a port listening on the host.
Exit the container:

exit

List the containers on the host:

ssme@ssme-server:~$ sudo lxc list
[sudo] password for ssme:
+-------+---------+----------------------+------+-----------+-----------+
| NAME  |  STATE  |         IPV4         | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+----------------------+------+-----------+-----------+
| alpha | RUNNING | 10.246.13.238 (eth0) |      | CONTAINER | 0         |
+-------+---------+----------------------+------+-----------+-----------+

The listing shows the container's IP is 10.246.13.238.
Check the host's IP:

ip addr

The host's IP is 10.83.17.189.
Now set up the port forwarding:

ssme@ssme-server:~$ sudo lxc config device add alpha proxy0 proxy listen=tcp:10.83.17.189:60601 connect=tcp:10.246.13.238:22 bind=host
[sudo] password for ssme:
Device proxy0 added to alpha
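The listen= and connect= arguments both use the form protocol:address:port. Pulling the pieces apart with shell parameter expansion (values from the command above):

```shell
listen="tcp:10.83.17.189:60601"      # host side of the proxy device
proto=${listen%%:*}                  # tcp
port=${listen##*:}                   # 60601
addr=${listen#*:}; addr=${addr%:*}   # 10.83.17.189
echo "$proto $addr $port"
```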

60601 is a port number we picked: the host listens on 60601 and forwards to port 22 (SSH's default) inside the container. With this binding in place we can SSH straight into the container. Inspecting the container's configuration shows the proxy device has been persisted:

sudo lxc config edit alpha
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Ubuntu bionic amd64 (20220228_07:42)
  image.os: Ubuntu
  image.release: bionic
  image.serial: "20220228_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 04d0148a87275d123d7d0972047d1216330a39513796caff1f1827943d3e7154
  volatile.eth0.host_name: veth9948d26c
  volatile.eth0.hwaddr: 00:16:3e:95:6e:3f
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 1333a63f-0370-4a5a-94a1-bc62ecde9f3c
devices:
  proxy0:
    bind: host
    connect: tcp:10.246.13.238:22
    listen: tcp:10.83.17.189:60601
    type: proxy
ephemeral: false
profiles:
- default
stateful: false

The GPU and disk-size settings configured in later sections will likewise be written into this file.
Connecting to the container over SSH now works:

ssme@ssme-server:~$ sudo ssh ubuntu@10.83.17.189 -p 60601
5. Upgrade Ubuntu

Switch the apt sources:

sudo mv /etc/apt/sources.list /etc/apt/sources.list.bak
sudo vim /etc/apt/sources.list

NetEase (163) mirror, bionic release:

deb http://mirrors.163.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ bionic-backports main restricted universe multiverse

Then refresh the package index; it's a good habit:

sudo apt update

Check the kernel and release versions:

ubuntu@alpha:~$ cat /proc/version
Linux version 5.4.0-100-generic (buildd@lcy02-amd64-002) (gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)) #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022
ubuntu@alpha:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.6 LTS
Release:        18.04
Codename:       bionic

The kernel and the release do not match: containers share the host's kernel, so this 18.04 container reports the host's 5.4 (20.04-era) kernel. To head off problems later, I upgraded the container straight to Ubuntu 20.04. The process is simple, mostly accepting the defaults:

apt update 
apt upgrade
apt dist-upgrade 
do-release-upgrade 

After the upgrade finishes, reboot and check the release again:

ubuntu@alpha:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

Successfully upgraded to Ubuntu 20.04 (focal). Switch sources again for the new release; again, a good habit:

# Source-code repos are commented out by default to speed up apt update; uncomment if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse

# Pre-release repo; enabling it is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse

These are the Tsinghua sources; note the release is now focal (20.04). The switching steps are the same as above.

6. Add GPUs

To pass all GPUs through to the container:

lxc config device add alpha gpu gpu

To pass through a specific GPU, first list the cards:

ssme@ssme-server:~$ nvidia-smi
Fri Mar  4 09:41:17 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:18:00.0 Off |                  N/A |
| 16%   29C    P8     3W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:3B:00.0 Off |                  N/A |
| 16%   29C    P8     7W / 250W |     18MiB / 11018MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  On   | 00000000:86:00.0 Off |                  N/A |
| 16%   29C    P8     9W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  On   | 00000000:AF:00.0 Off |                  N/A |
| 16%   29C    P8    15W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1561      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A      1561      G   /usr/lib/xorg/Xorg                  9MiB |
|    1   N/A  N/A      1646      G   /usr/bin/gnome-shell                6MiB |
|    2   N/A  N/A      1561      G   /usr/lib/xorg/Xorg                  4MiB |
|    3   N/A  N/A      1561      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+

Assign the first card to alpha:

lxc config device add alpha gpu0 gpu id=0

Alternatively, select the card by PCI address. Note that the first segment of the bus ID must be shortened from eight zeros to four; lxc appears to read only a four-digit domain, and pasting the full ID from nvidia-smi fails:

lxc config device add jellyfish gpu gpu pci=0000:18:00.0 
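The trimming described above amounts to dropping four of the eight leading zeros in the Bus-Id that nvidia-smi prints, leaving the conventional four-digit PCI domain:

```shell
busid="00000000:18:00.0"   # Bus-Id as printed by nvidia-smi
pci="${busid#0000}"        # strip the four extra leading zeros
echo "$pci"                # 0000:18:00.0
```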
7. Install the driver

With the device added, the container effectively has the GPU installed. Back inside the container, install the NVIDIA driver; its version must exactly match the host's driver version. For the procedure, see my other post on installing NVIDIA drivers on Ubuntu.
On Ubuntu it is very simple:

ubuntu@alpha:~$ nvidia-smi
Wed Mar  2 05:42:54 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01    Driver Version: 470.42.01    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:18:00.0 Off |                  N/A |
| 20%   37C    P8     4W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:3B:00.0 Off |                  N/A |
| 19%   38C    P8     9W / 250W |     18MiB / 11018MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  On   | 00000000:86:00.0 Off |                  N/A |
| 19%   37C    P8     8W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  On   | 00000000:AF:00.0 Off |                  N/A |
| 19%   37C    P8    15W / 250W |      5MiB / 11019MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|

Verifying nvcc, however, fails:

ubuntu@alpha:~$ nvcc -v
-bash: nvcc: command not found

CUDA is most likely not on the PATH. Open .bashrc and add the directory containing nvcc:

ubuntu@alpha:/usr/local/cuda/bin$ sudo vim ~/.bashrc

Append export PATH=$PATH:/usr/local/cuda/bin at the end,
then source the file and nvcc works:

ubuntu@alpha:/usr/local/cuda/bin$ source ~/.bashrc
ubuntu@alpha:/usr/local/cuda/bin$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
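Whether the PATH addition took effect can be checked with a small guard, independent of whether nvcc itself is installed:

```shell
# Append the CUDA bin directory and confirm it is on PATH.
export PATH="$PATH:/usr/local/cuda/bin"
if echo ":$PATH:" | grep -q ":/usr/local/cuda/bin:"; then
    echo "cuda bin dir is on PATH"
fi
```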
8. Install a desktop environment

Refresh the package index:

sudo apt update

Install the Ubuntu desktop without recommended packages (GNOME by default; a full install pulls in a lot of unrelated software):

sudo apt install --no-install-recommends ubuntu-desktop

For remote desktop access we use an install script; install git first to fetch what we need:

sudo apt install git
git clone https://github.com/shenuiuin/LXD_GPU_SERVER

Enter the directory and make the script executable:

cd LXD_GPU_SERVER/
sudo chmod a+x xrdp-installer-1.2.3.sh

The script downloads some files and expects a Downloads folder to exist:

mkdir -p ~/Downloads

Run the installer script:

./xrdp-installer-1.2.3.sh -c -l -s

Installation complete.

Exit the container and add a port binding for the remote desktop (much like the SSH one), then test the connection from outside:

ssme@ssme-server:~$ lxc config device add alpha proxy1 proxy listen=tcp:10.83.17.189:60611 connect=tcp:10.246.13.238:3389 bind=host

Connect with a remote-desktop client; the desktop is up and running.

9. Build an image

Turn the master container alpha into an image, so it can be cloned into containers for others to use.
Stop the container:

sudo lxc stop alpha

Confirm the container has stopped:

ssme@ssme-server:~$ sudo lxc list
+-------+---------+------+------+-----------+-----------+
| NAME  |  STATE  | IPV4 | IPV6 |   TYPE    | SNAPSHOTS |
+-------+---------+------+------+-----------+-----------+
| alpha | STOPPED |      |      | CONTAINER | 0         |

Publish the alpha container as an image named ubuntudemo:

ssme@ssme-server:~$ sudo lxc publish alpha --alias ubuntudemo --public
Instance published with fingerprint: 988a7ead01a8648ac1da593f306047ef926e327f1e7e1bca20fa5201b199299c

List the images; the new one is about 14 GB:

ssme@ssme-server:~$ sudo lxc image list
[sudo] password for ssme:
+------------+--------------+--------+--------------------------------------+--------------+-----------+------------+-----------------------------+
|   ALIAS    | FINGERPRINT  | PUBLIC |             DESCRIPTION              | ARCHITECTURE |   TYPE    |    SIZE    |         UPLOAD DATE         |
+------------+--------------+--------+--------------------------------------+--------------+-----------+------------+-----------------------------+
| ubuntudemo | 988a7ead01a8 | yes    | Ubuntu bionic amd64 (20220228_07:42) | x86_64       | CONTAINER | 14020.81MB | Mar 2, 2022 at 7:07am (UTC) |
+------------+--------------+--------+--------------------------------------+--------------+-----------+------------+-----------------------------+
|            | 04d0148a8727 | no     | Ubuntu bionic amd64 (20220228_07:42) | x86_64       | CONTAINER | 105.35MB   | Mar 1, 2022 at 7:18am (UTC) |

Now clone a container from the image, named jellyfish:

ssme@ssme-server:~$ sudo lxc launch 988a7ead01a8 jellyfish
Creating jellyfish
Starting jellyfish

The clone jellyfish has the same internal environment as alpha, but its device configuration is empty, so it must be configured:

SSH: see section 4 of this chapter. Remote desktop: see section 8. GPU: see section 6.
SSH:

ssme@ssme-server:~$ sudo lxc config device add jellyfish proxy0 proxy listen=tcp:10.83.17.189:50601 connect=tcp:10.246.13.245:22 bind=host

Remote desktop:

ssme@ssme-server:~$ lxc config device add jellyfish proxy1 proxy listen=tcp:10.83.17.189:50611 connect=tcp:10.246.13.245:3389 bind=host

GPU:

ssme@ssme-server:~$ lxc config device add jellyfish  gpu0 gpu pci=0000:18:00.0

That completes the container clone.

10. Shared storage volume

Set up a storage volume shared between the containers alpha and jellyfish.
Create the shared volume, named dataset-vol:

lxc storage volume create zfs-pool dataset-vol

Set the volume's size:

ssme@ssme-server:~$ lxc storage volume set zfs-pool dataset-vol size=300GB

Attach the custom volume to the alpha container. Here data is an arbitrary device name and /data1 is the mount path inside the container; stop the container before attaching:

lxc stop alpha
lxc storage volume attach zfs-pool dataset-vol alpha data /data1

A successful attach prints nothing. Enter the container, go to the shared directory, and create a folder as a test:

sudo ssh ubuntu@10.83.17.189 -p 60601
cd /data1
sudo mkdir test

Likewise attach the shared volume to jellyfish:

lxc stop jellyfish
lxc storage volume attach zfs-pool dataset-vol jellyfish data /data1

Then look in jellyfish's /data1 for the test folder:

ubuntu@jellyfish:~$ sudo ls /data1
test

The test folder is there, so the shared storage volume works.

Summary

Working through this whole server setup involved plenty of detours, but I learned a great deal about computing along the way. It all started with me breaking the server; I'm grateful my mentor didn't give up on me and helped me through every problem.
Reference: https://github.com/shenuiuin/LXD_GPU_SERVER
