Linux: Adding a New Disk to a Server and Allocating It to a Filesystem

Use df -h to check current disk space and usage:

[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home

/dev/mapper/VolGroup-lv_root is 85% used — time to expand. The original 1 TB disk is fully allocated; the vgdisplay command shows whether the volume group still has any free space.

[nukc@localhost ~]# vgdisplay
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 930.82 GiB
PE Size 4.00 MiB
Total PE 238291
Alloc PE / Size 238290 / 930.82 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID OAINo1-Xamg-GhOD-byhH-IMDV-6JPE-W6K9aX

Only 4 MiB free — nothing to hope for there. Later, a 4 TB disk was added, but the data center said they only install drives and don't handle partitioning or mounting. Sigh... fine.
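vgdisplay reports everything in physical extents (PE); the actual size is the extent count times the PE size. A quick sanity check of the numbers above (values copied from the vgdisplay output):

```shell
# vgdisplay reports space in physical extents (PE).
# Actual size = extent count * PE size. Here PE Size is 4 MiB.
free_pe=1        # from "Free PE / Size       1 / 4.00 MiB"
pe_size_mib=4
echo "$(( free_pe * pe_size_mib )) MiB free"
echo "$(( 238291 * pe_size_mib / 1024 )) GiB total"   # ~930 GiB, matching "VG Size 930.82 GiB"
```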

fdisk -l lists all installed disks and their partition information:

[nukc@localhost ~]# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 121602 976762583+ ee GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/mapper/VolGroup-lv_root: 969.6 GB, 969614032896 bytes
255 heads, 63 sectors/track, 117882 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_swap: 8371 MB, 8371830784 bytes
255 heads, 63 sectors/track, 1017 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
WARNING: GPT (GUID Partition Table) detected on '/dev/mapper/ddf1_44656c6c202020201000005b10281f38460f100c210e2867'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/mapper/ddf1_44656c6c202020201000005b10281f38460f100c210e2867: 4000.2 GB, 4000225165312 bytes
255 heads, 63 sectors/track, 486333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/mapper/ddf1_44656c6c202020201000005b10281f38460f100c210e2867p1 1 267350 2147483647+ ee GPT
Disk /dev/mapper/ddf1_44656c6c202020201000005b10281f38460f100c210e2867p1: 4000.2 GB, 4000223068160 bytes
255 heads, 63 sectors/track, 486332 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/VolGroup-lv_home: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

What follows is the wrong approach. If you'd rather not watch, skip straight ahead to the right way.

That warning — WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. — I'm no expert at this, so first, Google.

The search results boiled down to this:

Two kinds of partition table:

MBR (Master Boot Record):
Maximum supported volume: 2 TiB.
Partition limit: at most 4 primary partitions, or 3 primary partitions plus 1 extended.

GPT (GUID Partition Table):
Maximum supported volume: 18 EB (1 EB = 1024 PB).
Up to 128 partitions per disk.

Any volume or partition larger than 2 TiB therefore requires a GPT partition table.

Linux's fdisk does not support GPT; for that you need parted, the powerful partitioning tool published by GNU. Using fdisk on a GPT disk produces the warning above.
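The 2 TiB MBR limit isn't arbitrary: MBR stores each partition's start and length as 32-bit sector counts, and with the traditional 512-byte sector that caps out at:

```shell
# MBR uses 32-bit sector counts; with 512-byte sectors the
# largest addressable partition is:
sectors=4294967296                 # 2^32
bytes=$(( sectors * 512 ))
echo "$(( bytes / 1024 / 1024 / 1024 / 1024 )) TiB"
```

which is exactly why a 4 TB disk needs GPT.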

Based on what I'd found, the next step was to partition the disk with parted:

[nukc@localhost ~]# parted /dev/sda
GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt      (a GPT label lets the whole >2 TB disk sit in one partition)
Warning: The existing disk label on /dev/sda will be destroyed and all data on this
disk will be lost. Do you want to continue?
Yes/No? y
(parted) unit TB          (set the display unit to TB)
(parted) print            (show the partition layout)
Model: ATA ST4000NM0035-1V4 (scsi)
Disk /dev/sda: 4.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
(parted) mkpart primary 0 4   (create one primary partition, from 0 to 4 TB)
(parted) print            (show the partition layout)
Model: ATA ST4000NM0035-1V4 (scsi)
Disk /dev/sda: 4.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 0.00TB 4.00TB 4.00TB ext4 primary
(parted) quit             (exit parted)
Information: You may need to update /etc/fstab
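A side note: mkpart primary 0 4 can leave the partition misaligned (fdisk already complained about /dev/sdb1 not starting on a physical sector boundary). Newer parted versions accept percentages and will pick aligned boundaries themselves. A sketch — destructive, and /dev/sdX is a placeholder device name:

```shell
# Sketch only: this wipes /dev/sdX. Percentages let parted choose
# properly aligned start/end sectors on its own.
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart primary 0% 100%
parted -s /dev/sdX align-check optimal 1   # verify alignment of partition 1
```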

Next, format the partition:

[nukc@localhost ~]# mkfs.ext4 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
/dev/sda1 is apparently in use by the system; will not make a filesystem here!

/dev/sda1 was in use. The culprit was device-mapper: the ddf1_* entries (leftover fake-RAID metadata mappings) were still claiming the disk. The workaround:

[nukc@localhost ~]# dmsetup status
VolGroup-lv_swap: 0 16351232 linear
VolGroup-lv_root: 0 104857600 linear
VolGroup-lv_root: 104857600 1788919808 linear
ddf1_44656c6c202020201000005b10281f38460f100c210e2867: 0 7812939776 linear
ddf1_44656c6c202020201000005b10281f38460f100c210e2867p1: 0 7812935680 linear
VolGroup-lv_home: 0 41943040 linear
[nukc@localhost ~]# dmsetup remove_all
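dmsetup remove_all is a blunt instrument: it tries to tear down every device-mapper mapping, including the LVM volumes (busy ones, like the mounted root LV, survive only because they're in use). A more surgical sketch, removing just the stale ddf1_* mappings from the transcript above:

```shell
# Sketch: remove only the stale fake-RAID (ddf1_*) mappings that
# were holding /dev/sda, leaving the LVM mappings alone.
dmsetup remove ddf1_44656c6c202020201000005b10281f38460f100c210e2867p1
dmsetup remove ddf1_44656c6c202020201000005b10281f38460f100c210e2867
```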
[nukc@localhost ~]# mkfs.ext4 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
244195328 inodes, 976754176 blocks
48837708 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
29809 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Then I dutifully mounted it with mount:

[nukc@localhost ~]# mount /dev/sda1 /usr/local/data
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/sda1 3.6T 196M 3.4T 1% /usr/local/data
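A mount done by hand like this doesn't survive a reboot — parted already hinted "You may need to update /etc/fstab". A hypothetical fstab entry for this setup (the UUID placeholder must be replaced with the real one from blkid):

```shell
# /etc/fstab sketch — get the real UUID with: blkid /dev/sda1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /usr/local/data  ext4  defaults  0 2
```

Using the UUID rather than /dev/sda1 keeps the entry valid even if device names shuffle after adding another disk.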

At this point, my idea was to shrink /dev/sda1 and add the freed space to /dev/mapper/VolGroup-lv_root.

Here is what actually happened:

[nukc@localhost ~]# lvreduce -L 3.3T /dev/sda1
Path required for Logical Volume "sda1".
Please provide a volume group name
Run `lvreduce --help' for more information.
[nukc@localhost ~]# umount /dev/sda1
[nukc@localhost ~]# lvreduce -L 3.3T /dev/sda1
Path required for Logical Volume "sda1".
Please provide a volume group name
Run `lvreduce --help' for more information.
[nukc@localhost ~]# pvcreate /dev/sda1
Physical volume "/dev/sda1" successfully created
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# vgcreate mygroup /dev/sda1
Volume group "mygroup" successfully created
[nukc@localhost ~]# bgdisplay mygroup | grep "Total PE"
-bash: bgdisplay: command not found
[nukc@localhost ~]# vgdisplay mygroup | grep "Total PE"
Total PE 953861
[nukc@localhost ~]# lvcreate -l 953861 mygroup -n data
Logical volume "data" created.
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
[nukc@localhost ~]# mkfs.ext4 /dev/mygroup/data
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
244195328 inodes, 976753664 blocks
48837683 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
29809 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[nukc@localhost ~]# mount -a
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/mapper/mygroup-data 3.6T 196M 3.4T 1% /usr/local/data

...and mounted it once again.

[nukc@localhost ~]# umount /usr/local/data
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data 10G
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/mapper/mygroup-data' first.
[nukc@localhost ~]# e2fsck -f /dev/mapper/mygroup-data
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/mygroup-data: 11/244195328 files (0.0% non-contiguous), 15377150/976753664 blocks
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data 10G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/mygroup-data to 2621440 (4k) blocks.
Begin pass 2 (max = 32768)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 29809)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/mapper/mygroup-data is now 2621440 blocks long.
[nukc@localhost ~]# mount /usr/local/data
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/mapper/mygroup-data 9.9G 164M 9.2G 2% /usr/local/data

It really did shrink to 10 G, but then something else came up.

[nukc@localhost ~]# lvreduce -L 3.5T /dev/mapper/mygroup-data
WARNING: Reducing active and open logical volume to 3.50 TiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mygroup/data? [y/n]: y
Size of logical volume mygroup/data changed from 3.64 TiB (953861 extents) to 3.50 TiB (917504 extents).
Logical volume data successfully resized.
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 712G 130G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/mapper/mygroup-data 9.9G 164M 9.2G 2% /usr/local/data
[nukc@localhost ~]# vgdisplay
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 930.82 GiB
PE Size 4.00 MiB
Total PE 238291
Alloc PE / Size 238290 / 930.82 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID OAINo1-Xamg-GhOD-byhH-IMDV-6JPE-W6K9aX
--- Volume group ---
VG Name mygroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953861
Alloc PE / Size 917504 / 3.50 TiB
Free PE / Size 36357 / 142.02 GiB
VG UUID 72Tki3-3lO0-1c9D-bppy-zTlw-PM2c-QE2bZZ
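The root of the trouble here: the filesystem had been shrunk to 10 G but the LV only to 3.5 TiB, so the two sizes drifted apart. lvresize with -r resizes the filesystem and the LV together, in the correct order. A sketch with hypothetical names (myvg/mylv), assuming an unmounted ext4 LV:

```shell
# Sketch: shrink filesystem and LV in one step (hypothetical names).
umount /mnt/data
lvresize -r -L 10G /dev/mapper/myvg-mylv
# Equivalent manual sequence, in the only safe order for a shrink:
#   e2fsck -f /dev/mapper/myvg-mylv
#   resize2fs   /dev/mapper/myvg-mylv 10G   # 1) filesystem first
#   lvreduce -L 10G /dev/mapper/myvg-mylv   # 2) then the LV, to match
```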

mygroup shows only 142.02 GiB free — something's wrong. Unmount and try again:

[nukc@localhost ~]# umount /dev/mapper/mygroup-data
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/mapper/mygroup-data' first.
[nukc@localhost ~]# e2fsck -f /dev/mapper/mygroup-data
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/mygroup-data: 11/655360 files (0.0% non-contiguous), 83119/2621440 blocks
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/mygroup-data to 939524096 (4k) blocks.
Begin pass 1 (max = 28592)
Extending the inode table     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/mapper/mygroup-data is now 939524096 blocks long.
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 713G 129G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# mount /dev/mapper/mygroup-data
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 713G 129G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/mapper/mygroup-data 3.5T 197M 3.3T 1% /usr/local/data
[nukc@localhost ~]# umount /usr/local/data
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data 10G
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/mapper/mygroup-data' first.
[nukc@localhost ~]# e2fsck -f /dev/mapper/mygroup-data
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/mygroup-data: 11/234881024 files (0.0% non-contiguous), 14792732/939524096 blocks
[nukc@localhost ~]# resize2fs -p /dev/mapper/mygroup-data 10G
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/mygroup-data to 2621440 (4k) blocks.
Begin pass 3 (max = 28672)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/mapper/mygroup-data is now 2621440 blocks long.
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# mount /usr/local/data
[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 886G 713G 129G 85% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
/dev/mapper/mygroup-data 9.9G 164M 9.2G 2% /usr/local/data
[nukc@localhost ~]# lvreduce -L 10G /dev/mapper/mygroup-data
WARNING: Reducing active and open logical volume to 10.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mygroup/data? [y/n]: y
Size of logical volume mygroup/data changed from 3.50 TiB (917504 extents) to 10.00 GiB (2560 extents).
Logical volume data successfully resized.
[nukc@localhost ~]# vgdisplay
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 930.82 GiB
PE Size 4.00 MiB
Total PE 238291
Alloc PE / Size 238290 / 930.82 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID OAINo1-Xamg-GhOD-byhH-IMDV-6JPE-W6K9aX
--- Volume group ---
VG Name mygroup
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 3.64 TiB
PE Size 4.00 MiB
Total PE 953861
Alloc PE / Size 2560 / 10.00 GiB
Free PE / Size 951301 / 3.63 TiB
VG UUID 72Tki3-3lO0-1c9D-bppy-zTlw-PM2c-QE2bZZ

Now Free PE shows 3.63 TiB.

[nukc@localhost ~]# lvextend -L +3.63TiB /dev/mapper/VolGroup-lv_root
Rounding size to boundary between physical extents: 3.63 TiB.
Insufficient free space: 951583 extents needed, but only 1 available

Damn — no good: only 1 extent available. After thinking it over: free extents in one volume group can't be handed to a logical volume in a different volume group. So, start over and delete the mygroup VG I'd created:

[nukc@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 3 0 wz--n- 930.82g 4.00m
mygroup 1 1 0 wz--n- 3.64t 3.63t
[nukc@localhost ~]# umount /usr/local/data
[nukc@localhost ~]# lvremove /dev/mapper/mygroup-data
Do you really want to remove active logical volume data? [y/n]: y
Logical volume "data" successfully removed
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# vgremove mygroup
Volume group "mygroup" successfully removed
[nukc@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 3 0 wz--n- 930.82g 4.00m

Good — removed.

All that work, and it was the wrong approach from the start. From fdisk -l I already knew about the 4 TB disk and its partition /dev/sda1, and the vgs command showed the existing volume group VolGroup. The right move is simply to extend VolGroup with /dev/sda1:
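Condensed, the whole correct procedure is four commands. A sketch using this article's device and VG names — destructive, so verify the device first; the -l +100%FREE form hands every free extent to the LV without the size guesswork that tripped things up above:

```shell
pvcreate /dev/sda1                  # turn the partition into an LVM physical volume
vgextend VolGroup /dev/sda1         # add the new PV to the existing volume group
lvextend -l +100%FREE /dev/mapper/VolGroup-lv_root   # give all free extents to the root LV
resize2fs -p /dev/mapper/VolGroup-lv_root            # grow ext4 online to fill the LV
```

Growing ext4 this way works on a mounted filesystem; only shrinking requires an unmount.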

[nukc@localhost ~]# vgextend VolGroup /dev/sda1
Volume group "VolGroup" successfully extended
[nukc@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 2 3 0 wz--n- 4.55t 3.64t
[nukc@localhost ~]# vgdisplay
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 10
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.55 TiB
PE Size 4.00 MiB
Total PE 1192152
Alloc PE / Size 238290 / 930.82 GiB
Free PE / Size 953862 / 3.64 TiB
VG UUID OAINo1-Xamg-GhOD-byhH-IMDV-6JPE-W6K9aX

This time it worked. Next, add the free space to /dev/mapper/VolGroup-lv_root:

[nukc@localhost ~]# lvextend -L +3.64TiB /dev/mapper/VolGroup-lv_root
Rounding size to boundary between physical extents: 3.64 TiB.
Insufficient free space: 954205 extents needed, but only 953862 available
You have new mail in /var/spool/mail/root
[nukc@localhost ~]# lvextend -L +3.60TiB /dev/mapper/VolGroup-lv_root
Rounding size to boundary between physical extents: 3.60 TiB.
Size of logical volume VolGroup/lv_root changed from 903.02 GiB (231174 extents) to 4.48 TiB (1174893 extents).
Logical volume lv_root successfully resized.
[nukc@localhost ~]# resize2fs -p /dev/mapper/VolGroup-lv_root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/VolGroup-lv_root is mounted on /; on-line resizing required
old desc_blocks = 57, new_desc_blocks = 287
Performing an on-line resize of /dev/mapper/VolGroup-lv_root to 1203090432 (4k) blocks.

Finally, just wait for that last command to finish. If it gets interrupted, simply run it again.

[nukc@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 4.5T 714G 3.5T 17% /
tmpfs 7.8G 72K 7.8G 1% /dev/shm
/dev/sdb2 485M 40M 421M 9% /boot
/dev/sdb1 200M 260K 200M 1% /boot/efi
/dev/mapper/VolGroup-lv_home 20G 172M 19G 1% /home
You have new mail in /var/spool/mail/root