Recovering LVM data after a Physical Volume (PV) disk is removed is complex and depends on how the data was stored (e.g., linear vs. striped Logical Volumes). The core of the recovery process involves replacing the missing disk with a new one of at least the same size and restoring the LVM metadata so that the new disk carries the old PV's unique identifier (UUID).
Example: We have a RHEL 8.10 VM with 4 data disks of the sizes below, all configured with LVM. Every PV is added to a single VG named vgdata.
Note: Disk names may vary depending on the OS/VM.
Note: I will remove/detach the sdd disk, hence sdd and its related LVs are highlighted below.
/dev/sda - 32GB
/dev/sdb - 40GB
/dev/sdc - 50GB
/dev/sdd - 60GB
VG Name - vgdata
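For reference, a layout like this could have been built with commands along the following lines (a sketch only; the LV names and sizes match the outputs further below, while the mkfs/mount steps are assumptions):
pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
vgcreate vgdata /dev/sda /dev/sdb /dev/sdc /dev/sdd
lvcreate -L 10G -n data vgdata
lvcreate -L 40G -n applog vgdata
lvcreate -L 60G -n shared vgdata
lvcreate -L 70G -n backup vgdata
mkfs.xfs /dev/vgdata/data && mount /dev/vgdata/data /appdata    # repeat mkfs.xfs + mount for applog, shared and backup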
[root@RHEL8VM naveen]# lsblk -o NAME,TYPE,FSTYPE,LABEL,SIZE,RO,MOUNTPOINT
4 LVs are created as above and mounted on 4 mount points, and data is present in them as shown in the df output below.
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgdata-data xfs 10G 104M 9.9G 2% /appdata
/dev/mapper/vgdata-applog xfs 40G 30G 11G 75% /applog
/dev/mapper/vgdata-backup xfs 70G 30G 41G 43% /backup
/dev/mapper/vgdata-shared xfs 60G 60G 50M 100% /shared
[root@RHEL8VM shared]# pvscan
PV /dev/sde2 VG rootvg lvm2 [<63.02 GiB / <40.02 GiB free]
PV /dev/sda VG vgdata lvm2 [<32.00 GiB / 1.98 GiB free]
PV /dev/sdb VG vgdata lvm2 [<40.00 GiB / 0 free]
PV /dev/sdd VG vgdata lvm2 [<60.00 GiB / 0 free]
PV /dev/sdc VG vgdata lvm2 [<50.00 GiB / 0 free]
Total: 5 [245.00 GiB] / in use: 5 [245.00 GiB] / in no VG: 0 [0 ]
[root@RHEL8VM shared]# vgscan
Found volume group "rootvg" using metadata type lvm2
Found volume group "vgdata" using metadata type lvm2
[root@RHEL8VM shared]# lvscan | grep -i vgdata
ACTIVE '/dev/vgdata/data' [10.00 GiB] inherit
ACTIVE '/dev/vgdata/applog' [40.00 GiB] inherit
ACTIVE '/dev/vgdata/shared' [60.00 GiB] inherit
ACTIVE '/dev/vgdata/backup' [70.00 GiB] inherit
I have created test files in each mount point; the file counts and disk usage are shown in the output below.
[root@RHEL8VM shared]# ls -lrt /applog/ | wc -l
1001
[root@RHEL8VM shared]# ls -lrt /backup/ | wc -l
1002
[root@RHEL8VM shared]# ls -lrt /shared/ | wc -l
1001
[root@RHEL8VM shared]# ls -lrt /appdata/ | wc -l
1
[root@RHEL8VM shared]# du -sh /applog/ /appdata/ /backup/ /shared/
30G /applog/
0 /appdata/
30G /backup/
60G /shared/
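For reference, test data of this shape could be generated with a loop like the one below (a sketch; the file names, per-file size and use of /dev/urandom are assumptions, chosen so that ~1000 files of ~30 MB each add up to roughly 30 GB):
for i in $(seq 1 1000); do
    dd if=/dev/urandom of=/applog/testfile$i bs=1M count=30 status=none   # ~30 MB per file
done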
========================================================================
pvdisplay -m: This command is used to display information about physical volumes.
- -m: This option stands for "maps" and shows the physical extents and the logical volumes that use them.
Note: The rootvg output is omitted below to avoid confusion.
[root@RHEL8VM shared]# pvdisplay -m
--- Physical volume ---
PV Name /dev/sda
VG Name vgdata
PV Size 32.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 8191
Free PE 508
Allocated PE 7683
PV UUID 4p3U30-xgxn-ZrDl-PQ0T-35Du-ojqs-oXJh95
--- Physical Segments ---
Physical extent 0 to 2559:
Logical volume /dev/vgdata/data
Logical extents 0 to 2559
Physical extent 2560 to 5120:
Logical volume /dev/vgdata/shared
Logical extents 12799 to 15359
Physical extent 5121 to 7682:
Logical volume /dev/vgdata/backup
Logical extents 15358 to 17919
Physical extent 7683 to 8190:
FREE
--- Physical volume ---
PV Name /dev/sdb
VG Name vgdata
PV Size 40.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 10239
Free PE 0
Allocated PE 10239
PV UUID XURd9R-sTzr-wHPF-37a0-kr1K-Hm05-2Bcdda
--- Physical Segments ---
Physical extent 0 to 10238:
Logical volume /dev/vgdata/backup
Logical extents 0 to 10238
--- Physical volume ---
PV Name /dev/sdd => We are going to detach this disk, so it will affect only the /applog & /backup mounts
VG Name vgdata
PV Size 60.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 15359
Free PE 0
Allocated PE 15359
PV UUID tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB
--- Physical Segments ---
Physical extent 0 to 10239:
Logical volume /dev/vgdata/applog
Logical extents 0 to 10239
Physical extent 10240 to 15358:
Logical volume /dev/vgdata/backup
Logical extents 10239 to 15357
--- Physical volume ---
PV Name /dev/sdc
VG Name vgdata
PV Size 50.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 12799
Free PE 0
Allocated PE 12799
PV UUID ABXqWZ-L5ES-cngi-P1cb-3wre-iDAu-4HTcoF
--- Physical Segments ---
Physical extent 0 to 12798:
Logical volume /dev/vgdata/shared
Logical extents 0 to 12798
[root@RHEL8VM shared]#
======================================================================
lvdisplay: This command is used to display information about logical volumes.
- -m: This option stands for "maps" and shows the physical volumes and physical extents that make up the logical volume.
[root@RHEL8VM shared]# lvdisplay -m
--- Logical volume ---
LV Path /dev/vgdata/data
LV Name data
VG Name vgdata
LV UUID Y3Q4cq-NF6C-BHkB-G9T3-NCXf-tHvn-9pURkI
LV Write Access read/write
LV Creation host, time RHEL8Backup1, 2025-11-05 07:42:25 +0000
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:5
--- Segments ---
Logical extents 0 to 2559:
Type linear
Physical volume /dev/sda
Physical extents 0 to 2559
--- Logical volume ---
LV Path /dev/vgdata/applog
LV Name applog
VG Name vgdata
LV UUID RMZAsC-TW8C-INId-3XVX-KGbT-eLn5-3ua8Ak
LV Write Access read/write
LV Creation host, time RHEL8Backup1, 2025-11-05 07:44:13 +0000
LV Status available
# open 1
LV Size 40.00 GiB
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:6
--- Segments ---
Logical extents 0 to 10239:
Type linear
Physical volume /dev/sdd ==> The /applog data is stored on /dev/sdd ONLY
Physical extents 0 to 10239
--- Logical volume ---
LV Path /dev/vgdata/shared
LV Name shared
VG Name vgdata
LV UUID Y5uDJO-qxWr-iuf5-gdu6-q52p-4xIb-mmbzkt
LV Write Access read/write
LV Creation host, time RHEL8Backup1, 2025-11-05 07:46:31 +0000
LV Status available
# open 1
LV Size 60.00 GiB
Current LE 15360
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:7
--- Segments ---
Logical extents 0 to 12798:
Type linear
Physical volume /dev/sdc
Physical extents 0 to 12798
Logical extents 12799 to 15359:
Type linear
Physical volume /dev/sda
Physical extents 2560 to 5120
--- Logical volume ---
LV Path /dev/vgdata/backup
LV Name backup
VG Name vgdata
LV UUID dhGvD8-86EA-vs5F-bWGZ-4kWn-UVXV-p3Inff
LV Write Access read/write
LV Creation host, time RHEL8Backup1, 2025-11-05 08:50:12 +0000
LV Status available
# open 1
LV Size 70.00 GiB
Current LE 17920
Segments 3 ===> Shows across how many PV segments (here, disks) this LV's data is spread
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:8
--- Segments ---
Logical extents 0 to 10238:
Type linear
Physical volume /dev/sdb
Physical extents 0 to 10238 ==> 10239 extents * 4 MiB / 1024 ≈ 40 GiB ; PE Size is 4.00 MiB
Logical extents 10239 to 15357:
Type linear
Physical volume /dev/sdd
Physical extents 10240 to 15358 ==> 5119 extents * 4 MiB / 1024 ≈ 20 GiB
Logical extents 15358 to 17919:
Type linear
Physical volume /dev/sda
Physical extents 5121 to 7682 ==> 2562 extents * 4 MiB / 1024 ≈ 10 GiB
This means that for the LV /dev/vgdata/backup, the data is stored across 3 data disks (sdb, sdd, sda).
We can also see the physical extents on each data disk and the corresponding logical extents of the LV,
which lets us calculate how much of each physical disk is allocated to the /backup mount path.
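A quicker way to get the same per-segment mapping in a compact report is something like the following (a sketch; these report fields exist in LVM2 on RHEL 8, but check lvs -o help for the exact field names on your version):
lvs -o lv_name,lv_size,seg_size,seg_pe_ranges,devices vgdata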
<< REMOVING OR DETACHING THE DISK >>
Now I am going to detach the disk /dev/sdd, which is the 60 GB disk, from the backend (hypervisor). As we saw above, /dev/sdd stores LV data for the /backup and /applog file systems.
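If you cannot detach the disk at the hypervisor, the same effect can be simulated on a test VM by deleting the SCSI device from the running OS (an assumption for lab use only, not part of the original procedure):
echo 1 > /sys/block/sdd/device/delete    # removes /dev/sdd from the running system; test VMs only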
Once we detach the 60 GB /dev/sdd disk, we see the errors below while executing pvs/vgs/lvs:
[root@RHEL8VM naveen]# pvs
WARNING: Couldn't find device with uuid tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB.
WARNING: VG vgdata is missing PV tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB (last written to /dev/sde).
PV VG Fmt Attr PSize PFree
/dev/sda vgdata lvm2 a-- <32.00g 1.98g
/dev/sdb vgdata lvm2 a-- <40.00g 0
/dev/sdc vgdata lvm2 a-- <50.00g 0
/dev/sde2 rootvg lvm2 a-- <63.02g <40.02g
[unknown] vgdata lvm2 a-m <60.00g 0 ==> This is the missing 60 GB disk
[root@RHEL8VM naveen]# vgs
WARNING: Couldn't find device with uuid tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB.
WARNING: VG vgdata is missing PV tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB (last written to /dev/sde).
VG #PV #LV #SN Attr VSize VFree
rootvg 1 5 0 wz--n- <63.02g <40.02g
vgdata 4 4 0 wz-pn- 181.98g 1.98g
[root@RHEL8VM naveen]# lvs
WARNING: Couldn't find device with uuid tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB.
WARNING: VG vgdata is missing PV tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB (last written to /dev/sde).
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
homelv rootvg -wi-ao---- 1.00g
rootlv rootvg -wi-ao---- 2.00g
tmplv rootvg -wi-ao---- 2.00g
usrlv rootvg -wi-ao---- 10.00g
varlv rootvg -wi-ao---- 8.00g
applog vgdata -wi-ao--p- 40.00g ==> p means Partial
backup vgdata -wi-ao--p- 70.00g
data vgdata -wi-ao---- 10.00g
shared vgdata -wi-ao---- 60.00g
I have not rebooted the VM yet, but I can still access some of the mounts:
[root@RHEL8VM applog]# uptime
13:09:36 up 8:44, 2 users, load average: 0.00, 0.00, 0.00
[root@RHEL8VM applog]# ls -lrt /applog
ls: cannot access '/applog': Input/output error ===> ERROR
I cannot access the /applog mount because all of its data was stored only on the /dev/sdd disk, which we removed/detached.
[root@RHEL8VM applog]# ls -lrt /backup/ | wc -l ==> This one I can still access
1002
But we can still access the /backup mount because its data has not reached /dev/sdd yet: allocation runs through /dev/sdb for the first 40 GB, then /dev/sdd for the next 20 GB, then /dev/sda for the last 10 GB. Currently only 30 GB is occupied, all of it on /dev/sdb, so about 10 GB of sdb is still free. Only once that 10 GB is used would data land on sdd and then on sda.
Now I am going to reboot; then let's check access to both mounts and the pvs/vgs/lvs outputs.
After rebooting the VM, the detached 60 GB sdd data disk no longer shows in lsblk, and /backup and /applog cannot be mounted. (Note that after the reboot the remaining disks were renumbered, so the OS disk that previously appeared as /dev/sde now shows up as /dev/sdd.)
pvs, vgs, and lvs show the errors below:
[root@RHEL8VM backup]# pvs
WARNING: Couldn't find device with uuid tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB.
WARNING: VG vgdata is missing PV tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB (last written to /dev/sde).
PV VG Fmt Attr PSize PFree
/dev/sda vgdata lvm2 a-- <32.00g 1.98g
/dev/sdb vgdata lvm2 a-- <40.00g 0
/dev/sdc vgdata lvm2 a-- <50.00g 0
/dev/sdd2 rootvg lvm2 a-- <63.02g <40.02g
[unknown] vgdata lvm2 a-m <60.00g 0 ==> DISK IS MISSING HERE
The VG can be activated with the missing PV and re-scanned using the following commands:
vgchange -ay --partial vgdata
pvscan
vgscan
lvscan
Mounting the affected file systems at this point still fails with errors, so we move on to the restoration steps.
LVM Data Restoration Steps
LVM stores metadata backups in the /etc/lvm/archive and /etc/lvm/backup directories, and restoring from these files is the primary method for recovery.
/etc/lvm/backup stores the current configuration, while /etc/lvm/archive stores a history of changes.
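These directories can be inspected directly, for example:
ls -lrt /etc/lvm/backup
ls -lrt /etc/lvm/archive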
List available metadata backups: vgcfgrestore --list <VG_NAME>
[root@RHEL8VM naveen]# vgcfgrestore --list vgdata
File: /etc/lvm/archive/vgdata_00004-1246932297.vg
VG name: vgdata
Description: Created *before* executing 'lvcreate -L 40G -n applog vgdata'
Backup Time: Wed Nov 5 07:44:13 2025
File: /etc/lvm/archive/vgdata_00005-236385369.vg
VG name: vgdata
Description: Created *before* executing 'lvcreate -L 60G -n shared vgdata'
Backup Time: Wed Nov 5 07:46:31 2025
File: /etc/lvm/archive/vgdata_00006-232178662.vg ==> This is the last modified archive (used for the restore below)
VG name: vgdata
Description: Created *before* executing 'lvcreate -L 70G -n backup vgdata'
Backup Time: Wed Nov 5 08:50:12 2025
File: /etc/lvm/archive/vgdata_00007-884955256.vg
VG name: vgdata
Description: Created *before* executing 'vgreduce --removemissing vgdata'
Backup Time: Fri Nov 7 14:07:14 2025
File: /etc/lvm/backup/vgdata
VG name: vgdata
Description: Created *after* executing 'vgreduce --removemissing vgdata'
Backup Time: Fri Nov 7 14:07:14 2025
Replace the Physical Volume: You need a new disk that is at least the size of the missing PV to act as a replacement.
Recreate the PV with the Missing UUID: Use the pvcreate command with --restorefile and the missing PV's --uuid to label the new disk exactly like the old one. This overwrites only the LVM metadata area on the new disk.
pvcreate --restorefile /etc/lvm/archive/<VG_archive_file>.vg --uuid <MISSING_PV_UUID> /dev/sdX
I have added a new disk of the same 60 GB size, which is attached as /dev/sde.
[root@RHEL8VM naveen]# lsblk | grep -i sde
sde 8:64 0 60G 0 disk
pvcreate --restorefile /etc/lvm/archive/vgdata_00006-232178662.vg --uuid tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB /dev/sde
[root@RHEL8VM naveen]# lsblk -f | grep -i sde
sde LVM2_member tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB
Restore the Volume Group Metadata
Test the Restore: Always run the restore command with a test first.
vgcfgrestore --test <VG_NAME>
Perform the Restore: If the test succeeds, run the restore for real.
vgcfgrestore <VG_NAME>
This step uses the LVM metadata to recreate the VG and LV structures, including the necessary links to the newly created PV.
But I get the following error while executing vgcfgrestore:
[root@RHEL8VM archive]# vgcfgrestore --test vgdata
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: VG vgdata was previously updated while PV /dev/sde was missing.
Cannot restore Volume Group vgdata with 1 PVs marked as missing.
Restore failed.
[root@RHEL8VM archive]#
Looking at the file /etc/lvm/backup/vgdata, I can see that for this PV it contains the MISSING flag and the device is unknown; that is why the restore is not working. So update the device name and remove the MISSING flag.
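A minimal sketch of that edit, assuming you keep an untouched copy of the file first:
cp /etc/lvm/backup/vgdata /etc/lvm/backup/vgdata.orig   # keep a copy before editing
vi /etc/lvm/backup/vgdata                               # in the pv2 section: set device = "/dev/sde" and remove "MISSING" from flags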
After updating the file /etc/lvm/backup/vgdata, the affected PV section looks like this:
pv2 {
id = "tdO17x-I1gI-iXdR-hd0L-w3Qz-VOHI-dwOnlB"
device = "/dev/sde" # Hint only ==> update device name
status = ["ALLOCATABLE"]
flags = [] ===> remove MISSING
dev_size = 125829120 # 60 Gigabytes
pe_start = 2048
pe_count = 15359 # 59.9961 Gigabytes
}
[root@RHEL8VM backup]# vgcfgrestore --test vgdata ==> TEST MODE
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Restored volume group vgdata.
[root@RHEL8VM backup]# vgcfgrestore vgdata
Restored volume group vgdata.
We did not receive any errors and the VG was restored successfully.
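If the logical volumes do not come back online automatically after the restore, they can be activated with a standard vgchange (on this VM they were already active, as the lvscan output below shows):
vgchange -ay vgdata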
[root@RHEL8VM backup]# pvscan
PV /dev/sdd2 VG rootvg lvm2 [<63.02 GiB / <40.02 GiB free]
PV /dev/sda VG vgdata lvm2 [<32.00 GiB / 1.98 GiB free]
PV /dev/sdb VG vgdata lvm2 [<40.00 GiB / 0 free]
PV /dev/sde VG vgdata lvm2 [<60.00 GiB / 0 free]
PV /dev/sdc VG vgdata lvm2 [<50.00 GiB / 0 free]
Total: 5 [245.00 GiB] / in use: 5 [245.00 GiB] / in no VG: 0 [0 ]
[root@RHEL8VM backup]# vgscan
Found volume group "rootvg" using metadata type lvm2
Found volume group "vgdata" using metadata type lvm2
[root@RHEL8VM backup]# lvscan
ACTIVE '/dev/vgdata/data' [10.00 GiB] inherit
ACTIVE '/dev/vgdata/applog' [40.00 GiB] inherit
ACTIVE '/dev/vgdata/shared' [60.00 GiB] inherit
ACTIVE '/dev/vgdata/backup' [70.00 GiB] inherit
[root@RHEL8VM naveen]# lsblk -o NAME,TYPE,FSTYPE,LABEL,SIZE,RO,MOUNTPOINT
NAME TYPE FSTYPE LABEL SIZE RO MOUNTPOINT
sda disk LVM2_member 32G 0
├─vgdata-data lvm xfs 10G 0 /appdata
├─vgdata-shared lvm xfs 60G 0 /shared
└─vgdata-backup lvm xfs 70G 0
sdb disk LVM2_member 40G 0
└─vgdata-backup lvm xfs 70G 0
sdc disk LVM2_member 50G 0
└─vgdata-shared lvm xfs 60G 0 /shared
sdd disk 64G 0
├─sdd1 part xfs 500M 0 /boot
├─sdd2 part LVM2_member 63G 0
│ ├─rootvg-tmplv lvm xfs 2G 0 /tmp
│ ├─rootvg-usrlv lvm xfs 10G 0 /usr
│ ├─rootvg-homelv lvm xfs 1G 0 /home
│ ├─rootvg-varlv lvm xfs 8G 0 /var
│ └─rootvg-rootlv lvm xfs 2G 0 /
├─sdd14 part 4M 0
└─sdd15 part vfat 495M 0 /boot/efi
sde disk LVM2_member 60G 0
├─vgdata-applog lvm 40G 0 ==> file system info is not showing, but why?
└─vgdata-backup lvm xfs 70G 0 ==> xfs fs is showing
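You can confirm that no file-system signature remains on the applog LV with, for example, blkid (which prints nothing when it finds no recognizable signature):
blkid /dev/mapper/vgdata-applog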
[root@RHEL8VM backup]# mount -av
mount: /backup: can't read superblock on /dev/mapper/vgdata-backup.
mount: /applog: wrong fs type, bad option, bad superblock on /dev/mapper/vgdata-applog, missing codepage or helper program, or other error.
For the above superblock errors we have to attempt a file system repair (e2fsck for ext file systems, xfs_repair for XFS, which is what these LVs use):
e2fsck -f /dev/<VG_NAME>/<LV_NAME>
xfs_repair /dev/<VG_NAME>/<LV_NAME>
[root@RHEL8VM naveen]# xfs_repair /dev/vgdata/backup
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
bad magic number
Metadata CRC error detected at 0x55d06d4f6612, xfs_agf block 0x6900008/0x1000
Metadata CRC error detected at 0x55d06d523452, xfs_agi block 0x6900010/0x1000
bad on-disk superblock 3 - bad magic number
primary/secondary superblock 3 conflict - AG superblock geometry info conflicts with filesystem geometry
[root@RHEL8VM naveen]# xfs_repair /dev/mapper/vgdata-applog
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.........................................................................................................................................................................................................................................................................................................................................................................................
It won't find any superblocks and the repair won't succeed. Interestingly, can you guess why we cannot see an xfs file system on vgdata-applog in the lsblk output and why xfs_repair fails? The answer was already stated above.
If it is still not clear why we cannot mount /dev/mapper/vgdata-applog: as shown in the outputs above, the entire 30 GB of used data was stored on the REMOVED disk, and we replaced it with a NEW, RAW disk. So that data is completely lost and can only be restored from a BACKUP.
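In practice that means recreating the file system and restoring the contents from your backup tool (a sketch only; the restore step depends entirely on the backup solution in use):
mkfs.xfs -f /dev/vgdata/applog        # wipes the LV and creates a fresh XFS file system
mount /dev/vgdata/applog /applog
(then restore the /applog contents from your backup)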
Commands Used:
pvs
vgs
lvs
lsblk ; lsblk -f
lsblk -o NAME,TYPE,FSTYPE,LABEL,SIZE,RO,MOUNTPOINT
pvscan
vgscan
lvscan
pvdisplay -m
lvdisplay -m
vgchange -ay --partial vgdata
vgcfgrestore --list vgdata
ls -lrt /etc/lvm/backup
ls -lrt /etc/lvm/archive
pvcreate --restorefile /etc/lvm/archive/<VG_archive_file>.vg --uuid <MISSING_PV_UUID> /dev/sdX
vgcfgrestore --test <VG_NAME>
vgcfgrestore <VG_NAME>
xfs_repair /dev/mapper/vgdata-applog
NOTE: This is the scenario where a physical disk is removed accidentally and cannot be re-attached to the VM. If a Logical Volume is removed instead, the steps will be different.