How LVM Works

A practical reference for planning and operating Linux Logical Volume Manager layouts.

PV, VG, and LV in Real Operations

Physical volumes (PVs) are initialized devices. Volume groups (VGs) aggregate PV capacity into an allocation pool. Logical volumes (LVs) are carved from that pool for applications, filesystems, or swap. This layering lets storage teams adapt capacity over time without redesigning disk partitioning every cycle.
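
The layering can be sketched with a minimal command sequence. The device paths (/dev/sdb, /dev/sdc) and the names vg_data and lv_app are illustrative assumptions, not references to any specific host:

```shell
# Hypothetical spare disks on this host: /dev/sdb and /dev/sdc.

# Initialize each disk as a physical volume (PV).
pvcreate /dev/sdb /dev/sdc

# Aggregate both PVs into one volume group (VG) acting as the allocation pool.
vgcreate vg_data /dev/sdb /dev/sdc

# Carve a 100 GiB logical volume (LV) from the pool for an application.
lvcreate --name lv_app --size 100G vg_data

# The LV is a regular block device, ready for a filesystem.
mkfs.ext4 /dev/vg_data/lv_app
```

Because applications see only the LV device path, the underlying PVs can later be added, replaced, or migrated without repartitioning.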

If you need command scaffolding while reading this reference, use the LVM command builder homepage. For inherited hosts where current state is unclear, start with Import Existing LVM Layout before planning changes.

Resizing and Lifecycle

Capacity growth is usually straightforward: extend LV, then extend filesystem. Shrinking is riskier and filesystem-dependent. Plan shrink workflows only with tested procedures and verified backups.
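
A growth sketch, assuming a hypothetical vg_data/lv_app volume carrying an ext4 filesystem:

```shell
# Growth order: extend the LV first, then grow the filesystem inside it.

# Add 50 GiB to the LV from VG free space.
lvextend --size +50G /dev/vg_data/lv_app

# Grow ext4 online to fill the enlarged LV.
resize2fs /dev/vg_data/lv_app

# Or let LVM drive the filesystem resize in a single step:
# lvextend --size +50G --resizefs /dev/vg_data/lv_app
```

For XFS the second step would be xfs_growfs on the mount point instead; XFS cannot shrink at all, which is one reason shrink workflows are filesystem-dependent.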

For filesystem-level implications, continue to XFS vs ext4 for LVM volumes.

One VG vs Multiple VGs

One larger VG keeps free space flexible and reduces allocation friction. Multiple VGs improve boundary control when teams need different backup policy, lifecycle cadence, or operational ownership across storage domains.

A practical pattern is one VG for stable platform data and another for high-change or experimental workloads. This reduces accidental contention and makes risk review clearer in change control.

Snapshots and Thin Pools

Snapshots are useful for short-term rollback and consistency checkpoints before maintenance. Thin pools add flexible allocation for many logical volumes but require active pool monitoring and alerting to avoid service-impacting exhaustion.
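
Both patterns can be sketched against an assumed vg_data VG; all names and sizes here are illustrative:

```shell
# Classic (thick) snapshot: reserve 10 GiB for copy-on-write changes.
lvcreate --snapshot --name lv_app_snap --size 10G /dev/vg_data/lv_app

# Thin pool: a 500 GiB pool backing many over-provisioned thin LVs.
lvcreate --type thin-pool --name tpool --size 500G vg_data
lvcreate --thin --name lv_thin1 --virtualsize 200G vg_data/tpool

# Monitor pool data and metadata fill; alert well before 100 %.
lvs -o lv_name,data_percent,metadata_percent vg_data
```

The monitoring line is the operationally important one: a thin pool that reaches 100 % data or metadata usage will impact every LV it backs.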

For production planning details, read the LVM thin provisioning guide and pair rollout decisions with the LVM safety checklist.

RAID Relationship

LVM is not a substitute for every RAID decision. Many environments run mdadm RAID beneath LVM for resilience and performance, then use LVM on top for logical allocation and growth workflows.
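
A sketch of that stacking, assuming four spare disks and the illustrative names md0, vg_raid, and lv_data:

```shell
# Build a RAID10 array with mdadm for resilience and performance.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Layer LVM on top of the array for flexible allocation and growth.
pvcreate /dev/md0
vgcreate vg_raid /dev/md0
lvcreate --name lv_data --size 1T vg_raid
```

RAID handles disk failure below; LVM handles capacity and layout decisions above, so each layer can be changed without redesigning the other.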

See the RAID and LVM layout planner for architecture tradeoffs, and the common layout comparisons for pattern selection.

Example: Expanding A Data Host Without Repartitioning

A team adds two new SSDs to an analytics server. They initialize both as PVs, extend the existing VG, and then grow only the LV and filesystem serving the ingestion path. This keeps other services unchanged while increasing only the constrained tier.
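
The same scenario as a command sketch. The device paths (/dev/nvme1n1, /dev/nvme2n1), the VG name vg_analytics, the LV name lv_ingest, and the 800 GiB figure are all assumptions for illustration:

```shell
# Initialize the two new SSDs as PVs.
pvcreate /dev/nvme1n1 /dev/nvme2n1

# Extend the existing VG with the new capacity.
vgextend vg_analytics /dev/nvme1n1 /dev/nvme2n1

# Grow only the ingestion LV and its filesystem; other LVs are untouched.
lvextend --size +800G --resizefs /dev/vg_analytics/lv_ingest
```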

More Operational Scenarios

Mixed-risk host split into two VGs: one VG holds durable business data, another holds disposable build artifacts. The split limits operational blast radius and makes backup policy enforcement easier.

Snapshot window before schema migration: an admin creates a short-lived snapshot before a database upgrade, validates migration success, then removes the snapshot to reclaim capacity.
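
That window can be sketched as follows, assuming a hypothetical vg_data/lv_db volume and a 20 GiB copy-on-write reserve:

```shell
# 1. Quiesce the database, then take the snapshot.
lvcreate --snapshot --name lv_db_pre_migration --size 20G /dev/vg_data/lv_db

# 2. Run and validate the schema migration ...

# 3a. Success: remove the snapshot to reclaim the reserved capacity.
lvremove /dev/vg_data/lv_db_pre_migration

# 3b. Failure: merge the snapshot back to restore the pre-migration state.
# lvconvert --merge /dev/vg_data/lv_db_pre_migration
```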

Large archive platform with tuned extents: a storage team standardizes larger PE sizing to keep extent counts manageable across multi-terabyte growth cycles and reduce long-term management friction.
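
A sketch of deliberate PE sizing at VG creation time, with an assumed device and VG name; the default PE size is 4 MiB, and the 64 MiB choice here is illustrative:

```shell
# Create an archive VG with 64 MiB physical extents instead of the 4 MiB default.
vgcreate --physicalextentsize 64M vg_archive /dev/sdf

# Verify the PE size and total extent count.
vgdisplay vg_archive | grep -E "PE Size|Total PE"
```

PE size must be set at VG creation and is expensive to change later, which is why it belongs in the initial design review for multi-terabyte platforms.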

Split VG for staged upgrades: a team keeps application binaries and persistent data in separate VGs so maintenance on one domain does not force capacity decisions on the other.

Snapshot-backed patch rollout: before OS-level or middleware updates, admins create short-lived snapshots for fast rollback and remove them immediately after validation to control copy-on-write growth.
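
The copy-on-write growth mentioned above is observable while a snapshot is live. A monitoring sketch, assuming a hypothetical vg_sys/lv_root snapshot taken before patching:

```shell
# Snapshot taken just before applying updates.
lvcreate --snapshot --name lv_root_pre_patch --size 15G /dev/vg_sys/lv_root

# Watch the copy-on-write fill while the snapshot is live; a classic
# snapshot that reaches 100 % becomes invalid and is silently dropped.
lvs -o lv_name,origin,snap_percent vg_sys

# Remove promptly once the rollout is validated.
lvremove /dev/vg_sys/lv_root_pre_patch
```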

What To Verify Before Proceeding

Resize order and filesystem limits: confirm whether target filesystems support online growth or shrink and sequence operations accordingly.

VG free-space assumptions: validate actual free extents and competing LV growth commitments before approving expansion plans.

Snapshot duration: keep snapshot lifetimes short and explicit; long-lived snapshots can distort performance and capacity usage.

Split-VG intent: split VGs only when they map to real policy boundaries, not just temporary organizational preferences.
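
Most of the checks above can be answered with read-only reporting commands; VG and LV names will differ per host:

```shell
pvs                                        # PVs, VG membership, per-PV free space
vgs -o vg_name,vg_free,free_count          # free capacity and free extent count per VG
lvs -a -o lv_name,lv_size,data_percent,snap_percent   # LV sizes, thin and snapshot usage
df -h                                      # filesystem-level fill, which LVM does not report
```

Comparing vgs free space against df output catches the common gap between "the VG has room" and "the filesystem is full", which drives most resize requests.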

FAQ

Should I split into multiple VGs? Split VGs when operational boundaries differ (backup policy, growth cadence, risk domain). Keep one VG when simplicity and flexible reuse are priorities.

Are extents something I need to tune often? Usually defaults are fine, but very large designs may benefit from deliberate PE sizing to keep extent counts manageable.

What is the correct order for resizing? Growth usually means extend LV then grow filesystem. Shrink workflows are filesystem-sensitive and require tested runbooks and stronger controls.

How much space should snapshots have? Size snapshot capacity based on expected changed-block rate and retention duration, not only dataset size.

When should I revisit PE size? Revisit PE sizing when operating very large VGs/LVs where extent count growth becomes a scaling concern.

What is a common resizing mistake? Assuming LVM flexibility removes filesystem constraints; filesystem behavior still determines what is safe and possible.

When should snapshot workflows be avoided? Avoid routine long-lived snapshots on high-write workloads unless capacity and performance impact are explicitly budgeted.

When is one large VG a bad idea? It is usually a bad fit when teams need strict policy isolation across data classes with different risk and retention controls.

What should I review before production changes? Start with the LVM safety checklist and capture current state before execution.