
Oracle ASM Administration - Diskgroups, Disks & ASMCMD

Oracle Automatic Storage Management (ASM) provides a built-in volume manager and file system designed specifically for Oracle database files. It handles striping, mirroring, rebalancing, and failure group management automatically, eliminating the need for third-party volume managers for Oracle storage. This guide covers day-to-day ASM administration from diskgroup creation through disk replacement and ASMCMD usage.

| Component | Description |
| --- | --- |
| ASM Instance | A special Oracle instance (no database) that manages diskgroups |
| Diskgroup | A logical storage pool composed of one or more physical disks |
| Failure Group | A subset of disks within a diskgroup representing a single failure domain |
| ASM Disk | A raw block device, partition, or LUN visible to ASM |
| ASM File | An Oracle-managed file within a diskgroup (e.g., datafiles, redo logs) |
| ASMCMD | Command-line utility for navigating the ASM file system |
| ASMLIB / AFD | Optional disk stamping layers (ASMLIB on older Linux, ASM Filter Driver on 12.2+) |
| Type | Description | Minimum Disks | Mirrors |
| --- | --- | --- | --- |
| EXTERNAL | No ASM mirroring (rely on storage hardware RAID) | 1 | None |
| NORMAL | Two-way mirroring (two failure groups) | 2 | 2x |
| HIGH | Three-way mirroring (three failure groups) | 3 | 3x |
| FLEX | Variable mirroring per file (12.2+) | 3 | 2x or 3x per file |
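For NORMAL and HIGH redundancy, raw free space is not the same as writable space: ASM reports usable_file_mb, which subtracts the reserve needed to re-mirror after a disk failure and divides by the mirror count. A rough sketch of that arithmetic in shell; the figures are made up for illustration:

```shell
# Illustrative capacity math for a NORMAL-redundancy diskgroup.
# Values mimic V$ASM_DISKGROUP columns; they are hypothetical.
free_mb=819200                  # FREE_MB: raw free space across all disks
required_mirror_free_mb=204800  # reserve to restore redundancy after a disk loss
redundancy_factor=2             # NORMAL = 2 copies, HIGH = 3

usable_file_mb=$(( (free_mb - required_mirror_free_mb) / redundancy_factor ))
echo "usable_file_mb=${usable_file_mb}"   # prints usable_file_mb=307200
```

When planning growth, trust usable_file_mb rather than free_mb: free_mb counts space that mirroring and the failure reserve will consume.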
-- Connect to the ASM instance (not the database instance)
-- export ORACLE_SID=+ASM
-- sqlplus / as sysasm
-- View diskgroups and their current state
SELECT
    name,
    state,
    type AS redundancy,
    total_mb,
    free_mb,
    ROUND((total_mb - free_mb) / total_mb * 100, 1) AS pct_used,
    usable_file_mb,
    required_mirror_free_mb,
    offline_disks,
    compatibility,
    database_compatibility
FROM v$asm_diskgroup
ORDER BY name;

-- Single failure group, no ASM mirroring
CREATE DISKGROUP DATA_EXT
    EXTERNAL REDUNDANCY
    DISK
        '/dev/sdc',
        '/dev/sdd',
        '/dev/sde'
    ATTRIBUTE
        'AU_SIZE' = '4M',
        'CELL_SMART_SCAN_CAPABLE' = 'FALSE',
        'compatible.asm' = '19.0',
        'compatible.rdbms' = '19.0';
-- Two failure groups - each group represents one storage controller or shelf
CREATE DISKGROUP DATA
    NORMAL REDUNDANCY
    FAILGROUP CTRL1 DISK
        '/dev/sdc' NAME DATA_0001,
        '/dev/sdd' NAME DATA_0002
    FAILGROUP CTRL2 DISK
        '/dev/sde' NAME DATA_0003,
        '/dev/sdf' NAME DATA_0004
    ATTRIBUTE
        'AU_SIZE' = '4M',
        'compatible.asm' = '19.0',
        'compatible.rdbms' = '19.0';
CREATE DISKGROUP RECO
    HIGH REDUNDANCY
    FAILGROUP CTRL1 DISK
        '/dev/sdg' NAME RECO_0001
    FAILGROUP CTRL2 DISK
        '/dev/sdh' NAME RECO_0002
    FAILGROUP CTRL3 DISK
        '/dev/sdi' NAME RECO_0003
    ATTRIBUTE
        'compatible.asm' = '19.0',
        'compatible.rdbms' = '19.0';
-- Flex diskgroup allows per-file redundancy selection
CREATE DISKGROUP DATA_FLEX
    FLEX REDUNDANCY
    FAILGROUP CTRL1 DISK '/dev/sdj', '/dev/sdk'
    FAILGROUP CTRL2 DISK '/dev/sdl', '/dev/sdm'
    FAILGROUP CTRL3 DISK '/dev/sdn', '/dev/sdo'
    ATTRIBUTE
        'compatible.asm' = '12.2',
        'compatible.rdbms' = '12.2';

-- Add disks to expand capacity (triggers rebalance automatically)
ALTER DISKGROUP DATA ADD DISK
    '/dev/sdp' NAME DATA_0005,
    '/dev/sdq' NAME DATA_0006;
-- Add disks to a specific failure group
ALTER DISKGROUP DATA ADD
    FAILGROUP CTRL1 DISK '/dev/sdp' NAME DATA_0005,
    FAILGROUP CTRL2 DISK '/dev/sdq' NAME DATA_0006;
-- Add with controlled rebalance power (1 = slow; maximum is 11 before
-- compatible.asm 11.2.0.2 and 1024 from that version onward)
ALTER DISKGROUP DATA ADD DISK '/dev/sdr' NAME DATA_0007
    REBALANCE POWER 4;
-- Remove a disk (triggers rebalance to redistribute data)
ALTER DISKGROUP DATA DROP DISK DATA_0001;
-- Remove multiple disks
ALTER DISKGROUP DATA DROP DISK DATA_0001, DATA_0002;
-- Force drop a disk even if it causes imbalance (use with caution)
ALTER DISKGROUP DATA DROP DISK DATA_0001 FORCE;
-- Undrop a disk (cancel pending drop before rebalance completes)
ALTER DISKGROUP DATA UNDROP DISKS;
-- Step 1: Identify the failed disk
SELECT
    group_number,
    disk_number,
    name,
    path,
    mode_status,
    state,
    failgroup,
    total_mb,
    free_mb
FROM v$asm_disk
WHERE state != 'NORMAL'
   OR mode_status != 'ONLINE'
ORDER BY group_number, disk_number;
-- Step 2: Drop the failed disk
ALTER DISKGROUP DATA DROP DISK DATA_FAILED_001;
-- Step 3: After physical replacement, add the new disk
-- ASM will automatically detect it if ASM_DISKSTRING covers the path
ALTER DISKGROUP DATA ADD
    FAILGROUP CTRL1 DISK '/dev/sdc' NAME DATA_REPLACEMENT_001;

-- Current rebalance operations
SELECT
    group_number,
    operation,
    state,
    power,
    actual,
    sofar,
    est_work,
    est_rate,
    est_minutes
FROM v$asm_operation
ORDER BY group_number;
-- Adjust rebalance power while in progress
ALTER DISKGROUP DATA REBALANCE POWER 8;
-- Wait for rebalance to complete before proceeding
-- Poll this until no rows returned:
SELECT * FROM v$asm_operation WHERE operation = 'REBAL';
-- Detailed rebalance progress
SELECT
    dg.name AS diskgroup,
    op.operation,
    op.state,
    op.power,
    op.sofar,
    op.est_work,
    ROUND(op.sofar / NULLIF(op.est_work, 0) * 100, 1) AS pct_done,
    op.est_minutes
FROM v$asm_operation op
JOIN v$asm_diskgroup dg
    ON op.group_number = dg.group_number;

ASM_DISKSTRING controls which device paths ASM scans for candidate disks. An incorrect value means ASM cannot see new disks.
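Before changing the parameter, it can help to dry-run the glob against the device names you expect it to cover. POSIX `case` patterns use the same glob style, so a shell sketch works as an approximation; the device paths below are hypothetical, and ASM's own discovery matching runs inside the instance:

```shell
# Dry-run an ASM_DISKSTRING-style glob against candidate device paths.
# Shell globbing is only an approximation of ASM discovery, for illustration.
pattern='/dev/sd*'
matched=''
for dev in /dev/sdc /dev/sdd /dev/nvme0n1 /dev/oracleasm/disks/DATA01; do
  case $dev in
    $pattern) matched="${matched:+$matched }$dev" ;;  # glob match
  esac
done
echo "matched: $matched"   # prints matched: /dev/sdc /dev/sdd
```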

-- Check current ASM_DISKSTRING
SHOW PARAMETER asm_diskstring;
-- Add a new path pattern (do not remove existing patterns)
ALTER SYSTEM SET asm_diskstring = '/dev/sd*', '/dev/oracleasm/disks/*' SCOPE = BOTH;
-- There is no separate rescan command: discovery runs automatically
-- whenever V$ASM_DISK is queried or a new path is referenced in ADD DISK.
-- (ALTER DISKGROUP ... MOUNT only applies to a dismounted diskgroup.)
-- List all visible disks (including unclaimed ones)
SELECT
    path,
    name,
    group_name,
    mode_status,
    state,
    total_mb,
    free_mb,
    failgroup
FROM v$asm_disk
ORDER BY group_name, failgroup, name;

Compatibility attributes control which database version can use the diskgroup and what ASM features are available. They can only be advanced, never rolled back.
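Because the attributes only move forward, change scripts can guard against an accidental downgrade with a simple version comparison before issuing the ALTER. A minimal sketch assuming GNU `sort -V`; the current/target values are placeholders and the script does not talk to ASM:

```shell
# version_ge A B: succeed when dotted version A >= B (relies on GNU sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$1" ]
}

current='12.2'   # value as read from V$ASM_ATTRIBUTE (hypothetical here)
target='19.0'
if version_ge "$target" "$current"; then
  echo "OK to advance compatible.asm from $current to $target"
else
  echo "Refusing: $target is lower than current $current" >&2
fi
```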

-- Check compatibility attributes
SELECT
    dg.name,
    attr.name AS attribute,
    attr.value
FROM v$asm_diskgroup dg
JOIN v$asm_attribute attr
    ON attr.group_number = dg.group_number
WHERE attr.name IN ('compatible.asm', 'compatible.rdbms', 'au_size', 'sector_size')
ORDER BY dg.name, attr.name;
-- Advance compatibility (irreversible)
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '19.0';
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '19.0';

ASM Preferred Read (Preferred Read Failure Groups)

In extended clusters or Data Guard environments with ASM, you can configure ASM to prefer reads from the local failure group to reduce cross-site I/O.

-- Set preferred read failure groups (typically the node-local storage)
-- Set in each ASM instance (not the database instance), in init.ora / SPFILE
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.CTRL1' SCOPE = BOTH;
-- Verify preferred read setting
SHOW PARAMETER asm_preferred_read_failure_groups;
-- On each node of a RAC or extended cluster, set to the local failure group:
-- Node 1: ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITE1_FG,RECO.SITE1_FG'
-- Node 2: ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITE2_FG,RECO.SITE2_FG'
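Since every node needs its own local value, cluster setup scripts often derive the parameter from the node name. A hypothetical sketch; the node names (racnode1/racnode2) and the failure group names are placeholders to adapt to your topology:

```shell
# Map a cluster node name to its local preferred-read failure groups.
# All names below are placeholders, not real cluster configuration.
preferred_read_for() {
  case $1 in
    racnode1*) echo 'DATA.SITE1_FG,RECO.SITE1_FG' ;;
    racnode2*) echo 'DATA.SITE2_FG,RECO.SITE2_FG' ;;
    *)         echo '' ;;   # unknown node: leave the parameter unset
  esac
}

value=$(preferred_read_for racnode1)
echo "asm_preferred_read_failure_groups='$value'"
```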

ASMCMD is a command-line utility for browsing and managing ASM files. Run it on the ASM host as the Grid Infrastructure software owner (commonly grid, or oracle in single-owner installations).

# Start ASMCMD
asmcmd
# Or run a single command non-interactively
asmcmd ls +DATA
# Alternatively with privilege
asmcmd --privilege sysdba
# List diskgroups
ASMCMD> ls
# List contents of a diskgroup
ASMCMD> ls +DATA
# List with details (size, creation time)
ASMCMD> ls -l +DATA/ORCL/DATAFILE/
# Show disk usage
ASMCMD> du +DATA
ASMCMD> du +DATA/ORCL
# Change directory
ASMCMD> cd +DATA/ORCL/DATAFILE
# Show current directory
ASMCMD> pwd
# Find a file
ASMCMD> find +DATA '*.dbf'
ASMCMD> find +DATA 'SYSTEM*'
# Copy a file (within ASM or between ASM and OS)
ASMCMD> cp +DATA/ORCL/DATAFILE/USERS.268.1234567890 /tmp/users_backup.dbf
ASMCMD> cp /tmp/users_backup.dbf +DATA/ORCL/DATAFILE/
# Move or rename a file
ASMCMD> mv +DATA/ORCL/DATAFILE/oldname.dbf +DATA/ORCL/DATAFILE/newname.dbf
# Delete a file (use with extreme caution)
ASMCMD> rm +DATA/ORCL/TEMPFILE/TEMP.267.1234567890
# List diskgroup metadata
ASMCMD> lsdg
ASMCMD> lsdg --discovery # Read V$ASM_DISKGROUP (fresh discovery) instead of V$ASM_DISKGROUP_STAT
# List disks
ASMCMD> lsdsk
ASMCMD> lsdsk --discovery # Read V$ASM_DISK (runs discovery) instead of V$ASM_DISK_STAT
ASMCMD> lsdsk --candidate # Only unclaimed candidate disks
ASMCMD> lsdsk --member # Only member disks
# Show failure groups (there is no lsfg command; lsdsk -k includes a Failgroup column)
ASMCMD> lsdsk -k -G DATA
# Check diskgroup attributes
ASMCMD> lsattr -l -G DATA
# Set a diskgroup attribute
ASMCMD> setattr -G DATA compatible.rdbms 19.0
# Backup ASM diskgroup metadata (critical before disk replacement)
# (syntax shown is 11.2+; older releases used -b/-t flags)
ASMCMD> md_backup /tmp/asm_metadata_backup.bkp -G DATA
ASMCMD> md_backup /tmp/asm_all_metadata.bkp # All mounted diskgroups
# Restore diskgroup structure from backup
ASMCMD> md_restore /tmp/asm_metadata_backup.bkp --full -G DATA
# The backup is a plain-text file; inspect its contents with an editor or grep

-- Use RMAN to copy datafiles into ASM; BACKUP AS COPY runs while the
-- database is open, but SWITCH DATABASE TO COPY requires MOUNT state
-- Connect RMAN to the database (not the ASM instance)
-- Copy a specific datafile to ASM (COPY DATAFILE is deprecated in favor of BACKUP AS COPY)
RMAN> BACKUP AS COPY DATAFILE '/oradata/users01.dbf' FORMAT '+DATA';
-- Copy all datafiles to ASM, then switch after SHUTDOWN IMMEDIATE / STARTUP MOUNT
RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
-- Or use ALTER DATABASE to move one file at a time (12c online move)
ALTER DATABASE MOVE DATAFILE '/oradata/users01.dbf' TO '+DATA';
-- Move with rename
ALTER DATABASE MOVE DATAFILE '/oradata/users01.dbf'
TO '+DATA/ORCL/DATAFILE/users.dbf';
-- Track progress of an online datafile move
-- (V$SESSION_LONGOPS reports the operation as 'Online data file move')
SELECT
    sid,
    serial#,
    opname,
    target_desc,
    sofar,
    totalwork,
    ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done,
    time_remaining
FROM v$session_longops
WHERE opname LIKE 'Online data file move%'
  AND sofar < totalwork;

-- Diskgroup health summary
SELECT
    name,
    state,
    type,
    total_mb,
    free_mb,
    usable_file_mb,
    offline_disks,
    voting_files
FROM v$asm_diskgroup
ORDER BY name;
-- Disk-level detail including I/O statistics
SELECT
    dg.name AS diskgroup,
    d.failgroup,
    d.name AS disk_name,
    d.path,
    d.mode_status,
    d.state,
    d.total_mb,
    d.free_mb,
    d.reads,
    d.writes,
    d.read_time,
    d.write_time,
    d.bytes_read / 1024 / 1024 / 1024 AS gb_read,
    d.bytes_written / 1024 / 1024 / 1024 AS gb_written
FROM v$asm_disk d
JOIN v$asm_diskgroup dg
    ON d.group_number = dg.group_number
ORDER BY dg.name, d.failgroup, d.name;
-- ASM files by database and file type
SELECT
    dg.name AS diskgroup,
    af.type,
    COUNT(*) AS file_count,
    ROUND(SUM(af.bytes) / 1024 / 1024 / 1024, 2) AS total_gb
FROM v$asm_file af
JOIN v$asm_diskgroup dg
    ON af.group_number = dg.group_number
GROUP BY dg.name, af.type
ORDER BY dg.name, af.type;
-- ASM alert log (check for errors)
-- Located in: $ORACLE_BASE/diag/asm/+asm/<SID>/trace/alert_<SID>.log
-- Or find the trace directory through ADRCI or V$DIAG_INFO:
SELECT value AS log_location
FROM v$diag_info
WHERE name = 'Diag Trace';

  1. Always define failure groups explicitly - Let ASM choose failure groups automatically only for EXTERNAL redundancy. For NORMAL and HIGH, define failure groups matching your storage topology (controllers, shelves, or data centre sites).
  2. Use NORMAL or HIGH redundancy even on hardware RAID - Hardware RAID protects against disk failure but not against controller failure or accidental file deletion. ASM mirroring provides an independent protection layer.
  3. Set AU_SIZE = 4M for OLTP, 8M for data warehousing - The default 1 MB allocation unit works but larger AU sizes reduce metadata overhead on large files.
  4. Run md_backup before any disk operation - Metadata backups are small and fast; losing ASM metadata is catastrophic.
  5. Monitor V$ASM_OPERATION during rebalance - Control power to avoid saturating I/O during business hours.
  6. Keep ASM_DISKSTRING tight - Overly broad patterns increase discovery scan time and risk of ASM claiming wrong devices.
  7. Advance compatible.rdbms only after all databases are upgraded - Lowering it later is impossible.
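The est_minutes figure behind point 5 can also be derived by hand when scripting alerts: V$ASM_OPERATION reports est_rate in allocation units per minute, so remaining time is (est_work - sofar) / est_rate. A small shell sketch with illustrative numbers:

```shell
# Estimate remaining rebalance time from V$ASM_OPERATION-style figures.
# sofar and est_work are allocation units; est_rate is AUs moved per minute.
# The values below are illustrative, not from a real instance.
sofar=120000
est_work=200000
est_rate=4000

remaining_min=$(( (est_work - sofar) / est_rate ))
pct_done=$(( sofar * 100 / est_work ))
echo "rebalance ${pct_done}% done, ~${remaining_min} min remaining"
```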