# gc buffer busy acquire / gc buffer busy release

Diagnose Oracle RAC global cache waits.

## Overview

- Wait Event Class: Cluster
- Parameters: file#, block#, class#
gc buffer busy acquire and gc buffer busy release are RAC-specific wait events that occur when a session needs a block that is currently involved in a global cache (GCS) operation on the same or another instance. They are the RAC equivalents of single-instance buffer busy waits, extended across the entire cluster.
## Two Distinct Events

- gc buffer busy acquire: A session is waiting for a block that another session on the same instance is already requesting from the Global Cache Service (GCS). The waiting session has found the buffer in the buffer cache, but it is “busy” being transferred in from another instance.
- gc buffer busy release: A session is waiting for a block that is in its local buffer cache but that another instance is requesting. The local holder must flush or transfer the block to the requesting instance, and the block is temporarily pinned while this happens.
Both events indicate that the same block is being accessed concurrently by multiple sessions — either within the same instance or across instances — and the inter-instance coordination overhead is creating serialization.
## The RAC Cache Coherence Protocol

In RAC, every cached copy of a block is in one of several global cache states:

- Current (exclusive): one instance holds the current, possibly modified, copy
- Consistent Read (CR, shared): multiple instances can hold CR copies for read consistency
- Past Image (PI): an older current version retained by a previous holder until the block is written to disk
When Instance 1 wants a block held by Instance 2:
- Instance 1 sends a request to the Global Cache Service (GCS master for that block)
- GCS contacts Instance 2 and requests a transfer or flush
- Instance 2 ships the block over the interconnect to Instance 1
- Instance 1 receives the block, places it in its buffer cache
The interconnect latency for this transfer (typically 0.1–1 ms over InfiniBand, 1–5 ms over Ethernet) accumulates into significant wait times when the same blocks are repeatedly transferred between instances.
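The way per-transfer latency accumulates into database time is simple arithmetic; a back-of-envelope sketch (illustrative numbers, not measurements from any real system):

```python
# Back-of-envelope: aggregate GC transfer wait accumulated by all sessions
# in one hour, given a block transfer rate and a per-transfer latency.
# All numbers below are illustrative assumptions.

def gc_wait_seconds_per_hour(transfers_per_sec, latency_ms):
    """Total session time spent waiting on block transfers in one hour."""
    return transfers_per_sec * 3600 * (latency_ms / 1000.0)

# 2,000 block transfers/sec at 0.5 ms (InfiniBand-class latency)
print(gc_wait_seconds_per_hour(2000, 0.5))   # -> 3600.0 (an hour of DB time)

# The same transfer rate at 3 ms (congested Ethernet) is 6x worse
print(gc_wait_seconds_per_hour(2000, 3.0))   # -> 21600.0
```

The point: at high transfer rates, even sub-millisecond latency adds up to hours of accumulated DB time, which is why both the transfer *rate* and the per-transfer *latency* matter.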
## When This Wait Is a Problem

### Thresholds

| Average Wait Time | Assessment |
|---|---|
| < 1 ms | Excellent — low interconnect latency and minimal cross-instance conflict |
| 1–5 ms | Acceptable — typical for moderate cross-instance access |
| 5–10 ms | Investigate — interconnect contention or hot blocks shared across instances |
| > 10 ms | Problem — significant cross-instance contention or interconnect degradation |
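For scripted post-processing of AWR or Statspack output, the table above can be encoded directly. A hypothetical helper (boundary values are assigned to the less severe band, which the table leaves ambiguous):

```python
# Map an average gc buffer busy wait time (ms) onto the assessment bands
# from the thresholds table. Boundary values fall into the less severe band.

def assess_gc_wait(avg_wait_ms):
    if avg_wait_ms < 1:
        return "Excellent"
    if avg_wait_ms <= 5:
        return "Acceptable"
    if avg_wait_ms <= 10:
        return "Investigate"
    return "Problem"

print(assess_gc_wait(0.4))   # -> Excellent
print(assess_gc_wait(7.5))   # -> Investigate
```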
### Normal vs. Concerning

Some gc buffer busy waits are normal in any active RAC cluster. They become a concern when:
- They appear consistently in AWR Top 5 wait events
- Average wait time is trending upward without workload changes
- A small number of specific blocks (file/block) account for the majority of waits
- The interconnect utilization is high
- Application sessions show degraded response time correlated with this wait
## Diagnostic Queries

### 1. RAC-Wide Global Cache Wait Statistics

```sql
-- Current instance GC wait statistics
SELECT event,
       total_waits,
       total_timeouts,
       ROUND(time_waited / 100, 2)  AS total_secs,
       ROUND(average_wait * 10, 2)  AS avg_wait_ms
FROM   v$system_event
WHERE  event LIKE 'gc%'
AND    total_waits > 0
ORDER  BY time_waited DESC
FETCH FIRST 20 ROWS ONLY;
```
```sql
-- GCS statistics — block transfer efficiency
SELECT name, value
FROM   v$sysstat
WHERE  name IN (
         'gc cr blocks received',
         'gc cr block receive time',
         'gc current blocks received',
         'gc current block receive time',
         'gc cr blocks served',
         'gc current blocks served',
         'gc blocks lost',
         'gc blocks corrupt'
       )
ORDER  BY name;
```
```sql
-- Derive average block receive times
SELECT 'CR Blocks' AS block_type,
       ROUND(SUM(CASE WHEN name = 'gc cr block receive time' THEN value END) * 10.0
             / NULLIF(SUM(CASE WHEN name = 'gc cr blocks received' THEN value END), 0),
             4) AS avg_receive_ms
FROM   v$sysstat
WHERE  name IN ('gc cr blocks received', 'gc cr block receive time')
UNION ALL
SELECT 'Current Blocks' AS block_type,
       ROUND(SUM(CASE WHEN name = 'gc current block receive time' THEN value END) * 10.0
             / NULLIF(SUM(CASE WHEN name = 'gc current blocks received' THEN value END), 0),
             4)
FROM   v$sysstat
WHERE  name IN ('gc current blocks received', 'gc current block receive time');
```

### 2. Sessions Currently Waiting on GC Buffer Busy
```sql
-- Active sessions in gc buffer busy events
SELECT sw.sid,
       s.serial#,
       s.username,
       s.program,
       s.sql_id,
       s.inst_id,
       sw.event,
       sw.p1 AS file_number,
       sw.p2 AS block_number,
       sw.p3 AS class_number,
       sw.seconds_in_wait
FROM   gv$session_wait sw
JOIN   gv$session s
       ON  sw.sid = s.sid
       AND sw.inst_id = s.inst_id
WHERE  sw.event IN ('gc buffer busy acquire', 'gc buffer busy release',
                    'gc cr request', 'gc current request')
ORDER  BY sw.seconds_in_wait DESC;
```

### 3. Identify Hot Blocks Across the Cluster
```sql
-- Find the hot blocks causing GC waits from ASH across all instances
-- Requires Diagnostics Pack license
SELECT ash.inst_id,
       ash.current_file#,
       ash.current_block#,
       ash.current_obj#,
       o.object_name,
       o.object_type,
       o.owner,
       COUNT(*) AS ash_samples,
       COUNT(*) AS est_wait_secs   -- gv$ ASH samples once per second
FROM   gv$active_session_history ash
LEFT JOIN dba_objects o
       ON ash.current_obj# = o.object_id
WHERE  ash.event IN ('gc buffer busy acquire', 'gc buffer busy release',
                     'gc cr request', 'gc current request',
                     'gc cr multi block request')
AND    ash.sample_time > SYSDATE - 1/24
GROUP  BY ash.inst_id, ash.current_file#, ash.current_block#, ash.current_obj#,
          o.object_name, o.object_type, o.owner
ORDER  BY ash_samples DESC
FETCH FIRST 25 ROWS ONLY;
```

### 4. Interconnect Performance Statistics
```sql
-- RAC interconnect statistics
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'ges%'
OR     name LIKE 'gcs%'
OR     name LIKE 'gc%blocks%'
ORDER  BY name;
```
```sql
-- Check interconnect configuration
SELECT interface_name, ip_address, is_public, source
FROM   v$cluster_interconnects;
```
```sql
-- Global cache block server statistics (blocks served to other instances)
SELECT * FROM v$cr_block_server;
SELECT * FROM v$current_block_server;
```

### 5. Segment-Level Cross-Instance Activity
```sql
-- Segment statistics related to GC operations
SELECT inst_id,
       owner,
       object_name,
       object_type,
       statistic_name,
       value
FROM   gv$segment_statistics
WHERE  statistic_name IN (
         'gc cr blocks received',
         'gc current blocks received',
         'gc buffer busy',
         'ITL waits'
       )
AND    value > 0
ORDER  BY value DESC
FETCH FIRST 30 ROWS ONLY;
```

## Root Causes
### 1. Hot Blocks Accessed Across Multiple Instances

The classic RAC contention pattern: a frequently updated block (sequence cache block, segment header, popular index leaf block) is needed by sessions on multiple instances. Every cross-instance transfer pays the interconnect latency, and when many sessions on different instances simultaneously need the same block, they queue on gc buffer busy acquire/release.
Common hot block types in RAC:
- Sequence blocks: NEXTVAL on a sequence with a small cache requires frequent updates to the sequence header block, which becomes a highly contested current block across instances
- Segment headers: INSERT operations on tables with MSSM (non-ASSM) space management need the segment header; all instances contend for it
- Popular index leaf blocks: right-edge inserts on sequence-based indexes concentrate on the rightmost leaf block
- Hot rows: a status or counter column updated by many sessions across all instances
### 2. Poor Instance Affinity — No Service-Based Workload Routing

When all application sessions connect to all RAC instances without separation by workload type, every instance accesses every table. If two OLTP application servers connect to Instance 1 and Instance 2 interchangeably, they constantly cross-transfer blocks. After a session on Instance 1 updates a row, a session on Instance 2 must wait for that current block to be transferred before it can update even a different row in the same block.
Resolution: Use Oracle Services to route specific application modules to specific instances. Sessions working on the same data should be routed to the same instance.
### 3. Sequences Without Adequate Cache (or ORDER Sequences)

In RAC, sequences defined with ORDER guarantee globally sequential values across all instances by forcing every NEXTVAL to coordinate with the sequence master instance. This makes NEXTVAL a cross-instance operation for non-master instances, generating gc current request and gc buffer busy waits on the sequence block.
Similarly, sequences with very small CACHE values (or NOCACHE) require frequent trips to the master instance.
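To see why a large NOORDER cache helps, here is a toy Python model (not Oracle internals; all names are invented): each instance pays one cross-instance trip per CACHE values instead of one per NEXTVAL.

```python
# Illustrative sketch: each RAC instance grabs a contiguous range of CACHE
# values from the shared sequence high-water mark, then serves NEXTVAL
# locally until its range is exhausted.

class SharedSequence:
    def __init__(self, cache=1000):
        self.cache = cache
        self.high_water = 1      # next unallocated value, "on disk"
        self.range_grabs = 0     # counts cross-instance operations

    def grab_range(self):
        # The only operation that needs cluster-wide coordination.
        self.range_grabs += 1
        start = self.high_water
        self.high_water += self.cache
        return start, self.high_water

class InstanceCache:
    def __init__(self, shared):
        self.shared = shared
        self.next_val = self.limit = 0

    def nextval(self):
        if self.next_val >= self.limit:              # local cache exhausted
            self.next_val, self.limit = self.shared.grab_range()
        val = self.next_val
        self.next_val += 1
        return val

seq = SharedSequence(cache=1000)
inst1, inst2 = InstanceCache(seq), InstanceCache(seq)

# 5000 NEXTVAL calls split across two instances ...
for i in range(5000):
    (inst1 if i % 2 == 0 else inst2).nextval()

# ... cost only a handful of coordinated range grabs.
print(seq.range_grabs)   # -> 6 (three range grabs per instance)
```

With NOCACHE, every one of the 5000 calls would have been a coordinated operation; values from different instances interleave, which is exactly what NOORDER permits.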
### 4. Interconnect Latency or Bandwidth Saturation

If the private interconnect is congested (high utilization, packet loss, or high latency), all GC operations slow down. Even without hot block contention, sustained high interconnect utilization degrades gc cr request and gc current request latency, which feeds into gc buffer busy waits.
### 5. Global Cache Lost or Corrupt Blocks

gc blocks lost in V$SYSSTAT counts blocks that were sent over the interconnect but never arrived at the requesting instance. Lost blocks cause timeouts and retries, dramatically increasing GC wait times. This is usually an interconnect hardware or configuration issue.
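When triaging, it helps to put the lost-block count in context of total blocks received. A small sketch (the counter values are made up; in practice they would come from V$SYSSTAT):

```python
# Gauge severity of lost blocks relative to total blocks received.
# Even a tiny loss ratio matters: each lost block forces a timeout-and-retry
# cycle that can add hundreds of milliseconds to a single request.

def lost_block_ratio(cr_received, current_received, blocks_lost):
    total = cr_received + current_received
    return blocks_lost / total if total else 0.0

# Example values as might be read from V$SYSSTAT (made up)
ratio = lost_block_ratio(cr_received=4_800_000,
                         current_received=3_200_000,
                         blocks_lost=160)
print(f"{ratio:.6%}")   # -> 0.002000%
if ratio > 0:
    print("Investigate the interconnect: any sustained loss is abnormal")
```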
## Resolution Steps

### Implement Service-Based Instance Affinity

```sql
-- Create services that route specific workloads to specific instances
-- This reduces cross-instance block sharing
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name        => 'OLTP_INST1',
    network_name        => 'OLTP_INST1',
    preferred_instances => 'INST1',
    available_instances => 'INST2'   -- Failover instance
  );
  DBMS_SERVICE.START_SERVICE('OLTP_INST1');
END;
/
-- Note: in an administrator-managed RAC database, preferred/available
-- placement is typically configured with srvctl instead, e.g.:
--   srvctl add service -db <db_name> -service OLTP_INST1 -preferred INST1 -available INST2
```
```sql
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name        => 'OLTP_INST2',
    network_name        => 'OLTP_INST2',
    preferred_instances => 'INST2',
    available_instances => 'INST1'
  );
  DBMS_SERVICE.START_SERVICE('OLTP_INST2');
END;
/
```
```sql
-- Configure application connection pools to use instance-specific services
--   OLTP App Server 1 -> connects to OLTP_INST1
--   OLTP App Server 2 -> connects to OLTP_INST2
-- If application servers process the same customer data, route by customer partition
```

### Fix Sequence Contention in RAC
```sql
-- Check current sequence configuration
SELECT sequence_name, cache_size, order_flag, cycle_flag
FROM   dba_sequences
WHERE  sequence_owner = 'YOUR_SCHEMA'
ORDER  BY sequence_name;
```
```sql
-- Fix 1: Increase cache size dramatically (most impactful)
ALTER SEQUENCE orders_seq CACHE 1000;
-- With CACHE 1000, each instance caches 1000 values locally;
-- cross-instance traffic drops by ~1000x
```
```sql
-- Fix 2: Remove ORDER if strict ordering is not required
-- NOTE: ORDER guarantees globally sequential values — only remove if truly not needed
ALTER SEQUENCE orders_seq NOORDER;
-- After removing ORDER, each instance generates values from its own cache independently
```
```sql
-- Fix 3: If ORDER is truly required, consider application-level ID generation
-- (e.g., embed the instance ID in the ID to avoid cross-instance contention)
```

### Eliminate Segment Header Hot Blocks
```sql
-- Migrate hot tables from MSSM to an ASSM tablespace
-- ASSM eliminates segment header contention by using bitmap-based space management

-- Create new ASSM tablespace
CREATE TABLESPACE app_assm_tbsp
  DATAFILE '/data/app_assm01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Move hot table online (12c+)
ALTER TABLE hot_table MOVE ONLINE TABLESPACE app_assm_tbsp;

-- Rebuild indexes
ALTER INDEX hot_idx REBUILD ONLINE TABLESPACE idx_assm_tbsp;
```

### Address Hot Index Blocks
```sql
-- Option 1: Hash partition the index to distribute inserts
CREATE INDEX idx_orders_pk_hash ON orders (order_id)
  GLOBAL PARTITION BY HASH (order_id) PARTITIONS 16;
-- 16 partitions = 16 separate rightmost leaf blocks instead of 1

-- Option 2: Increase sequence CACHE so each instance uses its own range
-- (already covered in sequence tuning above)

-- Option 3: Redesign the PK to use a composite key that naturally distributes,
-- e.g., (instance_id, local_sequence) instead of (global_sequence)
```

### Investigate and Resolve Lost GC Blocks
```sql
-- Check for lost blocks
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('gc blocks lost', 'gc blocks corrupt', 'gc claim blocks lost');

-- If gc blocks lost > 0, investigate:
--   1. Check interconnect health (OS level: ifconfig, netstat, ping on private IPs)
--   2. Check the cluster alert log for interconnect errors
--   3. Review /proc/net/dev on Linux for packet errors on the interconnect NIC

-- Check interconnect configuration
SELECT * FROM v$cluster_interconnects;

-- Verify private IPs are reachable from each node:
--   $ ping -c 100 -i 0.1 <other_node_private_ip>
--   Look for packet loss
```

## Prevention & Tuning
### 1. Design for Data Affinity from the Start

Before deploying a RAC database, analyze the data access patterns of each application module. Group modules that access the same data onto the same service/instance. This is the most impactful RAC tuning step, and it is architectural: retrofitting it later is difficult.
### 2. Use AWR RAC Reports for Cross-Instance Analysis

```sql
-- Generate AWR RAC-wide report (captures cross-instance wait data)
-- In SQL*Plus:
@?/rdbms/admin/awrgrpt.sql
-- Choose HTML format for easier viewing of RAC-specific sections

-- Key sections in the AWR RAC report:
--   "Global Cache Load Profile"           — block transfer rates
--   "Global Cache Efficiency Percentages" — hit rates
--   "Global Cache and Enqueue Services - Workload Characteristics"
--   "Top Global Cache Events"             — gc* events ranked by impact
```

### 3. Monitor the Interconnect Proactively
Set up OS-level monitoring for the private interconnect NICs. Track:
- Utilization (should stay below 60–70% for headroom)
- Packet errors and retransmits
- Latency (ping between nodes on private IPs)
High interconnect utilization is often a leading indicator of gc buffer busy problems that emerge under peak load.
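As an example of automating the packet-error check, here is a small parser for a /proc/net/dev-style counter line (the NIC name and counter values are made up; field positions follow the standard Linux /proc/net/dev layout):

```python
# Extract receive/transmit error counters for the private interconnect NIC
# from a /proc/net/dev-style line. After "iface:", the fields are:
# rx bytes, packets, errs, drop, fifo, frame, compressed, multicast,
# tx bytes, packets, errs, drop, fifo, colls, carrier, compressed.

def parse_net_dev_line(line):
    iface, counters = line.split(":", 1)
    fields = counters.split()
    return {
        "iface": iface.strip(),
        "rx_packets": int(fields[1]),
        "rx_errs": int(fields[2]),
        "rx_drop": int(fields[3]),
        "tx_packets": int(fields[9]),
        "tx_errs": int(fields[10]),
    }

# Example line in /proc/net/dev format (made-up counter values)
sample = "  eth2: 1847362514 9283746 12 4 0 0 0 0 998877665 8765432 0 0 0 0 0 0"
stats = parse_net_dev_line(sample)
err_rate = (stats["rx_errs"] + stats["rx_drop"]) / stats["rx_packets"]
print(stats["iface"], err_rate > 0)   # -> eth2 True
```

Sampling this once a minute and alerting on any increase in the error or drop counters catches interconnect degradation before it shows up as gc buffer busy waits.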
### 4. Consider Application-Level Partitioning for RAC

For applications with a natural partitioning key (customer_id, region, account range), route sessions to specific instances based on that key. This is "instance partitioning": Instance 1 owns customers A-M, Instance 2 owns N-Z. Cross-instance block transfers drop to nearly zero.
## Related Wait Events

- buffer busy waits — Single-instance equivalent; same hot block patterns, but local only
- db file sequential read — Single-block I/O; in RAC, compare gc block latency vs local physical read latency
- enq: TX - row lock contention — Row locks also coordinate across RAC instances; long-held locks contribute to cross-instance contention
- latch free — Hot blocks cause both latch contention and GC contention; address the underlying hot block for both