
gc buffer busy - Diagnose Oracle RAC Global Cache Waits

gc buffer busy acquire / gc buffer busy release


Wait Event Class: Cluster

Parameters: file#, block#, class#

gc buffer busy acquire and gc buffer busy release are RAC-specific wait events that occur when a session needs a block that is currently involved in a Global Cache Service (GCS) operation on the same or another instance. They are the RAC equivalents of single-instance buffer busy waits, extended across the entire cluster.

gc buffer busy acquire: A session is waiting to acquire a block that another session on the same instance is currently requesting from the Global Cache Service (GCS). The waiting session has found the buffer in the buffer cache but it is “busy” being transferred in from another instance.

gc buffer busy release: A session is waiting for a block that is in its local buffer cache but which another instance is requesting. The local block holder must flush or transfer the block to the requesting instance, and during this process the block is temporarily pinned.

Both events indicate that the same block is being accessed concurrently by multiple sessions — either within the same instance or across instances — and the inter-instance coordination overhead is creating serialization.

In RAC, every copy of a block in the global cache has a state:

  • Current (exclusive): One instance holds the current (possibly modified) copy
  • Consistent Read (shared): Multiple instances can hold CR copies for read consistency
  • Past Image (PI): A former holder of a dirty block retains an older copy until the current version is written to disk, for recovery purposes

When Instance 1 wants a block held by Instance 2:

  1. Instance 1 sends a request to the Global Cache Service (GCS master for that block)
  2. GCS contacts Instance 2 and requests a transfer or flush
  3. Instance 2 ships the block over the interconnect to Instance 1
  4. Instance 1 receives the block, places it in its buffer cache

The interconnect latency for this transfer (typically 0.1–1 ms over InfiniBand, 1–5 ms over Ethernet) accumulates into significant wait times when the same blocks are repeatedly transferred between instances.
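Oracle also breaks global cache transfer latency out by hop count in dedicated wait events. A quick check of 2-way versus 3-way transfer latency (a sketch; event names as in recent releases):

```sql
-- Compare 2-hop vs 3-hop block transfer latency
-- (average_wait is reported in centiseconds; * 10 converts to ms)
SELECT
    event,
    total_waits,
    ROUND(average_wait * 10, 2) AS avg_wait_ms
FROM v$system_event
WHERE event IN ('gc cr block 2-way',      'gc cr block 3-way',
                'gc current block 2-way', 'gc current block 3-way')
ORDER BY event;
```

3-way transfers add a hop through the GCS master, so their average latency should be modestly higher; a large gap between 2-way and 3-way latency points at interconnect or LMS process issues.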


Average wait time assessment:

  • < 1 ms — Excellent: low interconnect latency and minimal cross-instance conflict
  • 1–5 ms — Acceptable: typical for moderate cross-instance access
  • 5–10 ms — Investigate: interconnect contention or hot blocks shared across instances
  • > 10 ms — Problem: significant cross-instance contention or interconnect degradation

Some gc buffer busy waits are normal in any active RAC cluster. They become a concern when:

  • They appear consistently in AWR Top 5 wait events
  • Average wait time is trending upward without workload changes
  • A small number of specific blocks (file/block) account for the majority of waits
  • The interconnect utilization is high
  • Application sessions show degraded response time correlated with this wait
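As a rough first check without an AWR report, the Cluster wait class share of total non-idle wait time since instance startup can be computed from V$SYSTEM_EVENT (a sketch):

```sql
-- Percentage of non-idle wait time spent in Cluster-class (gc*) waits
SELECT
    ROUND(100 * SUM(CASE WHEN wait_class = 'Cluster' THEN time_waited ELSE 0 END)
              / NULLIF(SUM(time_waited), 0), 2) AS cluster_wait_pct
FROM v$system_event
WHERE wait_class <> 'Idle';
```

A sustained value of more than a few percent suggests the gc events would rank in the AWR Top 5.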

1. System-Wide GC Wait Statistics

-- Current instance GC wait statistics
SELECT
    event,
    total_waits,
    total_timeouts,
    ROUND(time_waited / 100, 2) AS total_secs,
    ROUND(average_wait * 10, 2) AS avg_wait_ms
FROM v$system_event
WHERE event LIKE 'gc%'
  AND total_waits > 0
ORDER BY time_waited DESC
FETCH FIRST 20 ROWS ONLY;

-- GCS statistics — block transfer efficiency
SELECT
    name,
    value
FROM v$sysstat
WHERE name IN (
    'gc cr blocks received',
    'gc cr block receive time',
    'gc current blocks received',
    'gc current block receive time',
    'gc cr blocks served',
    'gc current blocks served',
    'gc blocks lost',
    'gc blocks corrupt'
)
ORDER BY name;

-- Derive average block receive times
-- (receive-time statistics are in centiseconds; * 10 converts to milliseconds)
SELECT
    'CR Blocks' AS block_type,
    ROUND(SUM(CASE WHEN name = 'gc cr block receive time' THEN value END) * 10.0 /
          NULLIF(SUM(CASE WHEN name = 'gc cr blocks received' THEN value END), 0), 4)
        AS avg_receive_ms
FROM v$sysstat
WHERE name IN ('gc cr blocks received', 'gc cr block receive time')
UNION ALL
SELECT
    'Current Blocks' AS block_type,
    ROUND(SUM(CASE WHEN name = 'gc current block receive time' THEN value END) * 10.0 /
          NULLIF(SUM(CASE WHEN name = 'gc current blocks received' THEN value END), 0), 4)
FROM v$sysstat
WHERE name IN ('gc current blocks received', 'gc current block receive time');

2. Sessions Currently Waiting on GC Buffer Busy

-- Active sessions in gc buffer busy events
SELECT
    sw.sid,
    s.serial#,
    s.username,
    s.program,
    s.sql_id,
    s.inst_id,
    sw.event,
    sw.p1 AS file_number,
    sw.p2 AS block_number,
    sw.p3 AS class_number,
    sw.seconds_in_wait
FROM gv$session_wait sw
JOIN gv$session s ON sw.sid = s.sid
                 AND sw.inst_id = s.inst_id
WHERE sw.event IN ('gc buffer busy acquire', 'gc buffer busy release',
                   'gc cr request', 'gc current request')
ORDER BY sw.seconds_in_wait DESC;
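The file_number/block_number values reported above can be mapped to the owning segment through DBA_EXTENTS (a sketch; :file_number and :block_number are bind-variable placeholders, and the scan can be slow on databases with many extents):

```sql
-- Identify the segment that owns a given file#/block#
SELECT
    owner,
    segment_name,
    segment_type,
    tablespace_name
FROM dba_extents
WHERE file_id = :file_number
  AND :block_number BETWEEN block_id AND block_id + blocks - 1;
```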

-- Find the hot blocks causing GC waits from ASH across all instances
-- Requires Diagnostics Pack license
SELECT
    ash.inst_id,
    ash.current_file#,
    ash.current_block#,
    ash.current_obj#,
    o.object_name,
    o.object_type,
    o.owner,
    COUNT(*) AS ash_samples,
    COUNT(*) AS est_wait_secs  -- gv$active_session_history samples ~once per second
FROM gv$active_session_history ash
LEFT JOIN dba_objects o ON ash.current_obj# = o.object_id
WHERE ash.event IN ('gc buffer busy acquire', 'gc buffer busy release',
                    'gc cr request', 'gc current request',
                    'gc cr multi block request')
  AND ash.sample_time > SYSDATE - 1/24
GROUP BY ash.inst_id, ash.current_file#, ash.current_block#,
         ash.current_obj#, o.object_name, o.object_type, o.owner
ORDER BY ash_samples DESC
FETCH FIRST 25 ROWS ONLY;

-- RAC interconnect statistics
SELECT
    name,
    value
FROM v$sysstat
WHERE name LIKE 'ges%'
   OR name LIKE 'gcs%'
   OR name LIKE 'gc%blocks%'
ORDER BY name;

-- Check interconnect IPC configuration
SELECT
    name,
    ip_address,
    is_public,
    source
FROM v$cluster_interconnects;
-- Global cache block server statistics (blocks served to other instances)
SELECT * FROM v$cr_block_server;
SELECT * FROM v$current_block_server;

-- Segment statistics related to GC operations
SELECT
    inst_id,
    owner,
    object_name,
    object_type,
    statistic_name,
    value
FROM gv$segment_statistics
WHERE statistic_name IN (
    'gc cr blocks received',
    'gc current blocks received',
    'gc buffer busy',
    'ITL waits'
)
  AND value > 0
ORDER BY value DESC
FETCH FIRST 30 ROWS ONLY;

Common Causes

1. Hot Blocks Accessed Across Multiple Instances


The classic RAC contention pattern: a frequently-updated block (sequence cache block, segment header, popular index leaf block) is needed by sessions on multiple instances. Every cross-instance transfer pays the interconnect latency, and if many sessions on different instances simultaneously need the same block, they queue on gc buffer busy acquire/release.

Common hot block types in RAC:

  • Sequence blocks: NEXTVAL on a sequence with small cache requires frequent updates to the sequence header block, which is a highly contested current block across instances
  • Segment headers: INSERT operations on tables with MSSM (non-ASSM) need the segment header; all instances contend for it
  • Popular index leaf blocks: Right-edge inserts on sequence-based indexes
  • Hot rows: A status or counter column updated by many sessions across all instances

2. Poor Instance Affinity — No Service-Based Workload Routing


When all application sessions connect to all RAC instances without separation by workload type, every instance accesses every table. If two OLTP application servers connect to Instance 1 and Instance 2 interchangeably, they constantly cross-transfer blocks: once a session on Instance 1 updates a row, a session on Instance 2 must wait for the current block to be shipped across the interconnect before it can update even a different row in the same block.

Resolution: Use Oracle Services to route specific application modules to specific instances. Sessions working on the same data should be routed to the same instance.

3. Sequences Without Adequate Cache (or ORDER Sequences)


In RAC, sequences defined with ORDER guarantee globally sequential values across all instances by forcing every NEXTVAL to go to the sequence master instance. This makes NEXTVAL a cross-instance operation for non-master instances, generating gc current request and gc buffer busy waits on the sequence block.

Similarly, sequences with very small CACHE values (or NOCACHE) require frequent trips to the master instance.

4. Interconnect Latency or Bandwidth Saturation


If the private interconnect is congested (high utilization, packet loss, or high latency), all GC operations slow down. Even without hot block contention, sustained high interconnect utilization degrades gc cr request and gc current request latency, which feeds into gc buffer busy waits.

gc blocks lost in V$SYSSTAT indicates blocks that were in transit on the interconnect but never arrived at the requesting instance (the sender sent them, the receiver never got them). Lost blocks cause timeouts and retries, massively increasing GC wait times. This is usually an interconnect hardware or configuration issue.


Solutions

-- Create services that route specific workloads to specific instances;
-- this reduces cross-instance block sharing.
-- In RAC, services with preferred/available instances are created and
-- managed with srvctl (DBMS_SERVICE.CREATE_SERVICE has no instance
-- placement parameters; use srvctl on clusterware-managed databases):
--
-- $ srvctl add service -db <db_name> -service OLTP_INST1 -preferred INST1 -available INST2
-- $ srvctl start service -db <db_name> -service OLTP_INST1
--
-- $ srvctl add service -db <db_name> -service OLTP_INST2 -preferred INST2 -available INST1
-- $ srvctl start service -db <db_name> -service OLTP_INST2
-- Configure application connection pools to use instance-specific services
-- OLTP App Server 1 -> connects to OLTP_INST1
-- OLTP App Server 2 -> connects to OLTP_INST2
-- If application servers process the same customer data, route by customer partition

-- Check current sequence configuration
SELECT
    sequence_name,
    cache_size,
    order_flag,
    cycle_flag
FROM dba_sequences
WHERE sequence_owner = 'YOUR_SCHEMA'
ORDER BY sequence_name;
-- Fix 1: Increase cache size dramatically (most impactful)
ALTER SEQUENCE orders_seq CACHE 1000;
-- With CACHE 1000, each instance holds its own range of 1000 values in memory;
-- dictionary updates (and cross-instance traffic) drop in proportion to the cache size
-- Fix 2: Remove ORDER constraint if strict ordering is not required
-- NOTE: ORDER guarantees globally sequential values — only remove if truly not needed
ALTER SEQUENCE orders_seq NOORDER;
-- After removing ORDER, each instance generates from its own cache independently
-- Fix 3: If ORDER is truly required, consider application-level ID generation
-- (e.g., use instance ID as part of the ID to avoid cross-instance contention)
-- Migrate hot tables from MSSM to ASSM tablespace
-- ASSM eliminates segment header contention by using bitmap-based space management
-- Create new ASSM tablespace
CREATE TABLESPACE app_assm_tbsp
    DATAFILE '/data/app_assm01.dbf' SIZE 10G
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
-- Move hot table online (12.2+; earlier releases need DBMS_REDEFINITION)
ALTER TABLE hot_table MOVE ONLINE TABLESPACE app_assm_tbsp;
-- Rebuild indexes
ALTER INDEX hot_idx REBUILD ONLINE TABLESPACE idx_assm_tbsp;
-- Option 1: Hash partition the index to distribute inserts
CREATE INDEX idx_orders_pk_hash
    ON orders (order_id)
    GLOBAL PARTITION BY HASH (order_id) PARTITIONS 16;
-- 16 partitions = 16 separate rightmost leaf blocks instead of 1
-- Option 2: Increase sequence CACHE so each instance uses its own range
-- (Already covered in sequence tuning above)
-- Option 3: Redesign the PK to use a composite key that naturally distributes
-- e.g., (instance_id, local_sequence) instead of (global_sequence)
-- Check for lost blocks
SELECT name, value
FROM v$sysstat
WHERE name IN ('gc blocks lost', 'gc blocks corrupt', 'gc claim blocks lost');
-- If gc blocks lost > 0, investigate:
-- 1. Check interconnect health (OS-level: ifconfig, netstat, ping on private IPs)
-- 2. Check cluster alert log for interconnect errors
-- 3. Review /proc/net/dev on Linux for packet errors on interconnect NIC
-- Check interconnect configuration
SELECT * FROM v$cluster_interconnects;
-- Verify private IPs are reachable from each node:
-- $ ping -c 100 -i 0.1 <other_node_private_ip>
-- Look for packet loss

Prevention

1. Design for Data Affinity from the Start


Before deploying a RAC database, analyze the data access patterns of each application module. Group modules that access the same data onto the same service/instance. This is the most impactful RAC tuning step and is architectural — retrofitting it is difficult.

2. Use AWR RAC Reports for Cross-Instance Analysis

-- Generate AWR RAC-wide report (captures cross-instance wait data)
-- In SQL*Plus:
@?/rdbms/admin/awrgrpt.sql
-- Choose HTML format for easier viewing of RAC-specific sections
-- Key sections in AWR RAC report:
-- "Global Cache Load Profile" — block transfer rates
-- "Global Cache Efficiency Percentages" — hit rates
-- "Global Cache and Enqueue Services - Workload Characteristics"
-- "Top Global Cache Events" — gc* events ranked by impact

3. Monitor the Private Interconnect

Set up OS-level monitoring for the private interconnect NICs. Track:

  • Utilization (should stay below 60–70% for headroom)
  • Packet errors and retransmits
  • Latency (ping between nodes on private IPs)

High interconnect utilization is often a leading indicator of gc buffer busy problems that emerge under peak load.
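The packet-error check can be scripted. A minimal sketch for Linux nodes, parsing the standard /proc/net/dev layout (the field positions are the only assumption):

```shell
# Report per-interface RX/TX errors and drops from /proc/net/dev
# (first two lines of the file are column headers)
awk 'NR > 2 {
    sub(/:$/, "", $1)   # strip trailing colon from the interface name
    printf "%s rx_errs=%s rx_drop=%s tx_errs=%s tx_drop=%s\n", $1, $4, $5, $12, $13
}' /proc/net/dev
```

Run it periodically against the private interconnect NIC and alert on any increase in the error or drop counters.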

4. Consider Application-Level Partitioning for RAC


For applications with a natural partitioning key (customer_id, region, account range), route sessions to specific instances based on that key. This is “instance partitioning” — Instance 1 owns customers A-M, Instance 2 owns N-Z. Cross-instance block transfers become nearly zero.
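At the connection layer, this routing can be expressed as service-specific TNS aliases (a hypothetical sketch; the host name and the service names OLTP_INST1/OLTP_INST2 are illustrative and must match your environment):

```
OLTP_CUST_AM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OLTP_INST1))
  )

OLTP_CUST_NZ =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OLTP_INST2))
  )
```

Each application shard then connects only through the alias for the instance that owns its data range.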


Related Wait Events

  • buffer busy waits — Single-instance equivalent; same hot block patterns, but local only
  • db file sequential read — Single-block I/O; in RAC, compare gc block latency vs local physical read latency
  • enq: TX - row lock contention — Row locks also coordinate across RAC instances; long-held locks contribute to cross-instance contention
  • latch free — Hot blocks cause both latch contention and GC contention; address the underlying hot block for both