Class AdaptivePoolingAllocator.SizeClassedChunk

  • All Implemented Interfaces: ReferenceCounted
  • Enclosing class: AdaptivePoolingAllocator

    private static final class AdaptivePoolingAllocator.SizeClassedChunk
    extends AdaptivePoolingAllocator.Chunk
    Removes per-allocation retain()/release() atomic operations from the hot path by replacing reference counting with a segment-count state machine. Atomic RMWs are only needed on the cold deallocation path (markToDeallocate()), which is rare for long-lived chunks that cycle through their segments many times. The tradeoff is one MpscIntQueue.size() call (volatile reads, no RMW) per remaining segment return after the mark, which is acceptable since it keeps atomic RMWs off the hot path entirely.

    State transitions:

    • AVAILABLE (-1): chunk is in use, no deallocation tracking needed
    • 0..N: the local free-list size recorded when markToDeallocate() was called; used to detect when all segments have been returned
    • DEALLOCATED (Integer.MIN_VALUE): all segments returned, chunk deallocated
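    A minimal, single-file sketch of the state machine above. This is a simplified model, not the actual Netty implementation: the class name, the totalSegments field, a ConcurrentLinkedQueue standing in for MpscIntQueue, and the size()-equals-total completion check are all assumptions made for illustration.

    ```java
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical model of the AVAILABLE -> 0..N -> DEALLOCATED transitions.
    final class ChunkStateModel {
        static final int AVAILABLE = -1;
        static final int DEALLOCATED = Integer.MIN_VALUE;

        private final int totalSegments; // assumed: total segment count is known
        // Stands in for the MpscIntQueue of returned segment indices.
        private final ConcurrentLinkedQueue<Integer> freeList = new ConcurrentLinkedQueue<>();
        private final AtomicInteger state = new AtomicInteger(AVAILABLE);

        ChunkStateModel(int totalSegments) { this.totalSegments = totalSegments; }

        // Hot path while AVAILABLE: one volatile read of state, no atomic RMW.
        void releaseSegment(int segmentIndex) {
            freeList.offer(segmentIndex);           // publish the segment first
            if (state.get() >= 0 && freeList.size() == totalSegments) {
                tryDeallocate();                    // last segment returned after the mark
            }
        }

        // Cold path: snapshot the free-list size; if everything is already back,
        // deallocate now, otherwise the remaining segment returns finish the job.
        void markToDeallocate() {
            int snapshot = freeList.size();
            state.set(snapshot);
            if (snapshot == totalSegments) {
                tryDeallocate();
            }
        }

        private void tryDeallocate() {
            if (state.getAndSet(DEALLOCATED) != DEALLOCATED) {
                // perform the actual deallocation exactly once
            }
        }

        boolean isDeallocated() { return state.get() == DEALLOCATED; }
    }
    ```

    Note the design point the doc describes: before the mark, a returning segment only does a volatile read of state and bails out, so the common path pays no atomic read-modify-write.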

    Ordering: an external releaseSegment(int, int) pushes to the MPSC queue first (offer() carries an implicit StoreLoad barrier) and only then reads the state. This guarantees that any markToDeallocate() write preceding the push is visible to the releasing thread, so the final segment return cannot miss the mark.
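    The offer-then-read ordering can be sketched as follows. This is a hedged illustration, not Netty code: it assumes the queue's offer() ends with a StoreLoad barrier, modeled here explicitly with VarHandle.fullFence() since ConcurrentLinkedQueue.offer() does not by itself promise one.

    ```java
    import java.lang.invoke.VarHandle;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch of the release-side ordering described above.
    final class OrderingSketch {
        final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        final AtomicInteger state = new AtomicInteger(-1); // AVAILABLE

        // Publish the segment, then read state. The full (StoreLoad) fence keeps
        // the state read from being reordered before the offer, so a
        // markToDeallocate() that completed before our offer is guaranteed visible.
        int releaseAndReadState(int segment) {
            queue.offer(segment);
            VarHandle.fullFence(); // models the implicit barrier in MpscIntQueue.offer()
            return state.get();
        }
    }
    ```

    Without that barrier, the releasing thread could read a stale AVAILABLE state after the marking thread had already written its snapshot, and the last segment return would never trigger deallocation.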