This document gives an overview of the categories of memory-ordering
operations provided by the Linux-kernel memory model (LKMM).


Categories of Ordering
======================

This section lists LKMM's three top-level categories of memory-ordering
operations in decreasing order of strength:

1.	Barriers (also known as "fences").  A barrier orders some or
	all of the CPU's prior operations against some or all of its
	subsequent operations.

2.	Ordered memory accesses.  These operations order themselves
	against some or all of the CPU's prior accesses or some or all
	of the CPU's subsequent accesses, depending on the subcategory
	of the operation.

3.	Unordered accesses, as the name indicates, have no ordering
	properties except to the extent that they interact with an
	operation in one of the first two categories.  This being the
	real world, some of these "unordered" operations provide limited
	ordering in some special situations.

Each of the above categories is described in more detail by one of the
following sections.


Barriers
========

Each of the following categories of barriers is described in its own
subsection below:

a.	Full memory barriers.

b.	Read-modify-write (RMW) ordering augmentation barriers.

c.	Write memory barrier.

d.	Read memory barrier.

e.	Compiler barrier.

Note well that many of these primitives generate absolutely no code
in kernels built with CONFIG_SMP=n.  Therefore, if you are writing
a device driver, which must correctly order accesses to a physical
device even in kernels built with CONFIG_SMP=n, please use the
ordering primitives provided for that purpose.  For example, instead of
smp_mb(), use mb().  See the "Linux Kernel Device Drivers" book or the
https://lwn.net/Articles/698014/ article for more information.


Full Memory Barriers
--------------------

The Linux-kernel primitives that provide full ordering include:

o	The smp_mb() full memory barrier.

o	Value-returning RMW atomic operations whose names do not end in
	_acquire, _release, or _relaxed.

o	RCU's grace-period primitives.

First, the smp_mb() full memory barrier orders all of the CPU's prior
accesses against all subsequent accesses from the viewpoint of all CPUs.
In other words, all CPUs will agree that any earlier action taken
by that CPU happened before any later action taken by that same CPU.
For example, consider the following:

	WRITE_ONCE(x, 1);
	smp_mb(); // Order store to x before load from y.
	r1 = READ_ONCE(y);

All CPUs will agree that the store to "x" happened before the load
from "y", as indicated by the comment.  And yes, please comment your
memory-ordering primitives.  It is surprisingly hard to remember their
purpose after even a few months.

Second, some RMW atomic operations provide full ordering.  These
operations include value-returning RMW atomic operations (that is, those
with non-void return types) whose names do not end in _acquire, _release,
or _relaxed.  Examples include atomic_add_return(), atomic_dec_and_test(),
cmpxchg(), and xchg().  Note that conditional RMW atomic operations such
as cmpxchg() are only guaranteed to provide ordering when they succeed.
When RMW atomic operations provide full ordering, they partition the
CPU's accesses into three groups:

1.	All code that executed prior to the RMW atomic operation.

2.	The RMW atomic operation itself.

3.	All code that executed after the RMW atomic operation.

All CPUs will agree that any operation in a given partition happened
before any operation in a higher-numbered partition.
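
For example, consider the following sketch, which assumes that
"my_counter" is an atomic_t (as in the atomic_inc() examples later
in this document):

	WRITE_ONCE(x, 1); // First partition.
	r0 = atomic_add_return(1, &my_counter); // Second partition.
	r1 = READ_ONCE(y); // Third partition.

All CPUs will agree that the store to "x" happened before the
atomic_add_return(), which in turn happened before the load from "y".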

In contrast, non-value-returning RMW atomic operations (that is, those
with void return types) do not guarantee any ordering whatsoever.  Nor do
value-returning RMW atomic operations whose names end in _relaxed.
Examples of the former include atomic_inc() and atomic_dec(),
while examples of the latter include atomic_cmpxchg_relaxed() and
atomic_xchg_relaxed().  Similarly, value-returning non-RMW atomic
operations such as atomic_read() do not guarantee full ordering, and
are covered in the later section on unordered operations.

Value-returning RMW atomic operations whose names end in _acquire or
_release provide limited ordering, and will be described later in this
document.

Finally, RCU's grace-period primitives provide full ordering.  These
primitives include synchronize_rcu(), synchronize_rcu_expedited(),
synchronize_srcu() and so on.  However, these primitives have orders
of magnitude greater overhead than smp_mb(), atomic_xchg(), and so on.
Furthermore, RCU's grace-period primitives can only be invoked in
sleepable contexts.  Therefore, RCU's grace-period primitives are
typically instead used to provide ordering against RCU read-side critical
sections, as documented in their comment headers.  But of course if you
need a synchronize_rcu() to interact with readers, it costs you nothing
to also rely on its additional full-memory-barrier semantics.  Just please
carefully comment this; otherwise, your future self will hate you.
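
For example, here is one plausible sketch of an updater relying on
this full ordering:

	WRITE_ONCE(x, 1);
	synchronize_rcu(); // Full barrier, also waits for pre-existing readers.
	WRITE_ONCE(y, 1);

All CPUs will agree that the store to "x" happened before the store to
"y", and in addition synchronize_rcu() waits for any pre-existing RCU
read-side critical sections to complete.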


RMW Ordering Augmentation Barriers
----------------------------------

As noted in the previous section, non-value-returning RMW operations
such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
Nevertheless, a number of popular CPU families, including x86, provide
full ordering for these primitives.  One way to obtain full ordering on
all architectures is to add a call to smp_mb():

	WRITE_ONCE(x, 1);
	atomic_inc(&my_counter);
	smp_mb(); // Inefficient on x86!!!
	r1 = READ_ONCE(y);

This works, but the smp_mb() adds needless overhead on x86,
where atomic_inc() provides full ordering all by itself.
The smp_mb__after_atomic() primitive can be used instead:

	WRITE_ONCE(x, 1);
	atomic_inc(&my_counter);
	smp_mb__after_atomic(); // Order store to x before load from y.
	r1 = READ_ONCE(y);

The smp_mb__after_atomic() primitive emits code only on CPUs whose
atomic_inc() implementations do not guarantee full ordering, thus
incurring no unnecessary overhead on x86.  There are a number of
variations on the smp_mb__*() theme:

o	smp_mb__before_atomic(), which provides full ordering prior
	to an unordered RMW atomic operation, as shown in the sketch
	following this list.

o	smp_mb__after_atomic(), which, as shown above, provides full
	ordering subsequent to an unordered RMW atomic operation.

o	smp_mb__after_spinlock(), which provides full ordering subsequent
	to a successful spinlock acquisition.  Note that spin_lock() is
	always successful but spin_trylock() might not be.

o	smp_mb__after_srcu_read_unlock(), which provides full ordering
	subsequent to an srcu_read_unlock().
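
For example, here is a sketch using smp_mb__before_atomic(), mirroring
the smp_mb__after_atomic() example above:

	r1 = READ_ONCE(y);
	smp_mb__before_atomic(); // Order load from y before store to x.
	atomic_inc(&my_counter);
	WRITE_ONCE(x, 1);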

It is bad practice to place code between the smp_mb__*() primitive and
the operation whose ordering it is augmenting.  The reason is that the
ordering of this intervening code will differ from one CPU architecture
to another.
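
For example, in the following sketch, the ordering of the load from
"z" (a variable invented for this example) against the atomic_inc()
and the load from "y" varies from one architecture to another:

	WRITE_ONCE(x, 1);
	atomic_inc(&my_counter);
	r2 = READ_ONCE(z); // Bad practice: architecture-dependent ordering.
	smp_mb__after_atomic(); // Order store to x before load from y.
	r1 = READ_ONCE(y);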


Write Memory Barrier
--------------------

The Linux kernel's write memory barrier is smp_wmb().  If a CPU executes
the following code:

	WRITE_ONCE(x, 1);
	smp_wmb();
	WRITE_ONCE(y, 1);

Then any given CPU will see the write to "x" as having happened before
the write to "y".  However, you are usually better off using a release
store, as described in the "Release Operations" section below.

Note that smp_wmb() might fail to provide ordering for unmarked C-language
stores because profile-driven optimization could determine that the
value being overwritten is almost always equal to the new value.  Such a
compiler might then reasonably decide to transform "x = 1" and "y = 1"
as follows:

	if (x != 1)
		x = 1;
	smp_wmb(); // BUG: does not order the reads!!!
	if (y != 1)
		y = 1;

Therefore, if you need to use smp_wmb() with unmarked C-language writes,
you will need to make sure that none of the compilers used to build
the Linux kernel carry out this sort of transformation, both now and in
the future.


Read Memory Barrier
-------------------

The Linux kernel's read memory barrier is smp_rmb().  If a CPU executes
the following code:

	r0 = READ_ONCE(y);
	smp_rmb();
	r1 = READ_ONCE(x);

Then any given CPU will see the read from "y" as having preceded the
read from "x".  However, you are usually better off using an acquire
load, as described in the "Acquire Operations" section below.


Compiler Barrier
----------------

The Linux kernel's compiler barrier is barrier().  This primitive
prohibits compiler code-motion optimizations that might move memory
references across the point in the code containing the barrier(), but
does not constrain hardware memory ordering.  For example, this can be
used to prevent the compiler from moving code across an infinite loop:

	WRITE_ONCE(x, 1);
	while (dontstop)
		barrier();
	r1 = READ_ONCE(y);

Without the barrier(), the compiler would be within its rights to move the
WRITE_ONCE() to follow the loop.  This code motion could be problematic
in the case where an interrupt handler terminates the loop.  Another way
to handle this is to use READ_ONCE() for the load of "dontstop".
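
For example, here is a sketch of that READ_ONCE() alternative, which
relies on the fact that the compiler cannot reorder volatile accesses
with each other:

	WRITE_ONCE(x, 1);
	while (READ_ONCE(dontstop))
		continue;
	r1 = READ_ONCE(y);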

Note that the barriers discussed previously use barrier() or its low-level
equivalent in their implementations.


Ordered Memory Accesses
=======================

The Linux kernel provides a wide variety of ordered memory accesses:

a.	Release operations.

b.	Acquire operations.

c.	RCU read-side ordering.

d.	Control dependencies.

Each of the above categories has its own section below.


Release Operations
------------------

Release operations include smp_store_release(), atomic_set_release(),
rcu_assign_pointer(), and value-returning RMW operations whose names
end in _release.  These operations order their own store against all
of the CPU's prior memory accesses.  Release operations often provide
improved readability and performance compared to explicit barriers.
For example, use of smp_store_release() saves a line compared to the
smp_wmb() example above:

	WRITE_ONCE(x, 1);
	smp_store_release(&y, 1);

More important, smp_store_release() makes it easier to connect up the
different pieces of the concurrent algorithm.  The variable stored to
by the smp_store_release(), in this case "y", will normally be used in
an acquire operation in other parts of the concurrent algorithm.
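
For example, here is a sketch of an acquire side that might be paired
with the smp_store_release() above, running on some other CPU:

	r1 = smp_load_acquire(&y);
	r2 = READ_ONCE(x);

If the load from "y" returns 1, this release-acquire pairing guarantees
that the load from "x" will also return 1.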

To see the performance advantages, suppose that the above example read
from "x" instead of writing to it.  Then an smp_wmb() could not guarantee
ordering, and an smp_mb() would be needed instead:

	r1 = READ_ONCE(x);
	smp_mb();
	WRITE_ONCE(y, 1);

But smp_mb() often incurs much higher overhead than does
smp_store_release(), which still provides the needed ordering of "x"
against "y".  On x86, the version using smp_store_release() might compile
to a simple load instruction followed by a simple store instruction.
In contrast, the smp_mb() compiles to an expensive instruction that
provides the needed ordering.
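
For comparison, here is the smp_store_release() version of that example:

	r1 = READ_ONCE(x);
	smp_store_release(&y, 1); // Order load from x before store to y.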

There is a wide variety of release operations:

o	Store operations, including not only the aforementioned
	smp_store_release(), but also atomic_set_release() and
	atomic_long_set_release().

o	RCU's rcu_assign_pointer() operation.  This is the same as
	smp_store_release() except that: (1) It takes the pointer to
	be assigned to instead of a pointer to that pointer, (2) It
	is intended to be used in conjunction with rcu_dereference()
	and similar rather than smp_load_acquire(), and (3) It checks
	for an RCU-protected pointer in "sparse" runs.