std::execution::sequenced_policy, std::execution::parallel_policy, std::execution::parallel_unsequenced_policy, std::execution::unsequenced_policy (3) - Linux Manuals
NAME
std::execution::sequenced_policy, std::execution::parallel_policy, std::execution::parallel_unsequenced_policy, std::execution::unsequenced_policy - execution policy types
Synopsis
Defined in header <execution>
class sequenced_policy;              (1) (since C++17)
class parallel_policy;               (2) (since C++17)
class parallel_unsequenced_policy;   (3) (since C++17)
class unsequenced_policy;            (4) (since C++20)
1) The execution policy type used as a unique type to disambiguate parallel
algorithm overloading and require that a parallel algorithm's execution may not be
parallelized. The invocations of element access functions in parallel algorithms
invoked with this policy are indeterminately sequenced in the calling thread.
2) The execution policy type used as a unique type to disambiguate parallel
algorithm overloading and indicate that a parallel algorithm's execution may be
parallelized. The invocations of element access functions in parallel algorithms
invoked with this policy are permitted to execute in either the invoking thread or
in a thread implicitly created by the library to support parallel algorithm
execution. Any such invocations executing in the same thread are indeterminately
sequenced with respect to each other.
3) The execution policy type used as a unique type to disambiguate parallel
algorithm overloading and indicate that a parallel algorithm's execution may be
parallelized, vectorized, or migrated across threads (such as by a parent-stealing
scheduler). The invocations of element access functions in parallel algorithms
invoked with this policy are permitted to execute in an unordered fashion in
unspecified threads, and unsequenced with respect to one another within each thread.
4) The execution policy type used as a unique type to disambiguate parallel
algorithm overloading and indicate that a parallel algorithm's execution may be
vectorized, e.g., executed on a single thread using instructions that operate on
multiple data items.
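In practice, a policy is selected by passing the corresponding global object (seq,
par, par_unseq, or unseq, listed under "See also" below) as the first argument of a
parallel algorithm. A minimal sketch (the element values are arbitrary):

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main()
    {
        std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};

        std::sort(v.begin(), v.end());                            // plain overload: sequential
        std::sort(std::execution::seq, v.begin(), v.end());       // sequenced_policy
        std::sort(std::execution::par, v.begin(), v.end());       // parallel_policy
        std::sort(std::execution::par_unseq, v.begin(), v.end()); // parallel_unsequenced_policy
        std::sort(std::execution::unseq, v.begin(), v.end());     // unsequenced_policy (C++20)
    }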
During the execution of a parallel algorithm with any of these execution policies,
if the invocation of an element access function exits via an uncaught exception,
std::terminate is called, but the implementations may define additional execution
policies that handle exceptions differently.
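As an illustrative sketch of this rule, an exception thrown from an element access
function under one of these policies is never propagated to the caller; the program
terminates instead:

    #include <algorithm>
    #include <execution>
    #include <stdexcept>
    #include <vector>

    int main()
    {
        std::vector<int> v(8, 1);
        try
        {
            std::for_each(std::execution::par, v.begin(), v.end(), [](int&)
            {
                throw std::runtime_error("oops"); // exits via an uncaught exception
            });
        }
        catch (const std::runtime_error&)
        {
            // never reached: std::terminate is called instead of unwinding
        }
    }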
Notes
When using parallel execution policy, it is the programmer's responsibility to avoid
data races and deadlocks:

    std::vector<int> v;
    int a[] = {0, 1};
    std::for_each(std::execution::par, std::begin(a), std::end(a), [&](int i)
    {
        v.push_back(i * 2 + 1); // Error: data race
    });

    std::atomic<int> x{0};
    std::for_each(std::execution::par, std::begin(a), std::end(a), [&](int)
    {
        x.fetch_add(1, std::memory_order_relaxed);
        while (x.load(std::memory_order_relaxed) == 1) {} // Error: assumes execution order
    });

    std::mutex m;
    int n = 0;
    std::for_each(std::execution::par, std::begin(a), std::end(a), [&](int)
    {
        std::lock_guard<std::mutex> guard(m);
        ++n; // OK: access protected by a mutex
    });
Unsequenced execution policies are the only case where function calls are
unsequenced with respect to each other, meaning they can be interleaved. In all
other situations in C++, they are indeterminately-sequenced (cannot interleave).
Because of that, users are not allowed to allocate or deallocate memory, acquire
mutexes, use non-lockfree std::atomic specializations, or, in general, perform any
vectorization-unsafe operations when using these policies (vectorization-unsafe
functions are the ones that synchronize-with another function, e.g.
std::mutex::unlock synchronizes-with the next std::mutex::lock):

    std::mutex m;
    int n = 0;
    int a[] = {1, 2};
    std::for_each(std::execution::par_unseq, std::begin(a), std::end(a), [&](int)
    {
        std::lock_guard<std::mutex> guard(m); // Error: lock_guard constructor calls m.lock(),
        ++n;                                  // which is vectorization-unsafe
    });
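A sketch of a vectorization-safe alternative, assuming each invocation only reads
and writes the element it was given, so no locking or other synchronization is
needed:

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main()
    {
        std::vector<int> v(1000, 1);
        std::for_each(std::execution::par_unseq, v.begin(), v.end(), [](int& n)
        {
            n = n * 2 + 1; // touches only its own element: no locks, no allocation
        });
    }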
If the implementation cannot parallelize or vectorize (e.g. due to lack of
resources), all standard execution policies can fall back to sequential execution.
See also
  seq        (C++17)
  par        (C++17)
  par_unseq  (C++17)
  unseq      (C++20)
  The global execution policy objects corresponding to the policy types above.