Multithreading in C++
(Partly from https://www.modernescpp.com/index.php/multithreading-in-modern-c/)
The C++11 standard, published in 2011, defines how a C++ program must behave in the presence of multiple threads. These capabilities are composed of two components:
- the memory model, and
- the standardized threading interface.
The headers supporting atomics and threading are:
- <atomic>
- <thread>
- <mutex>
- <condition_variable>
- <future>
Working with Threads
(Heavily from https://www.geeksforgeeks.org/cpp/multithreading-in-cpp/)
The <thread> header in C++ provides a simple and powerful interface for managing threads. Below are some of the most common operations performed on threads:
Create a Thread
The std::thread class represents a single thread of execution. Instantiating an instance of this class creates a thread that runs the given callable as its task:

thread thread_name(callable);

where,
- thread_name is an object of the thread class, and
- callable is a callable object such as a function pointer, function object, or a lambda.
Example:
#include <iostream>
#include <thread>
using namespace std;

// Function to be run by the thread
void func() {
    cout << "Hello from the thread!" << endl;
}

int main() {
    // Create a thread that runs the function func
    thread t(func);

    // Main thread waits for 't' to finish
    t.join();

    cout << "Main thread finished.";
    return 0;
}
Output
Hello from the thread!
Main thread finished.
Explanation: The program creates a thread t that prints Hello from the thread! and joins it with the main thread, so the main thread waits for its completion. Once t has finished, the main thread resumes execution and prints Main thread finished.
Joining a Thread
Before joining a thread, it is preferred to check whether the thread can be joined using the joinable() method, which checks whether the thread is in a state suitable for joining:

thread_name.joinable();

The joinable() method returns true if the thread is joinable, and false otherwise.
Joining a thread blocks the current thread until the thread associated with the std::thread object finishes execution. To join a thread, we call the join() function on its std::thread object from the thread that wants to wait for it (often the main thread):
thread_name.join();
The join() function throws std::system_error if the thread is not joinable.
Note: Joining one non-main thread from another is risky, as careless join ordering may lead to race conditions or logic errors.
Detaching a Thread
A joinable thread can be detached from the calling thread using the std::thread::detach() member function. When a thread is detached, it runs independently in the background, and the other thread does not wait for it to finish.
thread_name.detach();
Getting Thread ID
Each thread in C++ has a unique ID, which can be obtained using the get_id() function:

thread_name.get_id();

The get_id() function returns a std::thread::id object representing the thread's ID.
Example Program Using the Operations Above
#include <iostream>
#include <thread>
#include <chrono>
using namespace std;

void task1() {
    cout << "Thread 1 is running. ID: " << this_thread::get_id() << "\n";
}

void task2() {
    cout << "Thread 2 is running. ID: " << this_thread::get_id() << "\n";
}

int main() {
    thread t1(task1);
    thread t2(task2);

    // Show thread IDs
    cout << "t1 ID: " << t1.get_id() << "\n";
    cout << "t2 ID: " << t2.get_id() << "\n";

    // Join t1 if joinable
    if (t1.joinable()) {
        t1.join();
        cout << "t1 joined\n";
    }

    // Detach t2
    if (t2.joinable()) {
        t2.detach();
        cout << "t2 detached\n";
    }

    cout << "Main thread sleeping for 1 second...\n";
    this_thread::sleep_for(chrono::seconds(1));
    cout << "Main thread awake.\n";
    return 0;
}
The std::this_thread Namespace
This namespace groups a set of functions that access the current thread.
get_id

thread::id get_id() noexcept;

Returns the thread id of the calling thread. This value uniquely identifies the thread.

// thread::get_id / this_thread::get_id
#include <iostream>       // std::cout
#include <thread>         // std::thread, std::thread::id, std::this_thread::get_id
#include <chrono>         // std::chrono::seconds

std::thread::id main_thread_id = std::this_thread::get_id();

void is_main_thread() {
  if (main_thread_id == std::this_thread::get_id())
    std::cout << "This is the main thread.\n";
  else
    std::cout << "This is not the main thread.\n";
}

int main() {
  is_main_thread();
  std::thread th(is_main_thread);
  th.join();
}

Output:
This is the main thread.
This is not the main thread.
yield

void yield() noexcept;

Yield to other threads. The calling thread yields, offering the implementation the opportunity to reschedule. This function shall be called when a thread waits for other threads to advance without blocking.

// this_thread::yield example
#include <iostream>       // std::cout
#include <thread>         // std::thread, std::this_thread::yield
#include <atomic>         // std::atomic

std::atomic<bool> ready (false);

void count1m(int id) {
  while (!ready) {        // wait until main() sets ready...
    std::this_thread::yield();
  }
  for (volatile int i=0; i<1000000; ++i) {}
  std::cout << id;
}

int main () {
  std::thread threads[10];
  std::cout << "race of 10 threads that count to 1 million:\n";
  for (int i=0; i<10; ++i) threads[i] = std::thread(count1m, i);
  ready = true;           // go!
  for (auto& th : threads) th.join();
  std::cout << '\n';
  return 0;
}

Possible output (last line may vary):
race of 10 threads that count to 1 million:
6189370542
sleep_until

template <class Clock, class Duration>
void sleep_until (const chrono::time_point<Clock,Duration>& abs_time);

Blocks the calling thread until abs_time is reached. Other threads continue their execution.
sleep_for

template <class Rep, class Period>
void sleep_for (const chrono::duration<Rep,Period>& rel_time);

The execution of the current thread is stopped until at least rel_time has passed from now. Other threads continue their execution.

Example:
// this_thread::sleep_for example
#include <iostream>       // std::cout, std::endl
#include <thread>         // std::this_thread::sleep_for
#include <chrono>         // std::chrono::seconds

int main() {
  std::cout << "countdown:\n";
  for (int i=10; i>0; --i) {
    std::cout << i << std::endl;
    std::this_thread::sleep_for (std::chrono::seconds(1));
  }
  std::cout << "Lift off!\n";
  return 0;
}

Output (after 10 seconds):
countdown:
10
9
8
7
6
5
4
3
2
1
Lift off!
Callables in Multithreading
A callable (such as a function, lambda, or function object) is passed to the thread constructor and is executed by the new thread as soon as it starts. For instance, thread t(func); creates a thread that runs the callable func.
Moreover, we can also pass parameters along with the callable, like this: thread t(func, param1, param2);
There are four categories of Callables in C++:
Function Pointer
A function can be a callable object to pass to the thread constructor for initializing a thread.
#include <iostream>
#include <thread>
using namespace std;

// Function to be run by the thread
void func(int n) {
    cout << n;
}

int main() {
    // Create a thread that runs the function func:
    thread t(func, 4);

    // Wait for thread to finish
    t.join();
    return 0;
}
Output
4
Lambda Expression
A thread object can also be initialized with a lambda expression as a callable. This can be passed directly inside the thread object:
#include <iostream>
#include <thread>
using namespace std;

int main() {
    int n = 3;

    // Create a thread that runs a lambda expression
    thread t([](int n) { cout << n; }, n);

    // Wait for the thread to complete
    t.join();
    return 0;
}
Output
3
Function Objects
Function objects (functors) can also be passed to a thread as a callable. To make a class callable, we overload its function-call operator, operator().
#include <iostream>
#include <thread>
using namespace std;

// Define a function object (functor)
class SumFunctor {
public:
    int n;
    SumFunctor(int a) : n(a) {}

    // Overload operator() to make it callable
    void operator()() const {
        cout << n;
    }
};

int main() {
    // Create a thread using the functor object
    thread t(SumFunctor(3));

    // Wait for the thread to complete
    t.join();
    return 0;
}
Output
3
Non-Static and Static Member Function
We can also create a thread using a non-static or static member function of a class. For a non-static member function, we need to pass an object of the class; this is not necessary for static member functions.
#include <iostream>
#include <thread>
using namespace std;

class MyClass {
public:
    // Non-static member function
    void f1(int num) {
        cout << num << endl;
    }

    // Static member function that takes one parameter
    static void f2(int num) {
        cout << num;
    }
};

int main() {
    // Non-static member functions require an object
    MyClass obj;

    // Passing pointer to member, object, and parameter
    thread t1(&MyClass::f1, &obj, 3);
    t1.join();

    // Static member function can be called without an object
    thread t2(&MyClass::f2, 7);

    // Wait for the thread to finish
    t2.join();
    return 0;
}
Output
3
7
Thread Management
In the C++ thread library, various classes and functions are defined to manage threads that can be used to perform multiple tasks. Some of them are listed below:
Class/Function | Description |
---|---
join() | It ensures that the calling thread waits for the specified thread to complete its execution. |
detach() | Allows the thread to run independently of the main thread, meaning the main thread does not need to wait. |
mutex | A mutex is used to protect shared data between threads to prevent data races and ensure synchronization. |
lock_guard | A wrapper for mutexes that automatically locks and unlocks the mutex in a scoped block. |
condition_variable | Used to synchronize threads, allowing one thread to wait for a condition before proceeding. |
atomic | Manages shared variables between threads in a thread-safe manner without using locks. |
sleep_for() | Pauses the execution of the current thread for a specified duration. |
sleep_until() | Pauses the execution of the current thread until a specified time point is reached. |
hardware_concurrency() | Returns the number of hardware threads available for use, allowing you to optimize the use of system resources. |
get_id() | Retrieves the unique ID of the current thread, useful for logging or debugging purposes. |
std::mutex
std::lock_guard
lock_guard is a simple class used to manage the locking and unlocking of a mutex. Its main purpose is to automatically lock a mutex when it is created and automatically unlock it when the lock_guard object goes out of scope. Following is the syntax to use lock_guard in C++:
lock_guard<mutex> name(myMutex);
where,
- name is the name assigned to the lock_guard object, and
- myMutex is the mutex object to lock.
Example. The following program illustrates the use of lock_guard in C++:
// C++ Program using std::lock_guard
#include <mutex>
#include <thread>
#include <iostream>
using namespace std;

// Global mutex to protect shared_data
mutex mtx;

// Shared data variable
int shared_data = 0;

// Function to increment shared_data
void increment_data() {
    // Create a lock_guard object which locks the mutex
    lock_guard<mutex> lock(mtx);

    // Critical section: safely modify shared_data
    shared_data += 2;

    // Lock is automatically released when 'lock' goes out of scope
}

int main() {
    // Create two threads that run the increment_data function
    thread t1(increment_data);
    thread t2(increment_data);

    // Wait for both threads to finish
    t1.join();
    t2.join();

    // Output the value of shared_data
    cout << "Value of shared variable: " << shared_data;
    return 0;
}
Output
Value of shared variable: 4
Some key features of lock_guard:
- Simplicity: lock_guard is very simple to use, with minimal overhead.
- RAII (Resource Acquisition Is Initialization): Ensures that the mutex is released when the lock_guard goes out of scope.
- No Unlocking: Does not support manual unlocking before the end of its scope.
These are the use cases when you should consider using lock_guard:
- You need a simple lock that automatically unlocks when the scope ends, and the locking operation is straightforward, with no need to unlock before the scope ends.
- You prioritize minimal overhead and simplicity.
std::unique_lock
unique_lock offers more flexibility than lock_guard. It provides features like manual locking and unlocking, deferred locking, timed locking, and ownership transfer. Unlike lock_guard, which always locks on construction and unlocks on destruction, unique_lock can defer locking and supports explicit lock() and unlock() calls.
The following is the syntax to use unique_lock in C++:

unique_lock<mutex> name(myMutex, lockingBehavior);

where,
- name is the name assigned to the unique_lock object,
- myMutex is the mutex object to manage, and
- lockingBehavior (optional) is a tag such as defer_lock, try_to_lock, or adopt_lock that determines how the mutex is locked and managed.
Example. The following program illustrates the use of unique_lock in C++:
// C++ Program using std::unique_lock
#include <mutex>
#include <thread>
#include <iostream>
using namespace std;

// Global mutex to protect shared_data
mutex mtx;

// Shared data variable
int shared_data = 0;

// Function to increment shared_data
void increment_data() {
    // Create a unique_lock object, but defer locking the mutex
    unique_lock<mutex> ulck(mtx, defer_lock);

    // Explicitly acquire the lock
    ulck.lock();

    // Critical section: safely modify shared_data:
    shared_data += 2;

    // Manually release the lock:
    ulck.unlock();
}

int main() {
    // Create two threads that run the increment_data function
    thread t1(increment_data);
    thread t2(increment_data);

    // Wait for both threads to finish
    t1.join();
    t2.join();

    // Output the value of shared_data
    cout << "Value of shared variable: " << shared_data;
    return 0;
}
Output
Value of shared variable: 4
These are the key features of unique_lock:
- Flexibility: can lock and unlock multiple times within its scope.
- Deferred Locking: can be constructed without locking the mutex immediately.
- Timed Locking: supports timed and try-locking operations.
- Ownership Transfer: allows transferring mutex ownership to another unique_lock.
These are some use cases when you should consider using unique_lock:
- You need more control over the locking mechanism, including the ability to lock and unlock manually.
- You need to defer locking or conditionally lock a mutex.
- You require timed locking to avoid blocking indefinitely.
- You need to transfer lock ownership between different scopes or threads.
std::atomic
(From https://cplusplus.com/reference/atomic/)
Atomic types are types that encapsulate a value whose access is guaranteed to not cause data races and can be used to synchronize memory accesses among different threads.
This header declares two C++ classes, atomic and atomic_flag, that implement all the features of atomic types in self-contained classes. The header also declares an entire set of C-style types and functions compatible with the atomic support in C.
Class std::atomic
template <class T> struct atomic;
The main characteristic of atomic objects is that access to this contained value from different threads cannot cause data races (i.e., doing that is well-defined behavior, with accesses properly sequenced). Generally, for all other objects, the possibility of causing a data race for accessing the same object concurrently qualifies the operation as undefined behavior.
Additionally, atomic objects have the ability to synchronize access to other non-atomic objects in their threads by specifying different memory orders.
(constructor)

Constructs an atomic object, optionally initializing the contained value.

operator=

T operator= (T val) noexcept;
T operator= (T val) volatile noexcept;
atomic& operator= (const atomic&) = delete;
atomic& operator= (const atomic&) volatile = delete;

Replaces the stored value by val. This operation is atomic and uses sequential consistency (memory_order_seq_cst); to modify the value with a different memory ordering, see atomic::store.
std::atomic::is_lock_free()

bool is_lock_free() const volatile noexcept;
bool is_lock_free() const noexcept;

Returns whether the object is lock-free. A lock-free object does not cause other threads to be blocked when accessed (possibly using some sort of transactional memory for the type).
store

void store (T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
void store (T val, memory_order sync = memory_order_seq_cst) noexcept;

Modifies (stores) the contained value with val.
load

T load (memory_order sync = memory_order_seq_cst) const volatile noexcept;
T load (memory_order sync = memory_order_seq_cst) const noexcept;

Reads and returns the contained value.
operator T

operator T() const volatile noexcept;
operator T() const noexcept;

Returns the contained value. This is a type-cast operator: evaluating an atomic object in an expression that expects a value of its contained type (T) calls this member function, accessing the contained value. This operation is atomic and uses sequential consistency (memory_order_seq_cst). To retrieve the value with a different memory ordering, see atomic::load.
exchange

T exchange (T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
T exchange (T val, memory_order sync = memory_order_seq_cst) noexcept;

Replaces the contained value by val and returns the value it had immediately before. The entire operation is atomic (an atomic read-modify-write operation): the value is not affected by other threads between the instant it is read (to be returned) and the moment it is modified by this function.
compare_exchange_weak

Compare and exchange contained value (weak):

bool compare_exchange_weak (T& expected, T val, memory_order sync = memory_order_seq_cst) volatile noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order sync = memory_order_seq_cst) noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order success, memory_order failure) volatile noexcept;
bool compare_exchange_weak (T& expected, T val, memory_order success, memory_order failure) noexcept;

Compares the contents of the atomic object's contained value with expected:
- if true, it replaces the contained value with val (like store) and returns true;
- if false, it replaces expected with the contained value and returns false.

The function always accesses the contained value to read it, and, if the comparison is true, it then also replaces it. But the entire operation is atomic: the value cannot be modified by other threads between the instant its value is read and the moment it is replaced.

The memory order used in the second set of prototypes depends on the result of the comparison: if true, it uses success; if false, it uses failure.

Note that this function compares directly the physical contents of the contained value with the contents of expected; this may result in failed comparisons for values that compare equal using operator== (such as representations with padding bits).

Unlike compare_exchange_strong, this weak version is allowed to fail spuriously: it may return false even when expected compares equal to the contained value. For certain loop-based algorithms this may yield better performance on some platforms. For non-looping algorithms, compare_exchange_strong is generally preferred.
compare_exchange_strong

Compare and exchange contained value (strong). The operation is completely analogous to its twin compare_exchange_weak, except that the strong version never fails spuriously: it returns false only if the contained value does not compare equal to expected. However, on certain machines, and for certain algorithms that check this in a loop, compare_exchange_weak may lead to significantly better performance.
Template Specializations of Class std::atomic<T>
The atomic class template is fully specialized for all fundamental integral types (except bool), and any extended integral types needed for the typedefs in <cstdint>.
Besides, aliases such as typedef atomic<char> atomic_char; are provided; that is, the prefix atomic_ is added to the type name.
These specializations have the additional member functions listed below.
Each of these functions accesses the contained value, applies the proper operator, and returns the value the contained value had immediately before the operation; all in a single atomic operation that cannot be affected by other threads.
atomic::fetch_add
atomic::fetch_sub
atomic::fetch_and
atomic::fetch_or
atomic::fetch_xor
atomic::operator++
atomic::operator--
operator+=, operator-=, operator&=, operator|=, operator^=

T operator+= (T val) volatile noexcept;
T operator+= (T val) noexcept;
T operator-= (T val) volatile noexcept;
T operator-= (T val) noexcept;
T operator&= (T val) volatile noexcept;
T operator&= (T val) noexcept;
T operator|= (T val) volatile noexcept;
T operator|= (T val) noexcept;
T operator^= (T val) volatile noexcept;
T operator^= (T val) noexcept;

For the pointer specializations:

T operator+= (ptrdiff_t val) volatile noexcept;
T operator+= (ptrdiff_t val) noexcept;
T operator-= (ptrdiff_t val) volatile noexcept;
T operator-= (ptrdiff_t val) noexcept;
Global Functions
kill_dependency | Kill dependency |
atomic_thread_fence | Thread fence |
atomic_signal_fence | Signal fence |
atomic_is_lock_free | Is lock-free |
atomic_init | Initialize atomic object |
atomic_store | Modify contained value |
atomic_store_explicit | Modify contained value (explicit memory order) |
atomic_load | Read contained value |
atomic_load_explicit | Read contained value (explicit memory order) |
atomic_exchange | Read and modify contained value |
atomic_exchange_explicit | Read and modify contained value (explicit memory order) |
atomic_compare_exchange_weak | Compare and exchange contained value (weak) |
atomic_compare_exchange_weak_explicit | Compare and exchange contained value (weak, explicit) |
atomic_compare_exchange_strong | Compare and exchange contained value (strong) |
atomic_compare_exchange_strong_explicit | Compare and exchange contained value (strong, explicit) |
atomic_fetch_add | Add to contained value |
atomic_fetch_add_explicit | Add to contained value (explicit memory order) |
atomic_fetch_sub | Subtract from contained value |
atomic_fetch_sub_explicit | Subtract from contained value (explicit memory order) |
atomic_fetch_and | Apply bitwise AND to contained value |
atomic_fetch_and_explicit | Apply bitwise AND to contained value (explicit memory order) |
atomic_fetch_or | Apply bitwise OR to contained value |
atomic_fetch_or_explicit | Apply bitwise OR to contained value (explicit memory order) |
atomic_fetch_xor | Apply bitwise XOR to contained value |
atomic_fetch_xor_explicit | Apply bitwise XOR to contained value (explicit memory order) |
atomic_flag_test_and_set | Test and set atomic flag |
atomic_flag_test_and_set_explicit | Test and set atomic flag (explicit memory order) |
atomic_flag_clear | Clear atomic flag |
atomic_flag_clear_explicit | Clear atomic flag (explicit memory order) |
ATOMIC_VAR_INIT | Initialization of atomic variable (macro) |
ATOMIC_FLAG_INIT | Initialization of atomic flag (macro) |
struct atomic_flag
Atomic flags are boolean atomic objects that support two operations: test-and-set and clear.
Atomic flags are lock-free (this is the only type guaranteed to be lock-free on all library implementations).
Member Functions
(constructor) | Construct atomic flag |
test_and_set | Test and set flag |
clear | Clear flag |
Example:
// using atomic_flag as a lock
#include <iostream>       // std::cout
#include <atomic>         // std::atomic_flag
#include <thread>         // std::thread
#include <vector>         // std::vector
#include <sstream>        // std::stringstream

std::atomic_flag lock_stream = ATOMIC_FLAG_INIT;
std::stringstream stream;

void append_number(int x) {
  while (lock_stream.test_and_set()) {}   // spin until the flag is acquired
  stream << "thread #" << x << '\n';
  lock_stream.clear();
}

int main () {
  std::vector<std::thread> threads;
  for (int i=1; i<=10; ++i) threads.push_back(std::thread(append_number, i));
  for (auto& th : threads) th.join();
  std::cout << stream.str();
  return 0;
}
Possible output (order of lines may vary):
thread #1
thread #2
thread #3
thread #4
thread #5
thread #6
thread #7
thread #8
thread #9
thread #10
Memory Ordering with memory_order
Used as an argument to functions that conduct atomic operations to specify how other operations on different threads are synchronized.
It is defined as:
typedef enum memory_order {
    memory_order_relaxed,   // relaxed
    memory_order_consume,   // consume
    memory_order_acquire,   // acquire
    memory_order_release,   // release
    memory_order_acq_rel,   // acquire/release
    memory_order_seq_cst    // sequentially consistent
} memory_order;
All atomic operations produce well-defined behavior with respect to an atomic object when multiple threads access it: each atomic operation is entirely performed on the object before any other atomic operation can access it. This guarantees no data races on these objects, and this is precisely the feature that defines atomicity.
Still, each thread may perform operations on memory locations other than the atomic object itself, and these other operations may produce visible side effects on other threads. Arguments of this type make it possible to specify a memory order for the operation that determines how these (possibly non-atomic) visible side effects are synchronized among threads, using the atomic operations as synchronization points:
memory_order_relaxed
The operation is ordered to happen atomically at some point. This is the loosest memory order, providing no guarantees on how memory accesses in different threads are ordered with respect to the atomic operation.

memory_order_consume
[Applies to loading operations] The operation is ordered to happen once all accesses to memory in the releasing thread that carry a dependency on the releasing operation (and that have visible side effects on the loading thread) have happened.

memory_order_acquire
[Applies to loading operations] The operation is ordered to happen once all accesses to memory in the releasing thread (that have visible side effects on the loading thread) have happened.

memory_order_release
[Applies to storing operations] The operation is ordered to happen before a consume or acquire operation, serving as a synchronization point for other accesses to memory that may have visible side effects on the loading thread.

memory_order_acq_rel
[Applies to loading/storing operations] The operation loads acquiring and stores releasing (as defined above for memory_order_acquire and memory_order_release).

memory_order_seq_cst
The operation is ordered in a sequentially consistent manner: all operations using this memory order are ordered to happen once all accesses to memory that may have visible side effects on the other threads involved have already happened. This is the strictest memory order, guaranteeing the fewest unexpected side effects in thread interactions through non-atomic memory accesses. For consume and acquire loads, sequentially consistent store operations are considered releasing operations.
Problems with Multithreading
Multithreading improves performance and CPU utilization, but it is also subject to several problems:
- Deadlock: A deadlock occurs when two or more threads are blocked forever because each is waiting for shared resources that the others hold. This creates a cycle of waiting, and none of the threads can make progress.
- Race Conditions: A race condition occurs when two or more threads access a shared resource at the same time and at least one of them modifies it. Since the threads compete to read and write the data, the final result depends on the order in which the threads execute, leading to unpredictable or incorrect results.
- Starvation: Starvation occurs when a thread is continuously unable to access shared resources because other threads enjoy a higher priority, effectively preventing it from executing and making progress.
Thread Synchronization
In multithreading, synchronization is the way to control the access of multiple threads to shared resources, ensuring that only one thread can access a resource at a time to prevent data corruption or inconsistency. This is typically done using tools like mutexes, locks, and condition variables.
Mutexes with std::mutex
A mutex is a synchronization primitive that blocks access to a shared resource while another thread is already accessing it.
An example where a mutex is used to achieve synchronization:
// C++ program to illustrate the use of mutex locks to
// synchronize the threads
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

// data shared and lockable:
double val = 0;

// mutex:
mutex m;

int cnt = 0;

void add(double num) {
    m.lock();
    val += num;
    cnt++;
    cout << "Thread " << cnt << ": " << val << endl;
    m.unlock();
}

// driver code
int main() {
    thread t1(add, 300);
    thread t2(add, 600);

    t1.join();
    t2.join();

    cout << "After addition : " << val << endl;
    return 0;
}
Output:
Thread 1: 300
Thread 2: 900
After addition : 900
or
Thread 1: 600
Thread 2: 900
After addition : 900
Note: With the mutex in place, the above code can produce either of the two outputs shown. The mutex prevents both threads from being inside add() at the same time, but either t1 or t2 may enter add() first, so the intermediate output varies accordingly.
Condition Variables with std::condition_variable
The condition variable is another synchronization primitive. It is mainly used to notify threads about the state of shared data. It is used together with a mutex so that threads can automatically wait for, and notify each other about, the state of a resource.
Example:
// C++ program to illustrate the use of condition variable
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

// condition variable and mutex to be locked
condition_variable cv;
mutex m;

// shared resource
int val = 0;

void add(int num) {
    lock_guard<mutex> lock(m);
    val += num;
    cout << "After addition: " << val << endl;
    cv.notify_one();
}

void sub(int num) {
    unique_lock<mutex> ulock(m);
    cv.wait(ulock, [] { return val != 0; });
    if (val >= num) {
        val -= num;
        cout << "After subtraction: " << val << endl;
    } else {
        cout << "Cannot Subtract now!" << endl;
    }
    cout << "Total number Now: " << val << endl;
}

// driver code:
int main() {
    thread t2(sub, 600);
    thread t1(add, 900);

    t1.join();
    t2.join();
    return 0;
}
Output
After addition: 900
After subtraction: 300
Total number Now: 300
Explanation: In the foregoing program we create two threads, intending addition to be performed first, followed by subtraction. Note, however, that thread t2 is started before t1. Assuming t2 reaches the sub() function first, it locks the mutex and checks the condition val != 0. Since val is initially 0, the predicate returns false, so cv.wait() releases the mutex and blocks until the condition becomes true. With the mutex released, the addition is performed in add(), after which notify_one() wakes the waiting thread, which reacquires the lock and rechecks the condition. This way the process continues.
One of the best use cases of a condition variable is the Producer-Consumer Problem.
Promises and Futures with std::future and std::promise
The std::future and std::promise are used to return data from a task executed on another thread. The std::promise is used to send the data, and the std::future is used to receive it in the launching thread. The std::future::get() method retrieves the data returned by the task and blocks the current thread until the value is available.
This method is generally preferred over the condition variable when we only want the task to be executed once.
Example
// C++ program to illustrate the use of std::future and
// std::promise in thread synchronization.
#include <future>
#include <iostream>
#include <thread>
using namespace std;

// callable:
void EvenNosFind(promise<int>&& EvenPromise, int begin, int end) {
    int evenNo = 0;
    for (int i = begin; i <= end; i++) {
        if (i % 2 == 0) {
            evenNo += 1;
        }
    }
    EvenPromise.set_value(evenNo);
}

// driver code
int main() {
    int begin = 0, end = 1000;
    promise<int> evenNo;
    future<int> evenFuture = evenNo.get_future();

    cout << "My thread is created !!!" << endl;
    thread t1(EvenNosFind, move(evenNo), begin, end);
    cout << "Waiting..........." << endl;

    // getting the data
    cout << "The no. of even numbers are : " << evenFuture.get() << endl;

    t1.join();
    return 0;
}
Output
My thread is created !!!
Waiting...........
The no. of even numbers are : 501
In the program above, we find the number of even numbers in a given range. We first create a promise object and obtain a future from it. We then pass the promise to the thread; once the value is ready (after the function has executed), the thread sets it on the promise. Finally, we call get() on the future to obtain our answer.