C# - Threading, Tasks, Async Code and Synchronization Techniques - Part 3 (2024)

In today's Part 3 of this small series on Threading, Tasks, Async Code and Synchronization Techniques we are going to talk about:

  • Thread safety, race conditions & deadlocks
  • A small overview of what Synchronization is

Let's start our exploration by answering a question: what do we mean by thread safety?

Thread safety is a computer programming concept applicable to multi-threaded code. A program may execute code in several threads simultaneously in a shared address space, where each of those threads has access to virtually all of the memory of every other thread. A piece of code or a data structure is thread safe when the outcome of the code and its underlying resources does not produce undesirable results (inconsistent data, exceptions, etc.) because of multiple threads interacting with the code concurrently. That simply means:

  • All threads behave properly
  • Fulfill their design specifications
  • Do not have any unintended interactions, like Deadlocks or Race conditions (more on those later)

Let me ask you another question, and pause a little bit to think about the answer: in the .NET Framework, which are thread safe, instance methods or static methods?

The answer we are looking for here is the static methods (e.g. DateTime.Now). We have to be careful here though: this does not imply that all static methods (the ones we developers write ourselves) are thread safe. The .NET Framework class library generally documents its public static members as thread safe, while instance members are not guaranteed to be. So when we as developers create our own static methods, we must make sure ourselves that they are thread safe.

Now that we understand what thread safety actually means, let's see which implementation approaches we can follow to ensure that our code is indeed thread safe when running in a multi-threaded context. The first class of approaches focuses on avoiding shared state and includes the following:

  • Re-entrancy: Writing code in such a way that it can be partially executed by a thread, re-executed by the same thread or simultaneously executed by another thread and still correctly complete the original execution. This requires the saving of state information in variables local to each execution, usually on a stack, instead of in static or global variables
  • Thread-local storage: Variables are localized so that each thread has its own private copy. These variables are thread-safe since they are local to each thread
  • Immutable objects: The state of an object cannot be changed after construction. This means that only read-only data is shared between different threads. Mutable (non-const) operations can then be implemented in such a way that they create new objects instead of modifying existing ones (e.g. the string implementation in C#). A small sketch of thread-local storage and immutability follows after this list
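To make the last two approaches a bit more concrete, here is a minimal sketch (the class, variable names and iteration counts are my own assumptions, not from the original article): every thread increments its own private copy of a ThreadLocal<int> counter, so no locking is needed, and the string part shows that "modifying" an immutable string really produces a new object.

```csharp
using System;
using System.Threading;

public static class FirstClassApproachesDemo
{
    // Thread-local storage: every thread sees its own private copy of the counter.
    private static readonly ThreadLocal<int> _counter = new ThreadLocal<int>(() => 0);

    public static void Main()
    {
        var threads = new Thread[4];

        for (var t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                for (var i = 0; i < 1000; i++)
                {
                    _counter.Value++; // touches only this thread's copy, so no lock is required
                }

                Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId} counted {_counter.Value}");
            });

            threads[t].Start();
        }

        foreach (var thread in threads)
        {
            thread.Join();
        }

        // Immutable objects: strings in C# cannot be changed after construction.
        // ToUpperInvariant() returns a brand new string, so "original" can be
        // shared between threads without any synchronization.
        var original = "hello";
        var upper = original.ToUpperInvariant();
        Console.WriteLine($"{original} / {upper}");
    }
}
```

Each thread prints 1000, because no thread ever sees another thread's copy of the counter.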

There is also a second class of implementation approaches, which are synchronization-related and are used in situations where shared state cannot be avoided. Here we have the following approaches:

  • Mutual exclusion: Access to shared data is serialized using mechanisms that ensure only one thread reads or writes to the shared data at any time. Use of mutual exclusion needs to be well thought out, since improper usage can lead to side-effects like deadlocks and resource starvation
  • Atomic operations: Shared data is accessed by using atomic operations which cannot be interrupted by other threads. Since the operations are atomic, shared data is always kept in a valid state, no matter how other threads access it (e.g. the Interlocked.Add operation in C#; see the sketch after this list)
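As a quick illustration of atomic operations, here is a minimal sketch (the names and iteration counts are assumptions for the demo): five tasks hammer the same counter through Interlocked.Add, and because each read-modify-write is atomic, the final total is always the one we expect.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AtomicCounterDemo
{
    private static int _total;

    public static void Main()
    {
        var tasks = new Task[5];

        for (var t = 0; t < tasks.Length; t++)
        {
            tasks[t] = Task.Run(() =>
            {
                for (var i = 0; i < 100_000; i++)
                {
                    // Atomic read-modify-write: no other thread can observe a half-finished update.
                    Interlocked.Add(ref _total, 1);
                }
            });
        }

        Task.WaitAll(tasks);

        Console.WriteLine(_total); // always 500000
    }
}
```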

Let's see an example of how thread safety can be achieved in a C# program by using a second class, synchronization-related implementation approach, in particular mutual exclusion:

[Image: the thread-safety example code described below]

In this example, we have a simple collection of type Dictionary<int, string> called "items" and a method called "AddItem", which first adds a new item to the dictionary and then reads all the values inside it. We also create 5 new threads by calling Task.Factory.StartNew, giving each one of them the AddItem method to run. Note that all properties and methods of this class are static (which means "global"), so the items collection is actually shared between all 5 threads we have created. In these kinds of multi-threading situations, we need to ensure that our code runs in a thread safe manner.

To achieve this here, we create a new object called "customLock", on which we use the "lock" statement in every place where a "critical" part of code needs to run. In this particular example, we actually have two critical parts of the code to protect: the first is the addition (items.Add(...)) and the second is the foreach loop that reads all values from the dictionary.
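Since the screenshot of the code is not reproduced here, the following is a rough, hedged reconstruction based on the description above (the exact member names, keys and printed messages are assumptions; the original code is linked at the end of the article):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ThreadSafetyDemo
{
    private static readonly Dictionary<int, string> items = new Dictionary<int, string>();
    private static readonly object customLock = new object();

    public static void Main()
    {
        var tasks = new Task[5];

        for (var i = 0; i < tasks.Length; i++)
        {
            var key = i; // captured per task so every entry gets a unique key
            tasks[i] = Task.Factory.StartNew(() => AddItem(key));
        }

        Task.WaitAll(tasks);
    }

    private static void AddItem(int key)
    {
        var threadId = Thread.CurrentThread.ManagedThreadId;

        // Critical section 1: only one thread may mutate the dictionary at a time.
        lock (customLock)
        {
            items.Add(key, $"Value {key} written by thread {threadId}");
            Console.WriteLine($"Thread {threadId} added its value.");
        }

        // Critical section 2: reading while another thread writes would also be unsafe.
        lock (customLock)
        {
            foreach (var value in items.Values)
            {
                Console.WriteLine($"Thread {threadId} read: {value}");
            }
        }
    }
}
```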

The output of the above program would be:

[Image: output of the program from the first run]

What we are seeing here is that Thread 1 acquired the lock at the beginning and wrote one value into the dictionary. During the time Thread 1 was inside the part of the code that did the addition and held the lock, all other threads blocked on the lock(customLock) line and waited until Thread 1 released the lock. After Thread 1 released the lock, Thread 2 acquired it and also wrote another value into the dictionary. It then released the lock, but as you can see it continued its execution and reached the second lock(customLock) statement. This means, of course, that the OS decided not to give control to another thread and instead permitted Thread 2 to continue its execution (no time-slicing here). So, Thread 2 once again acquired the customLock and started reading the values inside the dictionary. The same process continued after Thread 2 finished reading the dictionary values and released the customLock object: as we can see, Thread 1 once again acquired the customLock and also started reading the dictionary values, and so on. The interesting thing to note here is that the output of this code is not deterministic, which means that if we ran the program once again, the output would probably be entirely different, as we can see in the image below:

[Image: output of the program from a second run, with a different interleaving]

After having seen the above example, here is another question for you to pause and think about for a bit. Look at the image below. Assuming that "key" itself is thread safe, where should the lock be added?

[Image: the code snippet for the lock placement question]

The answer we are looking for here is that the lock should wrap both the read and the write operations, because of the danger of race conditions.
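To see why wrapping only one of the two operations is not enough, here is a small sketch in the spirit of the earlier example (the items dictionary, the customLock object and the method names are assumptions). The broken version locks the read and the write separately, which leaves a gap between the check and the add that another thread can sneak into; the safe version makes the whole check-then-act sequence one critical section:

```csharp
using System.Collections.Generic;

public static class LockPlacementDemo
{
    private static readonly Dictionary<int, string> items = new Dictionary<int, string>();
    private static readonly object customLock = new object();

    // Broken: each operation is protected on its own, but between the two locks
    // another thread can add the same key, and items.Add will then throw.
    public static void AddIfMissingBroken(int key, string value)
    {
        bool exists;

        lock (customLock)
        {
            exists = items.ContainsKey(key); // read
        }

        if (!exists)
        {
            lock (customLock)
            {
                items.Add(key, value);       // write: may race with another thread
            }
        }
    }

    // Safe: one lock wraps both the read (check) and the write (add),
    // so the check-then-act sequence is atomic with respect to other threads.
    public static void AddIfMissingSafe(int key, string value)
    {
        lock (customLock)
        {
            if (!items.ContainsKey(key))
            {
                items.Add(key, value);
            }
        }
    }
}
```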

But what exactly is this weird race condition? A race condition is a scenario where the outcome of a program is affected by timing. It occurs when two or more threads can access shared data and try to change it at the same time. As we have seen in the previous articles, based on OS scheduling and time-slicing, the threads may update the value in any order (like in a race event). As a result, the final state of the data can become unpredictable and the program can produce unexpected results.


Let's drill a little deeper into race conditions and consider the following scenario: assume that two threads each increment the value of a global integer variable by 1.

  • Good scenario: Ideally, the following sequence of operations would lead to the expected final value of 2

[Image: the good scenario, a sequence of operations producing the expected final value of 2]

  • Bad scenario: If the two threads run simultaneously without locking or synchronization, the outcome of the operation could be wrong. This occurs because the increment operations are not mutually exclusive. Mutually exclusive operations are those that cannot be interrupted while accessing some resource, such as a memory location (a runnable sketch of this lost-update effect follows below)

[Image: the bad scenario, an interleaving of operations producing a wrong final value]
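Here is a minimal, hedged sketch of the bad scenario (the variable names and iteration count are assumptions; many iterations are used only to make the race easy to observe). Because _value++ is really three steps (read the current value, add one, write the result back), two threads can both read the same old value and both write back the same new one, so one of the increments is lost:

```csharp
using System;
using System.Threading;

public static class RaceConditionDemo
{
    private static int _value;

    public static void Main()
    {
        var t1 = new Thread(Increment);
        var t2 = new Thread(Increment);

        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // Expected 200000, but the printed value is usually smaller:
        // whenever both threads read the same old value, both write back
        // the same new value and one increment is lost.
        Console.WriteLine(_value);
    }

    private static void Increment()
    {
        for (var i = 0; i < 100_000; i++)
        {
            _value++; // not atomic: read, add 1, write back
        }
    }
}
```

Wrapping the increment in a lock, or using Interlocked.Increment(ref _value), makes the outcome deterministic again.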

Let's now turn our attention to deadlocks, which are another danger we may encounter, whenever we are writing code that will run inside a multithreaded environment.


A deadlock in an operating system occurs when a process or thread enters a never-ending waiting state because a requested system resource is held by another waiting process, which in turn is waiting for a resource held by yet another waiting process, as in the example in the image below.

[Image: processes deadlocked, each waiting for a resource held by the other]

A deadlock in C# is a situation where two or more threads are each holding a lock on a "critical section" of code or a resource and waiting for the other thread(s) to release their resource, so that they can in turn lock/use it. As a result, they come to a never-ending standstill, just waiting on each other.

[Image: two C# threads deadlocked, each holding a lock the other one is waiting for]
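Here is a minimal sketch of such a deadlock (the lock names and the Thread.Sleep calls are assumptions, used only to make the interleaving reliable): each task grabs one lock and then waits forever for the lock the other task is holding.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class DeadlockDemo
{
    private static readonly object lockA = new object();
    private static readonly object lockB = new object();

    public static void Main()
    {
        var t1 = Task.Run(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);   // give the other task time to grab lockB
                lock (lockB)         // waits forever: task 2 is holding lockB
                {
                    Console.WriteLine("Task 1 acquired both locks");
                }
            }
        });

        var t2 = Task.Run(() =>
        {
            lock (lockB)
            {
                Thread.Sleep(100);   // give the other task time to grab lockA
                lock (lockA)         // waits forever: task 1 is holding lockA
                {
                    Console.WriteLine("Task 2 acquired both locks");
                }
            }
        });

        // Neither task can ever finish; we wait with a timeout just so the demo terminates.
        var finished = Task.WaitAll(new Task[] { t1, t2 }, TimeSpan.FromSeconds(2));
        Console.WriteLine(finished
            ? "No deadlock (unexpected)"
            : "Deadlock: both tasks are stuck waiting for each other");
    }
}
```

The usual way out is to make every code path acquire the locks in the same global order, or to use Monitor.TryEnter with a timeout so a thread can back off instead of waiting forever.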

We mentioned earlier that we can make our code thread safe if we use synchronization techniques. But what exactly is synchronization?

In a multi-threaded environment, multiple threads can speed up the work that needs to be done and at the same time keep the main thread responsive (e.g. the UI thread in Windows applications). In such scenarios, they can also access different resources like files, network connections, memory, etc., as the application needs. If done incorrectly, what may happen is multiple threads trying to use and/or update the same resource at the same time, unaware of each other. As we have already seen, this can result in unpredictable and inconsistent results. So, in multi-threaded applications, threads need to be synchronized, so that they do not work on or update the same resource at the same time.

The advantages of synchronizing our threads are that we are able to maintain consistency (our system will never get into an invalid state or produce an unpredictable outcome) and that no other thread can interfere until the current thread finishes executing its task inside a critical part of our codebase.

We have the following ways for achieving synchronization:

  • Blocking constructs - block thread execution and make it wait for another thread or task to complete, e.g. Thread.Sleep, Thread.Join, Task.Wait
  • Locks - limit the number of threads that can enter / access a “critical section” of code. In this category we have exclusive locks (which allow only one thread) and non-exclusive locks (which allow a limited number of threads)
  • Signals - a thread can pause and wait until a notification is received from another thread
  • Nonblocking constructs - protect access to a common field
  • Thread safe or concurrent collections - collections that can be read or written safely even when multiple threads are accessing them concurrently (a sketch using a signal and a concurrent collection follows after this list)
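To make a couple of these concrete, here is a small sketch combining a signal with a thread-safe collection (the names and the number of workers are assumptions): the workers pause on a ManualResetEventSlim until the main thread signals them, and they write into a ConcurrentDictionary, which is safe to use from many threads without an explicit lock.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class SynchronizationDemo
{
    private static readonly ConcurrentDictionary<int, string> items = new ConcurrentDictionary<int, string>();
    private static readonly ManualResetEventSlim readySignal = new ManualResetEventSlim(false);

    public static void Main()
    {
        var workers = new Task[5];

        for (var i = 0; i < workers.Length; i++)
        {
            var key = i;
            workers[i] = Task.Run(() =>
            {
                readySignal.Wait();                 // signal: pause until the main thread says "go"
                items.TryAdd(key, $"value {key}");  // concurrent collection: no explicit lock needed
            });
        }

        Console.WriteLine("Releasing all workers at once...");
        readySignal.Set();                          // wake every waiting worker

        Task.WaitAll(workers);
        Console.WriteLine($"Items added: {items.Count}"); // always 5
    }
}
```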

Finally, to wrap up this article, I have one more question for you: do blocked threads consume CPU? Before answering it, let's also understand the difference between blocking and spinning.

Spinning is a technique in which a thread repeatedly checks whether a condition is true. For example, say we have a loop like while (x < limit) { }: for as long as the condition is not met, we keep using CPU resources. Blocking, on the other hand, is when a thread stops its execution until some event happens. Any time a thread is blocked, it gives up its time slice and the OS can schedule other threads. Until the blocking condition is resolved, the thread consumes no CPU time. It does, of course, still consume memory.
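As a small, hedged illustration of the difference (the names and the 500 ms delay are assumptions): SpinWait.SpinUntil keeps the waiting thread busy re-checking the condition and burning CPU, while waiting on a ManualResetEventSlim blocks the thread, so it consumes no CPU until the event is signaled.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class SpinVsBlockDemo
{
    private static volatile bool _done;
    private static readonly ManualResetEventSlim doneEvent = new ManualResetEventSlim(false);

    public static void Main()
    {
        Task.Run(() =>
        {
            Thread.Sleep(500);   // simulate some work
            _done = true;
            doneEvent.Set();
        });

        // Spinning: the waiting thread keeps re-checking the condition and burns CPU.
        // Reasonable only for very short waits.
        SpinWait.SpinUntil(() => _done);

        // Blocking: the waiting thread gives up its time slice; the OS runs other threads
        // and wakes this one when the event is signaled. No CPU is consumed while waiting.
        doneEvent.Wait();

        Console.WriteLine("Work finished.");
    }
}
```

So, to answer the question: no, blocked threads do not consume CPU while they wait, but they do still hold on to their memory (their stack, for example).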

You can find the above example on thread safety here: https://github.com/ormikopo1988/csharp-advanced-workshop/tree/master/Day%202/ThreadingAndSynchronization/ThreadSafety

That's all for today. Cheers!
