This time I want to discuss features that belong to the new System.Collections.Concurrent namespace in the .NET Framework 4. When you design parallel applications, you often need thread-safe data storage as well as some mechanism for sending messages between tasks. Once again, this post touches on just the basics and the most common problems a beginner might encounter, but I’ll provide links for further reading.
This is the fourth post in the parallel programming series. Here’s a list of all the posts:
- Getting Started
- Task Schedulers and Synchronization Context
- Task Cancellation
- Blocking Collection and the Producer-Consumer Problem (this post)
To keep things short, I’ll start with the code that I have at the end of the Task Schedulers and Synchronization Context post. This is a small parallel WPF application with a responsive UI that has one Start button and displays the results of long-running operations in a text box.
But imagine that I’m designing a larger application and I need to store the results of the long-running parallel operations somewhere. (I don’t do this in the current version at all.)
Before the .NET Framework 4, this was a challenging task for C# developers: the collections in the System.Collections and System.Collections.Generic namespaces do not guarantee thread safety, so developers had to design locking and synchronization mechanisms themselves. Now generic thread-safe collections are part of the .NET Framework. So, let me introduce the new namespace: System.Collections.Concurrent.
I’m going to use the BlockingCollection<T> class. This class can help you implement the well-known producer-consumer pattern, where items are produced and consumed by different operations at different rates. I will update my application to imitate the producer-consumer scenario, so that the compute task will become a producer, and the display task will become a consumer.
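A minimal sketch of the shared storage (the field name `results` is my assumption; the element type matches the double values the compute tasks produce):

```csharp
// Thread-safe storage shared by the producer and consumer tasks.
// By default BlockingCollection<T> wraps a ConcurrentQueue<T>, so items come out in FIFO order.
private BlockingCollection<double> results = new BlockingCollection<double>();
```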
I’ll also update the compute task, so that instead of returning the result, it will add it to the results collection.
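Sketched under the assumption that the compute task wraps a long-running `SumRootN(j)` call, as in the earlier posts in the series (both names are assumptions):

```csharp
// Producer: instead of returning the result, the task adds it to the shared collection.
// Add is thread-safe, so no explicit locking is needed here.
var compute = Task.Factory.StartNew(() =>
{
    results.Add(SumRootN(j));
});
```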
When all the results are ready, I’ll mark the collection as complete, so the consumer will know that this collection won’t be updated anymore. For this purpose, I’ll call the CompleteAdding method; once the collection is also empty, its IsCompleted property returns true. A good place to perform this operation is the task that calculates the total time: it waits for all the other tasks to finish, which is exactly what I need.
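A sketch of that total-time task, where `tasks` is the list of compute tasks and `watch` is the stopwatch from the earlier posts (the names are my assumptions):

```csharp
// When all compute tasks have finished, report the time and seal the collection.
Task.Factory.ContinueWhenAll(tasks.ToArray(), _ =>
{
    var elapsed = watch.ElapsedMilliseconds;  // total-time bookkeeping
    results.CompleteAdding();                 // no more items will ever be added
});
```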
As you can see, the producer part is easy: all tasks can safely write to the results collection, and all locking and synchronization issues are managed by the .NET Framework and TPL.
Now let’s move to the consumer side. I’ll do a small refactoring: I’ll convert the display task into a consume task that will run the display method:
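The consume task itself can be as simple as this, where `Display` is the refactored display method:

```csharp
// Consumer: runs the display method on a background thread.
var consume = Task.Factory.StartNew(() => Display());
```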
I want to start the consume task before I start any of the compute tasks so that the consumer can wait for the producer and I can see the real-time results. That’s why I put the above line right before the main for loop in the button’s event handler.
This is what the naïve first version of the display method might look like. (Don’t forget to convert the ui task scheduler into a field. It’s a local variable in the original code.)
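A sketch of that naïve version; `textBox1` and the `uiScheduler` field are the text box and UI task scheduler from the earlier posts (the exact names are my assumptions):

```csharp
private void Display()
{
    double item = 0;
    // Poll until the producers have sealed the collection and it has been drained.
    while (!results.IsCompleted)
    {
        if (results.TryTake(out item))
        {
            double currentItem = item;  // copy for the closure: 'item' is shared across iterations
            Task.Factory.StartNew(
                () => textBox1.AppendText(currentItem + Environment.NewLine),
                CancellationToken.None, TaskCreationOptions.None, uiScheduler);
        }
    }
}
```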
This method checks for new elements in the collection until it’s notified that the collection is complete, meaning that the producers have finished adding new items. If it gets a new element from the collection, it immediately removes the item and prints the value on the UI thread. Did you notice that I copied item to currentItem? It’s all about closures again: you’d get a list of zeros otherwise.
This version works and you won’t get any exceptions. But if you run it on a dual-core computer like I did, you’ll discover that it takes twice as long as the version that doesn’t use the collection. In fact, it runs as if the application weren’t parallelized at all! This is just one of the problems that you might run into, so don’t forget to always measure the performance of your parallel applications: it’s easy to cancel out the benefits of parallelization.
Of course, the problem is in this line:
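Namely, the loop header that keeps spinning at full speed whenever TryTake has nothing to return:

```csharp
while (!results.IsCompleted)
```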
An empty loop is rarely a good idea. This was an attempt to implement some kind of messaging between the threads – the consumer is constantly checking the collection and starts working only if it can retrieve a value from it. And it does this over and over again, so one of my processors is fully occupied with this work and can’t compute the values anymore.
One simple trick is to make this loop consume less processing power. It can be as easy as this (this is not a recommended approach, but rather an illustration of the principle):
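For example, keeping the same body as the naïve version and adding a Thread.Sleep after each attempt (again, `textBox1` and `uiScheduler` are assumed names from the earlier posts):

```csharp
while (!results.IsCompleted)
{
    double item = 0;
    if (results.TryTake(out item))
    {
        double currentItem = item;  // copy for the closure
        Task.Factory.StartNew(
            () => textBox1.AppendText(currentItem + Environment.NewLine),
            CancellationToken.None, TaskCreationOptions.None, uiScheduler);
    }
    Thread.Sleep(200);  // crude back-off: free the processor between polls
}
```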
Now after each attempt the task simply waits for 200 milliseconds before trying again, and during those 200 milliseconds the processor can compute the results this task is actually waiting for. You can compile and run the code to verify that performance has indeed improved.
However, it might be tricky to find the perfect wait time. Ideally, I need some kind of message from the collection notifying me that the value was added.
With a blocking collection, you can get exactly that by using a foreach loop. The BlockingCollection<T> class has the GetConsumingEnumerable method, which enumerates the blocking collection and consumes its elements until the collection is complete. It might look like this:
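A sketch of the rewritten display method (once more, `textBox1` and `uiScheduler` are assumed names):

```csharp
private void Display()
{
    // GetConsumingEnumerable blocks while the collection is empty and
    // ends the enumeration once the collection is complete and drained.
    foreach (double item in results.GetConsumingEnumerable())
    {
        double currentItem = item;  // copy: in C# 4 the foreach variable is shared by closures
        Task.Factory.StartNew(
            () => textBox1.AppendText(currentItem + Environment.NewLine),
            CancellationToken.None, TaskCreationOptions.None, uiScheduler);
    }
}
```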
Now the display method checks whether there is an item in the results collection and, if there is, consumes the item. When the collection is completed and empty, execution exits the loop. All the locking, synchronization, and messaging between the tasks are managed by the TPL.
The resulting application will probably still be a little slower than the version that didn’t use the collection at all, but that’s expected: writing to and reading from thread-safe data storage adds some overhead.
If you got lost in all the changes, here’s the full code:
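The original is a WPF application, so what follows is instead a self-contained console sketch of the same producer-consumer flow; `SumRootN` and all the names here are my assumptions standing in for the code built up over the series:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading.Tasks;

public class Program
{
    static BlockingCollection<double> results = new BlockingCollection<double>();
    public static int Consumed;  // how many results the consumer has displayed

    // Stand-in for the long-running computation from the earlier posts.
    static double SumRootN(int root)
    {
        double result = 0;
        for (int i = 1; i < 1000000; i++)
            result += Math.Exp(Math.Log(i) / root);
        return result;
    }

    static void Display()
    {
        // Consumer: blocks while the collection is empty,
        // exits once the collection is complete and drained.
        foreach (double item in results.GetConsumingEnumerable())
        {
            Console.WriteLine("Result: " + item);
            Consumed++;
        }
    }

    public static void Main()
    {
        var watch = Stopwatch.StartNew();

        // Start the consumer before any producers, so results appear as they arrive.
        var consume = Task.Factory.StartNew(Display);

        var tasks = new List<Task>();
        for (int i = 2; i < 20; i++)
        {
            int j = i;  // copy the loop variable for the closure
            tasks.Add(Task.Factory.StartNew(() => results.Add(SumRootN(j))));
        }

        // When all producers are done, report the time and seal the collection.
        Task.Factory.ContinueWhenAll(tasks.ToArray(), _ =>
        {
            Console.WriteLine("Total time: " + watch.ElapsedMilliseconds + " ms");
            results.CompleteAdding();
        });

        consume.Wait();  // returns after the collection is complete and empty
    }
}
```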
For now, this is the last post in my parallel programming series. I hope that I’ve provided enough information and examples and bumped into and recovered from enough problems to enable even beginners to continue on their own. (At least I asked fewer questions while writing this post than I did for the first one!)
I could not cover all the features provided by the TPL. If you want to see what else is available, here are some links:
- Data Structures for Parallel Programming. This MSDN topic lists .NET 4 classes that are useful for parallel programming, such as thread-safe collections, synchronization primitives, and lazy initialization classes.
- Introduction to PLINQ. Parallel Language Integrated Query, or PLINQ, enables quick and easy parallelization of LINQ queries.
Thanks to Dmitry Lomov, Michael Blome, and Danny Shih for reviewing this and providing helpful comments, to Mick Alberts for editing.
Posted by Alexandra Rusina on August 12, 2010.