Blog Archives
Yield (advanced)
The yield statement is used in iterator blocks to yield a value or to signal the end of an iteration. You do not need to create your own IEnumerable or IEnumerator implementations; the compiler generates them for you. There are only two ways to use yield:
yield return produces the next value; yield break signals the end of the sequence.
static IEnumerable<int> Range()
{
    for (int i = 0; i < 100; i++)
    {
        yield return i;
        if (i == 6) yield break;
    }
} //

static public void Test()
{
    foreach (int i in Range()) { Console.WriteLine(i); }
} //
example output:
0
1
2
3
4
5
6
There is one issue here. I placed the line “if (i == 6) yield break;” after “yield return i;” on purpose. Only then can you see that the loop continues with the next iteration right after yield return. Let’s add some debug messages and see what happens:
static IEnumerable<int> Range()
{
    Console.WriteLine("hello");
    for (int i = 0; i < 100; i++)
    {
        yield return i;
        if (i == 6) yield break;
        Console.WriteLine("Last return value: " + i);
    }
    Console.WriteLine("good-bye");
} //
example output:
hello
0
Last return value: 0
1
Last return value: 1
2
Last return value: 2
3
Last return value: 3
4
Last return value: 4
5
Last return value: 5
6
yield break does not just leave the for loop; it leaves the entire method. Notice that neither “Last return value: 6” nor “good-bye” appears in the output.
There are certain rules for yield within try/catch/finally blocks: yield return may only appear in a try block if that try block has no catch clauses (i.e. try-finally). yield break may appear in a try block or a catch block, but neither form of yield may appear in a finally block.
yield cannot appear in anonymous functions.
The use of yield requires an iterator block, i.e. a method, operator or accessor returning IEnumerable, IEnumerable<T>, IEnumerator or IEnumerator<T>.
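Here is a minimal sketch of the try/finally rule. GuardedRange() is just a made-up helper for this post, not part of the examples above:

static IEnumerable<int> GuardedRange()
{
    try
    {
        for (int i = 0; i < 3; i++)
        {
            yield return i;          // allowed: the enclosing try has only a finally block, no catch
            if (i == 1) yield break; // allowed for the same reason
        }
    }
    finally
    {
        Console.WriteLine("cleanup"); // always runs when iteration ends; yield itself is forbidden here
    }
} //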
Using yield can be quite efficient. You do not have to iterate through all elements if the outer loop exits early. This is a nice approach for sequential operations. (For parallelizable work the Parallel class obviously outperforms this sequential approach.)
static IEnumerable<int> Range2()
{
    for (int i = 0; i < 100; i++)
    {
        Thread.Sleep(100); // long computation of something
        yield return i;
    }
} //

static public void Test2()
{
    foreach (int i in Range2())
    {
        if (i == 30) break; // remaining results not needed anymore
        Console.WriteLine(i);
    }
} //
Exceptions (part 3, advanced)
For Windows Forms and WPF there are specific mechanisms to deal with unhandled exceptions. These are not really exceptions themselves; they are events, which are raised whenever an unhandled exception occurs. “Unhandled” means nothing catches the exception and the entire call stack does not come up with any solution. Nothing stops the exception from falling down to the lowest level, and even that level does not know what to do with it. You can e.g. throw an exception in a button click handler to cause this behavior.
These events allow applications to log information about unhandled exceptions.
In Windows Forms the event is Application.ThreadException; its delegate type, ThreadExceptionEventHandler, is defined in the System.Threading namespace. If you do not subscribe to this event, your application will crash when an unhandled exception shows up.
[STAThread]
static void Main()
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.ThreadException += Application_ThreadException;
    Application.Run(new Form1());
} //

static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
{
    Exception lException = (Exception)e.Exception;
    MessageBox.Show(lException.Message);
} //
The approach in WPF is similar. Here we work at the level of application domains, so we subscribe to AppDomain.CurrentDomain.UnhandledException.
public MainWindow()
{
    InitializeComponent();
    AppDomain.CurrentDomain.UnhandledException += CurrentDomain_UnhandledException;
} //

void CurrentDomain_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    Exception lException = (Exception)e.ExceptionObject;
    MessageBox.Show(lException.Message + "\nIsTerminating: " + e.IsTerminating);
} //

private void button1_Click(object sender, RoutedEventArgs e)
{
    throw new Exception("user exception thrown");
} //
Exceptions (part 2, advanced)
Season’s Greetings!
Today we have a very short post about custom exceptions, which provide more specific information. Of course the custom exception has to inherit from System.Exception. Adding several constructors including a parameterless one is good practice.
The suffix “Exception” in your exception name is a convention (e.g. “OutOfMemoryException”, “FileNotFoundException”). Adding the Serializable attribute can be useful when you work across application domains. Properties help provide extra information.
Don’t use the System.ApplicationException class.
http://msdn.microsoft.com/en-us/library/system.applicationexception.aspx states:
If you are designing an application that needs to create its own exceptions, you should derive custom exceptions from the Exception class. It was originally thought that custom exceptions should derive from the ApplicationException class; however in practice this has not been found to add significant value.
In other words: ApplicationException is a relic of the past, the class Microsoft originally intended developers to derive all their custom exceptions from. This never became common practice, and custom exceptions now derive from the Exception class directly.
[Serializable]
public class UserNotFoundException : Exception
{
    public string UserId { get; private set; }

    public UserNotFoundException(string xUserId)
        : base()
    {
        UserId = xUserId;
        base.HelpLink = "http://www.ohta.de";
    } // constructor

    public UserNotFoundException(string xUserId, string xMessage)
        : base(xMessage)
    {
        UserId = xUserId;
        base.HelpLink = "http://www.ohta.de";
    } // constructor

    public UserNotFoundException(string xUserId, string xMessage, Exception xInnerException)
        : base(xMessage, xInnerException)
    {
        UserId = xUserId;
        base.HelpLink = "http://www.ohta.de";
    } // constructor

    protected UserNotFoundException(SerializationInfo xSerializationInfo, StreamingContext xStreamingContext)
        : base(xSerializationInfo, xStreamingContext) // let the base class restore Message, StackTrace etc.
    {
        UserId = xSerializationInfo.GetValue("UserId", typeof(string)) as string;
    } // constructor

    public override void GetObjectData(SerializationInfo xSerializationInfo, StreamingContext xStreamingContext)
    {
        base.GetObjectData(xSerializationInfo, xStreamingContext); // serialize the base class data as well
        xSerializationInfo.AddValue("UserId", UserId, typeof(string));
    }
} // class
Exceptions (part 1, advanced)
Exceptions are not errors, they are exceptions. They should not be used where regular code can deal with the error. Exceptions are for situations that cannot be resolved locally, like running out of RAM or hard disk space. They are pretty slow, because throwing one captures the entire stack trace. Here is a short benchmark program:
static double DoSomeCalc(double d) { return d * 1.1; } //
static double DoSomeCalc2(Exception e) { throw e; } //

static void Exceptions1()
{
    const int n = 10000000;
    double d = 1.0;
    Stopwatch lStopwatch = new Stopwatch();

    lStopwatch.Start();
    for (int i = 0; i < n; i++) d *= 1.1;
    Console.WriteLine("benchmark ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Restart();
    try { for (int i = 0; i < n; i++) d *= 1.1; }
    catch (Exception) { throw; }
    Console.WriteLine("efficient try/catch block ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Restart();
    for (int i = 0; i < n; i++) { try { d *= 1.1; } catch (Exception) { throw; } }
    Console.WriteLine("inefficient try/catch block ms " + lStopwatch.ElapsedMilliseconds);
    Console.WriteLine();

    lStopwatch.Restart();
    for (int i = 0; i < n; i++) d = DoSomeCalc(d);
    Console.WriteLine("method call, benchmark ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Restart();
    try { for (int i = 0; i < n; i++) d = DoSomeCalc(d); }
    catch (Exception) { throw; }
    Console.WriteLine("method call, efficient try/catch block ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Restart();
    for (int i = 0; i < n; i++) { try { d = DoSomeCalc(d); } catch (Exception) { throw; } }
    Console.WriteLine("method call, inefficient try/catch block ms " + lStopwatch.ElapsedMilliseconds);
    Console.WriteLine();

    Exception e = new Exception(); // only one instance, we exclude the creation time for the object in this test
    lStopwatch.Restart();
    for (int i = 0; i < 100; i++) { try { throw e; } catch (Exception) { } }
    Console.WriteLine("100 exceptions thrown in ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Restart();
    for (int i = 0; i < 100; i++) { try { DoSomeCalc2(e); } catch (Exception) { } }
    Console.WriteLine("method call, 100 exceptions thrown in ms " + lStopwatch.ElapsedMilliseconds);

    lStopwatch.Stop();
    Console.ReadLine();
} //
example output:
benchmark ms 2227
efficient try/catch block ms 2179
inefficient try/catch block ms 2201
method call, benchmark ms 2448
method call, efficient try/catch block ms 2436
method call, inefficient try/catch block ms 2431
100 exceptions thrown in ms 603
method call, 100 exceptions thrown in ms 652
The first three results are in line with each other. The code optimizer does its job and there is no visible impact on the outcome; the small differences are more likely a result of context switching (threading).
When we call a method, the slowdown is also regular; the try/catch block has no big impact.
But when exceptions are thrown, the system slows down considerably. 600 ms for just 100 exceptions is a disaster. And it gets even worse when you call a method, because the stack trace becomes longer. Imagine what happens when nested methods are involved.
The conclusion is that throwing exceptions for the programmer’s convenience is bad practice. Exceptions must be avoided by all means. You’d rather perform a quick zero check than raise a DivideByZeroException.
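Here is a minimal sketch of such a check. TryDivide() is a made-up helper name, just to illustrate the idea:

static bool TryDivide(int xNumerator, int xDenominator, out int xResult)
{
    if (xDenominator == 0) { xResult = 0; return false; } // quick check instead of a DivideByZeroException
    xResult = xNumerator / xDenominator;
    return true;
} //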
Do not reuse exception objects; this is not thread-safe. In general, try to avoid re-throwing exceptions. Anyway, let’s see how to re-throw exceptions in case you need it:
a) Re-“throw” without any identifier. This preserves the original exception details.
static void Exceptions2()
{
    int lZero = 0;
    try { int i = 4 / lZero; }
    catch (Exception) { throw; }
} //

static void Exceptions3()
{
    try { Exceptions2(); }
    catch (Exception e)
    {
        Console.WriteLine("Message: {0}", e.Message);
        Console.WriteLine("StackTrace: {0}", e.StackTrace);
        Console.WriteLine("HelpLink: {0}", e.HelpLink);
        Console.WriteLine("InnerException: {0}", e.InnerException);
        Console.WriteLine("TargetSite: {0}", e.TargetSite);
        Console.WriteLine("Source: {0}", e.Source);
    }
} //
Message: Attempted to divide by zero.
StackTrace: at ConsoleApplication1.Program.Exceptions2() in ….\Program.cs:line 1155
at ConsoleApplication1.Program.Exceptions3() in ….\Program.cs:line 1160
HelpLink:
InnerException:
TargetSite: Void Exceptions2()
Source: ConsoleApplication1
b) Throw a new exception that adds some more information and keeps the original exception as InnerException.
static void Exceptions4()
{
    int lZero = 0;
    try { int i = 4 / lZero; }
    catch (DivideByZeroException e) { throw new DivideByZeroException("Division by Zero", e); }
    catch (Exception e) { throw new Exception("Any Exception", e); } // will not be thrown in this example
} //

static void Exceptions5()
{
    try { Exceptions4(); }
    catch (Exception e)
    {
        Console.WriteLine("Message: {0}", e.Message);
        Console.WriteLine("StackTrace: {0}", e.StackTrace);
        Console.WriteLine("HelpLink: {0}", e.HelpLink);
        Console.WriteLine("InnerException: {0}", e.InnerException);
        Console.WriteLine("TargetSite: {0}", e.TargetSite);
        Console.WriteLine("Source: {0}", e.Source);
    }
} //
example output:
Message: Division by Zero
StackTrace: at ConsoleApplication1.Program.Exceptions4() in ….\Program.cs:line 1134
at ConsoleApplication1.Program.Exceptions5() in ….\Program.cs:line 1140
HelpLink:
InnerException: System.DivideByZeroException: Attempted to divide by zero.
at ConsoleApplication1.Program.Exceptions4() in ….\Program.cs:line 1133
TargetSite: Void Exceptions4()
Source: ConsoleApplication1
Events (part 3, advanced)
Events: Let’s go multithreading! We want the crème de la crème. Well, multitasking is also fine.
The code does not need to run in a specific sequence, so the required independence is given. Again, there are many ways to use multithreading. An easy approach is to start a task in each method that is called by the event.
C# does support BeginInvoke() for delegates. This method is not supported by the .NET Compact Framework though. We don’t care, because our hardcore programs are for serious applications, definitely not for mobile phone apps. Let’s see how well BeginInvoke() works. Maybe we don’t have to reinvent the wheel.
BeginInvoke() initiates an asynchronous call. It returns immediately and provides an IAsyncResult, which can be used to monitor the progress of the asynchronous call.
EndInvoke() retrieves the result. It blocks until the asynchronous call has completed.
You have the following options after calling BeginInvoke():
1) Call EndInvoke() to block the current thread until the call completes.
2) Obtain the WaitHandle from IAsyncResult.AsyncWaitHandle, use its WaitOne() and then call EndInvoke().
3) Poll the IAsyncResult to check the current state, after it has completed call EndInvoke().
4) Pass a callback to BeginInvoke(). The callback will use the ThreadPool to notify you. In the callback you have to call EndInvoke().
public event EventHandler OnChange;

public void DoSomeWork(object sender, object e)
{
    Thread.Sleep(2100);
    Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + " mission accomplished!");
} //

public void RunMyExample()
{
    OnChange += new EventHandler(DoSomeWork);
    //OnChange += new EventHandler(DoSomeWork);
    //OnChange += new EventHandler(DoSomeWork);
    IAsyncResult lAsyncResult;

    Console.WriteLine("Choice 1");
    lAsyncResult = OnChange.BeginInvoke(this, EventArgs.Empty, null, null);
    OnChange.EndInvoke(lAsyncResult);

    Console.WriteLine("Choice 2");
    lAsyncResult = OnChange.BeginInvoke(this, EventArgs.Empty, null, null);
    lAsyncResult.AsyncWaitHandle.WaitOne();

    Console.WriteLine("Choice 3");
    lAsyncResult = OnChange.BeginInvoke(this, EventArgs.Empty, null, null);
    while (!lAsyncResult.IsCompleted)
    {
        Thread.Sleep(500);
        Console.WriteLine("work still not completed :(");
    }

    Console.WriteLine("Choice 4");
    OnChange.BeginInvoke(this, EventArgs.Empty, (xAsyncResult) =>
    {
        Console.WriteLine("callback running");
        OnChange.EndInvoke(xAsyncResult);
    }, null);

    Console.WriteLine("press return to exit the program");
    Console.ReadLine();
} //
example output:
Choice 1
Thread 6 mission accomplished!
Choice 2
Thread 6 mission accomplished!
work still not completed :(
work still not completed :(
work still not completed :(
work still not completed :(
Thread 10 mission accomplished!
work still not completed :(
press return to exit the program
Thread 6 mission accomplished!
callback running
After having lived such a nice life, we have to face a major issue with BeginInvoke(). Uncomment the lines “//OnChange += new EventHandler(DoSomeWork);”, and don’t get annoyed now!
You will get an error message saying
“The delegate must have only one target”.
Although the delegate class can deal with multiple targets, asynchronous calls accept just one target.
So let’s try something else.
public event EventHandler OnChange;

public void DoSomeWork(object sender, object e)
{
    Thread.Sleep(2100);
    Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + " mission accomplished!");
} //

public void RunMyExample()
{
    OnChange += new EventHandler(DoSomeWork);
    OnChange += new EventHandler((sender, e) => { throw new Exception("something went wrong"); });
    OnChange += new EventHandler(DoSomeWork);

    EventHandler lOnChange = OnChange;
    if (lOnChange == null) return; // just to demonstrate the proper way to call events, not needed in this example

    foreach (EventHandler d in lOnChange.GetInvocationList())
    {
        Task lTask = Task.Factory.StartNew(() => d(this, EventArgs.Empty));
        lTask.ContinueWith((i) => { Console.WriteLine("Task canceled"); }, TaskContinuationOptions.OnlyOnCanceled);
        lTask.ContinueWith((i) => { Console.WriteLine("Task faulted"); }, TaskContinuationOptions.OnlyOnFaulted);
        lTask.ContinueWith((i) => { Console.WriteLine("Task completion"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
    }

    Console.WriteLine("press return to exit the program");
    Console.ReadLine();
} //
It seems that we hit a serious pitfall here. This code does not execute properly every time. It has to do with the asynchronous behaviour of StartNew() in combination with how the lambda captures the loop variable. It sometimes calls the wrong method, DoSomeWork(), three times and does not raise the exception.
The foreach variable “d” is overwritten before the lambda passed to StartNew() runs (with older compilers, before C# 5, the foreach variable is a single variable shared by all iterations). With a little tweak we can avoid this issue: we simply assign “d” to a new local variable, so that each lambda captures its own copy of “d”. Weird stuff, but that is the life of a coder sometimes.
public event EventHandler OnChange;

public void DoSomeWork(object sender, object e)
{
    Thread.Sleep(2100);
    Console.WriteLine("Thread " + Thread.CurrentThread.ManagedThreadId + " mission accomplished!");
} //

public void RunMyExample()
{
    OnChange += new EventHandler(DoSomeWork);
    OnChange += new EventHandler((sender, e) => { throw new Exception("something went wrong"); });
    OnChange += new EventHandler(DoSomeWork);

    EventHandler lOnChange = OnChange;
    if (lOnChange == null) return; // just to demonstrate the proper way to call events, not needed in this example

    foreach (EventHandler d in lOnChange.GetInvocationList())
    {
        EventHandler e = d; // local copy, captured individually by each lambda
        Task lTask = Task.Factory.StartNew(() => e(this, EventArgs.Empty));
        lTask.ContinueWith((i) => { Console.WriteLine("Task canceled"); }, TaskContinuationOptions.OnlyOnCanceled);
        lTask.ContinueWith((i) => { Console.WriteLine("Task faulted"); }, TaskContinuationOptions.OnlyOnFaulted);
        lTask.ContinueWith((i) => { Console.WriteLine("Task completion"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
    }

    Console.WriteLine("press return to exit the program");
    Console.ReadLine();
} //
Example output:
press return to exit the program
Thread 11 mission accomplished!
Thread 10 mission accomplished!
Task completion
Task faulted
Task completion
This tiny tweak worked. You just have to know this capture pitfall. I hope this little notice saves you at least 3 hours of work.
You probably remember that we faced a similar issue in my post “Exiting Tasks (advanced)” https://csharphardcoreprogramming.wordpress.com/2013/12/11/exiting-tasks/ . I would suggest using Task.Factory.StartNew() with caution whenever a lambda expression captures a loop variable.
The following code also runs fine. The variable “d” is not used directly in Task.Factory.StartNew():
...
foreach (EventHandler d in lOnChange.GetInvocationList())
{
    Action a = () => d(this, EventArgs.Empty);
    Task lTask = Task.Factory.StartNew(a);
    lTask.ContinueWith((i) => { Console.WriteLine("Task canceled"); }, TaskContinuationOptions.OnlyOnCanceled);
    lTask.ContinueWith((i) => { Console.WriteLine("Task faulted"); }, TaskContinuationOptions.OnlyOnFaulted);
    lTask.ContinueWith((i) => { Console.WriteLine("Task completion"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
}
...
Events (part 2, advanced)
We are going to construct our own event accessors now. They deal with additions and removals of subscriptions. Accessors look pretty much like property definitions, but instead of get and set you use add and remove.
public class MyActionEvent4
{
    private object _Lock = new object(); // a new object simply to avoid lock conflicts
    private event EventHandler<MyArgs> _OnChange;

    private event EventHandler<MyArgs> OnChange
    {
        add { lock (_Lock) { _OnChange += value; } }
        remove { lock (_Lock) { _OnChange -= value; } }
    } //

    public void RaiseEvent()
    {
        lock (_Lock)
        {
            EventHandler<MyArgs> lHandler = _OnChange;
            if (lHandler == null) return;
            lHandler(this, new MyArgs(0));
        }
    } //
} // class
Now we have one big problem here. The RaiseEvent() method has to obtain a lock each time, which has a serious impact on time-sensitive programs. Luckily you do not need to care about subscriptions changing during the invocation: delegate references are thread-safe, because delegates are immutable like strings. Let’s simply take the lock out of the RaiseEvent() method, et voilà!
public class MyActionEvent5
{
    private object _Lock = new object();
    private event EventHandler<MyArgs> _OnChange;

    private event EventHandler<MyArgs> OnChange
    {
        add { lock (_Lock) { _OnChange += value; } }
        remove { lock (_Lock) { _OnChange -= value; } }
    } //

    public void RaiseEvent()
    {
        EventHandler<MyArgs> lHandler = _OnChange;
        if (lHandler == null) return;
        lHandler(this, new MyArgs(0));
    } //
} // class
It is clear that events are not delegates. They restrict access rights from outside the declaring class. You could best describe events as wrappers around delegates.
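Here is a small sketch of what that wrapper means in practice. The class names Publisher and Subscriber are made up; from outside the declaring class only += and -= compile:

public class Publisher
{
    public event EventHandler OnChange = delegate { };
    public void RaiseEvent() { OnChange(this, EventArgs.Empty); } // only the declaring class can invoke
} // class

public class Subscriber
{
    public void Test(Publisher xPublisher)
    {
        xPublisher.OnChange += (sender, e) => Console.WriteLine("notified"); // allowed
        //xPublisher.OnChange(this, EventArgs.Empty); // compile error: cannot invoke from outside
        //xPublisher.OnChange = null;                 // compile error: only += and -= are allowed
    } //
} // class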
Whenever an exception is thrown during an event call, all following subscriber calls are skipped. And it is tricky to determine which calls were not executed, because the order of event calls is not guaranteed. In practice the handlers do execute in subscription order, but there is no guarantee.
static void EventExceptions1()
{
    MyActionEvent3 lEvent = new MyActionEvent3();
    lEvent.OnChange += (sender, e) => Console.WriteLine("Executed subscription 1");
    lEvent.OnChange += (sender, e) => { throw new Exception("OMG!"); };
    lEvent.OnChange += (sender, e) => Console.WriteLine("Executed subscription 3");
    lEvent.RaiseEvent();
} //
So you have to deal with exceptions manually if you want to execute as many event subscriptions as possible. You could add a try/catch block for each subscription. Or you could walk the invocation list yourself by calling the GetInvocationList() method [System.Delegate] and execute each item manually in its own try/catch block. Let’s have a look at the following practical solution:
public class MyActionEvent6
{
    public event EventHandler OnChange = delegate { };

    public void RaiseEvent()
    {
        List<Exception> lExceptions = new List<Exception>();
        foreach (Delegate lHandler in OnChange.GetInvocationList())
        {
            try { lHandler.DynamicInvoke(this, EventArgs.Empty); }
            catch (Exception ex) { lExceptions.Add(ex); }
        }
        if (lExceptions.Count > 0) throw new AggregateException(lExceptions);
    } //
} // class

static void EventExceptions6()
{
    MyActionEvent6 lEvent = new MyActionEvent6();
    lEvent.OnChange += (sender, e) => Console.WriteLine("Executed subscription 1");
    lEvent.OnChange += (sender, e) => { throw new Exception("OMG!"); };
    lEvent.OnChange += (sender, e) => Console.WriteLine("Executed subscription 3");

    try { lEvent.RaiseEvent(); }
    catch (AggregateException ex)
    {
        foreach (Exception lException in ex.InnerExceptions)
        {
            Console.WriteLine(lException.InnerException.Message);
        }
    }
} //
Exiting Tasks (advanced)
In the post “Exiting Threads” (https://csharphardcoreprogramming.wordpress.com/2013/12/03/exiting-threads/) we were using a parameter class to define an exitFlag (boolean) and tell a thread when to end. The .NET Framework supports a similar flag for tasks. The type that carries this flag is called CancellationToken.
static void ArbitraryTask1(CancellationToken xCancellationToken)
{
    while (!xCancellationToken.IsCancellationRequested)
    {
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss"));
        Thread.Sleep(1000);
    }
    Console.WriteLine("task says good-bye!");
} //

static void CancelTask1()
{
    CancellationTokenSource lCancellationTokenSource = new CancellationTokenSource();
    CancellationToken t = lCancellationTokenSource.Token;
    Task lTask = Task.Factory.StartNew(() => ArbitraryTask1(t), t);
    Thread.Sleep(5000);
    lCancellationTokenSource.Cancel();
    Thread.Sleep(1000);
} //
The program starts a task, waits 5 seconds and then cancels the task. This is straightforward.
In the next example we exit a task by throwing an exception. This is a valid solution. But in general you should avoid exceptions as much as possible. As the name already says, they are for exceptional situations and not for general solutions. If you use exceptions and catch blocks frequently in your code, you will definitely slow down your performance. Throwing an exception is very time-consuming.
Nevertheless the TPL uses exceptions extensively. For instance, in connection with PLINQ you will actually need to throw exceptions in order to properly report cancellation back to the caller.
static void ArbitraryTask2(CancellationToken xCancellationToken)
{
    ArbitraryTask1(xCancellationToken);
    xCancellationToken.ThrowIfCancellationRequested();
} //

static void CancelTask2()
{
    CancellationTokenSource lCancellationTokenSource = new CancellationTokenSource();
    CancellationToken t = lCancellationTokenSource.Token;
    Task lTask = Task.Factory.StartNew(() => ArbitraryTask2(t), t);
    Thread.Sleep(5000);
    try
    {
        lCancellationTokenSource.Cancel();
        lTask.Wait();
    }
    catch (AggregateException e)
    {
        Console.WriteLine("Exception message: " + e.InnerExceptions[0].Message);
    }
    Console.ReadLine();
} //
And the following is important. In fact not throwing an exception causes unexpected behavior. Unless I misunderstood something here, this seems to be a serious issue.
static void CancelTask3()
{
    CancellationTokenSource lCancellationTokenSource = new CancellationTokenSource();
    CancellationToken lToken = lCancellationTokenSource.Token;
    Task lTask = Task.Factory.StartNew(() => ArbitraryTask1(lToken), lToken);
    lTask.ContinueWith((x) => { Console.WriteLine("task cancelled"); }, TaskContinuationOptions.OnlyOnCanceled);
    lTask.ContinueWith((x) => { Console.WriteLine("task completed"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
    //Thread.Sleep(5000);
    lCancellationTokenSource.Cancel();
    Console.ReadLine();
} //
example output:
task cancelled
The output is exactly what we expect. But now let’s uncomment “Thread.Sleep(5000);”. Suddenly we get something like this:
example output:
18:07:37
18:07:38
18:07:39
18:07:40
18:07:41
task says good-bye!
task completed
The first 6 lines are fine. But then we get “task completed”. And all we did was add one Sleep() call to the code. If you have any idea what is causing it please leave a comment.
All I can say here is that throwing an exception solves the problem. A task apparently only ends up in the Canceled state when it observes the cancellation by throwing an OperationCanceledException for that token (or when the token is already canceled before the task starts running); a delegate that simply returns ends up as RanToCompletion.
static void CancelTask3()
{
    CancellationTokenSource lCancellationTokenSource = new CancellationTokenSource();
    CancellationToken lToken = lCancellationTokenSource.Token;
    //Task lTask = Task.Factory.StartNew(() => ArbitraryTask1(lToken), lToken); // without throwing an exception
    Task lTask = Task.Factory.StartNew(() => ArbitraryTask2(lToken), lToken); // with throwing an exception
    lTask.ContinueWith((x) => { Console.WriteLine("task cancelled"); }, TaskContinuationOptions.OnlyOnCanceled);
    lTask.ContinueWith((x) => { Console.WriteLine("task completed"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
    Thread.Sleep(5000);
    lCancellationTokenSource.Cancel();
    Console.ReadLine();
} //
output example:
18:13:30
18:13:31
18:13:32
18:13:33
18:13:34
task says good-bye!
task cancelled
You can also comment the “Thread.Sleep(5000)” out. The returned task state does not change.
Btw. the equivalent of “ThrowIfCancellationRequested()” is
if (token.IsCancellationRequested) throw new OperationCanceledException(token);
And here is a quick example of using timeouts in case your tasks take longer than expected.
static void CancelTask4()
{
    CancellationTokenSource lCancellationTokenSource = new CancellationTokenSource();
    CancellationToken t = lCancellationTokenSource.Token;
    Task lTask1 = Task.Factory.StartNew(() => ArbitraryTask1(t), t);
    Task lTask2 = Task.Factory.StartNew(() => ArbitraryTask1(t), t);
    int i = Task.WaitAny(new Task[] { lTask1, lTask2 }, 5000);
    if (i == -1)
    {
        Console.WriteLine("tasks timed out");
        lCancellationTokenSource.Cancel(); // you can comment this line out to see the change in behaviour
        if (!Task.WaitAll(new Task[] { lTask1, lTask2 }, 5000))
        {
            Console.WriteLine("OMG, something went wrong again!");
        }
    }
    Console.ReadLine();
} //
Concurrent collections (basics)
Accessing collections can be tricky in a multithreaded environment. You can use lock() to access data safely from different threads. .NET 4.0 introduced concurrent collections, which do not require explicit synchronization. Be careful using these. I conducted many performance tests and concurrent collections turned out to be slower quite often. In fact you have to benchmark your personal approach with the amount and frequency of data that you expect. Only then can you tell what kind of collection should be used.
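For comparison, here is a minimal sketch of the classic lock() approach that concurrent collections replace. The field and method names are made up:

private static readonly object _Lock = new object();
private static readonly Queue<int> _Queue = new Queue<int>();

static void AddItem(int xValue)
{
    lock (_Lock) { _Queue.Enqueue(xValue); } // every reader and writer must take the same lock
} //

static bool TryGetItem(out int xValue)
{
    lock (_Lock)
    {
        if (_Queue.Count == 0) { xValue = 0; return false; }
        xValue = _Queue.Dequeue();
        return true;
    }
} //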
BlockingCollection
Imagine some data is arriving at your PC and you would like to add that data to a queue, so that the buffer does not overflow. The data is then processed as soon as the CPUs have free capacity. BlockingCollection is a wrapper around other collections; the default underlying collection type is ConcurrentQueue. A BlockingCollection waits (blocks) for data to arrive in case it is empty.
As you know I love writing short code pieces rather than explaining a lot. By looking at the results you can understand the mechanism easily.
public static void BlockingCollection1()
{
    BlockingCollection<int> lCollection = new BlockingCollection<int>();

    Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 5; i++)
        {
            lCollection.Add(i);
            Thread.Sleep(200);
        }
        lCollection.CompleteAdding();
    });

    Task.Factory.StartNew(() =>
    {
        try { while (true) Console.WriteLine(lCollection.Take()); }
        catch (Exception e) { Console.WriteLine("exception thrown: " + e.Message); }
    });

    Console.ReadLine();
} //
example output:
0
1
2
3
4
exception thrown: The collection argument is empty and has been marked as complete with regards to additions.
Each number appears roughly 200 milliseconds after the previous one. Once lCollection.CompleteAdding() has been called, the pending Take() throws an exception telling us that no more elements are going to follow. This can be useful to end a task that is blocked waiting in Take().
public static void BlockingCollection2()
{
    BlockingCollection<int> lCollection = new BlockingCollection<int>();

    Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 5; i++)
        {
            lCollection.Add(i);
            Thread.Sleep(200);
        }
        lCollection.CompleteAdding(); // comment it out for testing purposes
    });

    foreach (int i in lCollection.GetConsumingEnumerable()) Console.WriteLine(i);
    Console.ReadLine();
} //
The above code behaves the same way, just the exception is not thrown.
When commenting “lCollection.CompleteAdding();” out, the behavior does not visibly change. But there is one big difference: the foreach does not know that you have finished adding to the BlockingCollection. Therefore foreach (in combination with .GetConsumingEnumerable()) acts like an endless loop and the program never arrives at Console.ReadLine().
Telling your program that you have completed adding to the collection is vital.
Let’s avoid “.GetConsumingEnumerable()” and see what happens:
public static void BlockingCollection3()
{
    BlockingCollection<int> lCollection = new BlockingCollection<int>();

    Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 5; i++)
        {
            lCollection.Add(i);
            Thread.Sleep(200);
        }
        lCollection.CompleteAdding();
    });

    //Thread.Sleep(2000);
    foreach (int i in lCollection) Console.WriteLine(i);
    Console.ReadLine();
} //
Holy cow! The program does not wait for the collection to be filled and does not print any output. Uncomment “Thread.Sleep(2000);” and the collection is properly filled by the time the program prints the output. The numbers 0 to 4 are printed in one shot, with no delay between them.
ConcurrentBag
There is no queue involved in the ConcurrentBag class, and it implements IEnumerable. TryTake() gets the next value. The method uses an “out” parameter, which is pretty smart: you can test the validity of the result and get the value itself in a single call. That is quite convenient in multithreading. Otherwise you would have to first test and then take the value, and to do that safely you would have to lock() the collection. This is not needed here.
ConcurrentBag allows duplicate entries. The order is arbitrary; do not expect it to behave like a list. You can add the numbers 0 to 5 and they may e.g. print in reverse order, but this does not necessarily happen.
There is one method that is not very useful in a multithreaded environment. TryPeek() can be found in some concurrent collections, but when you use it you cannot be sure that the value is still there when you then try to take it. You would have to use lock() again, and that in fact contradicts the idea of concurrent collections.
public static void ConcurrentBag1()
{
    ConcurrentBag<int> lCollection = new ConcurrentBag<int>();

    Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 5; i++)
        {
            lCollection.Add(i);
            Thread.Sleep(200);
        }
    });

    //Thread.Sleep(500);
    //Thread.Sleep(2000);
    int lResult;
    while (lCollection.TryTake(out lResult)) { Console.WriteLine(lResult); }
    Console.ReadLine();
} //
The program does not wait for the collection to be filled. If you are lucky you will see a “0” printed on screen. TryTake() is not blocking (waiting) at all. Play with the comments and have a look at the results. The longer you wait the more numbers show up.
public static void ConcurrentBag2()
{
    ConcurrentBag<int> lCollection = new ConcurrentBag<int>();
    for (int i = 0; i < 10; i++) lCollection.Add(i);
    foreach (int r in lCollection) Console.WriteLine(r);
    Console.ReadLine();
} //
Above is a quick demonstration of the IEnumerable. Note that the order of the output can be arbitrary.
The basic behavior of concurrent collections has been explained now. Stacks and queues are not new ideas in C#.
So I will just list the main points; a short sketch follows the list:
ConcurrentStack
– Push()/PushRange() to add data
– TryPop()/TryPopRange() to get data
– Last in first out (LIFO)
ConcurrentQueue
– Enqueue() to add data
– TryDequeue() to get data
– First in first out (FIFO)
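Here is the promised quick sketch of both classes. The method name StackAndQueue1() is made up; it simply demonstrates the LIFO/FIFO difference:

public static void StackAndQueue1()
{
    ConcurrentStack<int> lStack = new ConcurrentStack<int>();
    lStack.Push(1);
    lStack.PushRange(new int[] { 2, 3, 4 });
    int lValue;
    while (lStack.TryPop(out lValue)) Console.WriteLine("stack: " + lValue); // 4 3 2 1 (LIFO)

    ConcurrentQueue<int> lQueue = new ConcurrentQueue<int>();
    for (int i = 0; i < 4; i++) lQueue.Enqueue(i);
    while (lQueue.TryDequeue(out lValue)) Console.WriteLine("queue: " + lValue); // 0 1 2 3 (FIFO)

    Console.ReadLine();
} //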
ConcurrentDictionary
The Dictionary is also well known in C#. The concurrent version of it can atomically add, get and update entries. Atomic means that an operation starts and finishes as a single step and without interference from other threads.
TryAdd returns true if the new entry was added successfully. If the key already exists, this method returns false.
TryUpdate compares the existing value for the specified key with a specified value, and if they are equal, updates the key with a third value.
AddOrUpdate adds an entry if the key does not already exist, or updates if the key already exists.
GetOrAdd adds an entry if the key does not already exist and returns the value stored for the key either way.
public static void ConcurrentDictionary1()
{
    ConcurrentDictionary<string, int> lCollection = new ConcurrentDictionary<string, int>();

    if (lCollection.TryAdd("a", 1)) Console.WriteLine("successfully added entry: a/1");
    PrintConcurrentDictionary(lCollection);

    if (lCollection.TryUpdate("a", 2, 1)) Console.WriteLine("updated value to 2");
    PrintConcurrentDictionary(lCollection);

    lCollection["a"] = 3;
    PrintConcurrentDictionary(lCollection);

    int x = lCollection.AddOrUpdate("a", 4, (s, i) => i * i);
    PrintConcurrentDictionary(lCollection);

    int y = lCollection.GetOrAdd("b", 5);
    PrintConcurrentDictionary(lCollection);

    Console.ReadLine();
} //
example output:
successfully added entry: a/1
—————————————–
a = 1
updated value to 2
—————————————–
a = 2
—————————————–
a = 3
—————————————–
a = 9
—————————————–
b = 5
a = 9
PLINQ (basics)
Language-Integrated Query (LINQ) was added to C# to perform queries over any kind of data, e.g. objects or databases. Parallel Language-Integrated Query (PLINQ) is a further approach to accessing objects. As the name already tells us, it has to do with parallelism. The evolution from sequential queries (LINQ) to parallel queries (PLINQ) was predictable. The extension methods for PLINQ are defined in the class System.Linq.ParallelEnumerable.
public static void PLINQ1()
{
    int[] range = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

    Console.WriteLine("old school");
    for (int i = 0, n = range.Length; i < n; i++)
    {
        if (range[i] % 2 == 0) Console.WriteLine(range[i]);
    }

    Console.WriteLine("LINQ");
    var linq = from i in range where (i % 2 == 0) select i;
    foreach (int i in linq) Console.WriteLine(i);

    Console.WriteLine("LINQ2");
    var linq2 = range.Where(i => i % 2 == 0);
    foreach (int i in linq2) Console.WriteLine(i);

    Console.WriteLine("PLINQ1");
    var plinq = from i in range.AsParallel() where (i % 2 == 0) select i;
    foreach (int i in plinq) Console.WriteLine(i);

    Console.WriteLine("PLINQ2");
    var plinq2 = range.AsParallel().Where(i => i % 2 == 0);
    foreach (int i in plinq2) Console.WriteLine(i);

    Console.WriteLine("PLINQ3");
    var plinq3 = range.AsParallel().Where(i => { Thread.Sleep(1000); return (i % 2 == 0); });
    foreach (int i in plinq3) Console.WriteLine(i);

    Console.WriteLine("PLINQ3 sorted");
    var plinq3sorted = range.AsParallel().AsOrdered().Where(i => { Thread.Sleep(1000); return (i % 2 == 0); });
    foreach (int i in plinq3sorted) Console.WriteLine(i);

    Console.ReadLine();
} //
Interestingly, the runtime decides by itself whether it makes sense to execute your query in parallel. Thus parallel execution is not guaranteed unless you specify WithExecutionMode(ParallelExecutionMode.ForceParallelism).
You can get even more specific by using WithDegreeOfParallelism() and limiting the number of processors being used.
public static void PLINQ2()
{
    var range = Enumerable.Range(0, 25);
    var result = range.AsParallel().WithExecutionMode(ParallelExecutionMode.ForceParallelism).Where(i => i % 2 == 0);
    //var result = range.AsParallel().WithDegreeOfParallelism(4).Where(i => i % 2 == 0);
    //var result = range.AsParallel().WithExecutionMode(ParallelExecutionMode.ForceParallelism).WithDegreeOfParallelism(4).Where(i => i % 2 == 0);
    foreach (int i in result) Console.WriteLine(i);
    Console.ReadLine();
} //
And now we are facing something weird. We created a sorted result, but somehow the sorting gets lost! The reason is that, unlike foreach, ForAll() does not wait for all results before it starts executing; it processes each element as soon as it becomes available.
Hence we see an unexpected order.
public static void PLINQ3()
{
    var range = Enumerable.Range(0, 25);
    var plinq2sorted = range.AsParallel().AsOrdered().Where(i => { Thread.Sleep(1000); return (i % 2 == 0); });

    Console.WriteLine("sorted");
    foreach (int i in plinq2sorted) Console.WriteLine(i);

    Console.WriteLine("something is going wrong?");
    plinq2sorted.ForAll(i => Console.WriteLine(i));

    Console.ReadLine();
} //
example output:
sorted
0
2
4
6
8
10
12
14
16
18
20
22
24
something is going wrong?
6
0
4
2
14
10
12
8
22
18
16
20
24
Parallel queries can fail and throw exceptions. These exceptions do not stop the execution of the query; they are collected and returned in one single AggregateException that can be caught by the usual try/catch block.
public static void PLINQ4()
{
    var range = Enumerable.Range(0, 25);
    try
    {
        var plinq2sorted = range.AsParallel().AsOrdered().Where(i =>
        {
            Thread.Sleep(500);
            if (i > 15) throw new ArgumentException("exception for number " + i);
            return (i % 2 == 0);
        });
        foreach (int i in plinq2sorted) Console.WriteLine(i);
    }
    catch (AggregateException e)
    {
        Console.WriteLine("Exception thrown for " + e.InnerExceptions.Count + " results");
        foreach (var innerException in e.InnerExceptions) Console.WriteLine(innerException.Message);
    }
    Console.ReadLine();
} //
example output:
Exception thrown for 8 results
exception for number 16
exception for number 20
exception for number 17
exception for number 21
exception for number 19
exception for number 23
exception for number 22
exception for number 18
Exiting Threads
You can use the Thread.Abort() method to kill a thread. This throws a ThreadAbortException in that thread. The problem with this is that you do not know where exactly your program stops executing. It could be in the middle of an important calculation, which makes a proper resource cleanup impossible. A better way includes a shared variable: by testing that variable you can exit your thread at predefined code positions.
private class parameters
{
    //public double a;
    //public string s;
    public bool exitFlag = false;
} //

static private void myThread(object xParameters)
{
    parameters p = xParameters as parameters;
    int i = 0;
    while (!p.exitFlag)
    {
        Console.WriteLine("thread loop number " + i++);
        Thread.Sleep(1000);
    }
    Console.WriteLine("good bye!");
    Thread.Sleep(2000);
} //

static void Main(string[] args)
{
    Thread t = new Thread(new ParameterizedThreadStart(myThread));
    t.IsBackground = true;
    t.Name = "MyBackgroundThread";
    t.Priority = ThreadPriority.Normal;

    parameters p = new parameters();
    t.Start(p);

    Console.WriteLine("press return to exit the program");
    Console.ReadLine();
    p.exitFlag = true;
    t.Join();
} //
You can use a global static boolean to shut down a thread, but I think you will appreciate my solution using parameters as you can then reuse standard parameter interfaces and make your code more generic. You surely don’t want to deal with global variables when you have 100 threads running.
Exiting your application
In Windows.Forms applications you can use Application.Exit(). This method terminates all message loops and closes all windows. You can e.g. execute your cleanup code in the form’s FormClosing/FormClosed events.
This method does not terminate your threads. If you do not deal with all your foreground threads properly, then these threads keep running. Therefore you must take measures to end your threads yourself.
The Environment.Exit() method is not part of Windows.Forms. It kills the process; unsaved changes in forms may get lost. Again, you have to clean up properly and free resources before calling it.
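Here is a minimal sketch of the difference. The form, button and field names are made up, and _ExitFlag stands for a shared flag like the one in the thread example above:

private volatile bool _ExitFlag = false; // hypothetical shared flag checked by worker threads

private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    _ExitFlag = true; // cleanup code runs here when the form closes
} //

private void buttonExit_Click(object sender, EventArgs e)
{
    Application.Exit();    // ends the message loops; the FormClosing handler above still runs
    //Environment.Exit(0); // would kill the process immediately, skipping the cleanup events
} //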