My repro environments are a PC running Windows 8.1 (to show the correct behavior of the Microsoft CLR) and a virtual machine running Ubuntu Server 14.04 and Mono 3.12.1 (to show the incorrect behavior of the Mono CLR).
I have a repository with repro code and steps: https://github.com/bradwilson/mono-repro.xunit
The current SynchronizationContext on Microsoft CLR is stored in the ExecutionContext, which flows through the task execution system. When a user sets the sync context, they are updating a copy-on-write context so that the setting of that sync context does not leak inappropriately back into the calling method (setting a sync context should only affect yourself and the things that you call, not your caller).
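The copy-on-write behavior can be observed directly. A minimal sketch using `AsyncLocal<T>` (which, like the flowed sync context on the Microsoft CLR, is stored in the ExecutionContext — the names here are illustrative, not from the repro):

```csharp
using System;
using System.Threading;

public static class Program
{
    // AsyncLocal<T> values are stored in the ExecutionContext -- the same
    // mechanism that flows the current sync context on the Microsoft CLR.
    static readonly AsyncLocal<string> Slot = new AsyncLocal<string>();

    public static string Demo()
    {
        Slot.Value = "caller";
        var context = ExecutionContext.Capture();

        // The mutation inside Run is made against a copy-on-write copy of
        // the captured context, so it does not leak back to the caller.
        ExecutionContext.Run(context, _ => Slot.Value = "callee", null);

        return Slot.Value; // still "caller"
    }

    public static void Main() => Console.WriteLine(Demo());
}
```

The same copy-on-write rule is what keeps a callee's call to SetSynchronizationContext from leaking back into its caller.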
It appears that the Mono CLR does not have the same behavior, and it caused a hanging issue with xUnit.net (https://github.com/xunit/aspnet.xunit/issues/6). It was discovered in the context of ASP.NET 5, but can be reproduced just using MCS and not involving ASP.NET at all.
This is potentially dangerous and can cause unexpected behavior, because the remainder of the calling method is now running on a context it should have no knowledge of. Luckily for xUnit.net, this just resulted in a hang when we tried to do a Thread.Join on the calling thread, and it was fairly simple for us to work around, but it's definitely incorrect behavior, and could cause unexpected issues for people using tasks with custom synchronization contexts.
There are two hacks necessary to prevent Mono from hanging when disposing the sync context. (This sample does not include them, but you could add code to show the problems; I wanted to keep the core repro simple so that it shows the original issue, not the hacks that were required to work around it.)
In xUnit.net, the original caller tells the worker object to clean itself up now that it's no longer in use. The worker object calls Dispose on the sync context, which does a Thread.Join across all the worker threads. When running on Mono, the caller is running on one of those threads (because it's still in the sync context), so the Thread.Join call hangs trying to join itself.
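A sketch of that first workaround (names are hypothetical, not xUnit.net's actual code): the object owning the worker threads guards its Join loop so it never tries to join the thread it is currently running on.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch: on Dispose, join all worker threads, but never the
// current thread -- on the affected Mono versions the caller may
// (incorrectly) still be executing on one of the context's own workers.
public class WorkerThreadOwner : IDisposable
{
    readonly List<Thread> workerThreads = new List<Thread>();

    public void Start(ThreadStart work)
    {
        var thread = new Thread(work);
        workerThreads.Add(thread);
        thread.Start();
    }

    public void Dispose()
    {
        foreach (var thread in workerThreads)
            if (thread != Thread.CurrentThread) // never join ourselves
                thread.Join();
    }
}

public static class Program
{
    public static void Main()
    {
        var owner = new WorkerThreadOwner();
        // Disposing from inside a worker would deadlock without the guard.
        owner.Start(() => owner.Dispose());
        Thread.Sleep(200);
        Console.WriteLine("no self-join hang");
    }
}
```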
So this workaround was the first attempt to fix the bug referenced above. It then revealed a second problem...
Since the caller is still running inappropriately on the sync context--which has now been disposed--the first fix caused additional async code in the caller to re-schedule work on the now-disposed sync context, which threw ObjectDisposedException. This workaround takes the dangerous approach of simply ignoring the request to run things asynchronously and runs them synchronously instead. We know this is "safe" in the xUnit.net case only because we know we're down to a single logical task of execution by the time this occurs, since the rest of the test runner machinery has shut down. A safer workaround would be to use QueueUserWorkItem to push the new work onto the thread pool, thereby simulating the default behavior.
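That safer alternative might look like this (a hypothetical sketch, not xUnit.net's actual implementation): once the context has been disposed, Post falls back to the thread pool instead of touching the dead worker machinery.

```csharp
using System;
using System.Threading;

// Hypothetical sketch: after Dispose, Post no longer targets the (dead)
// worker threads. Queuing to the thread pool simulates the default
// SynchronizationContext behavior; running the callback synchronously
// (the actual workaround) is only safe when a single logical task remains.
public class DisposableSyncContext : SynchronizationContext, IDisposable
{
    volatile bool disposed;

    public void Dispose() => disposed = true;

    public override void Post(SendOrPostCallback d, object state)
    {
        if (disposed)
        {
            ThreadPool.QueueUserWorkItem(_ => d(state)); // safer fallback
            return;
        }

        // The live path would hand the callback to the context's own
        // worker threads; the default (thread pool) behavior stands in
        // for that machinery in this sketch.
        base.Post(d, state);
    }
}

public static class Program
{
    public static void Main()
    {
        var ctx = new DisposableSyncContext();
        ctx.Dispose();
        var done = new ManualResetEventSlim();
        ctx.Post(_ => done.Set(), null); // still runs, on the thread pool
        Console.WriteLine(done.Wait(5000) ? "callback ran" : "callback lost");
    }
}
```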
I think this is by design.
I used the https://github.com/bradwilson/mono-repro.xunit repro; if you have a better one, please attach it. The important aspect of the repro is that the CLR continues on a different thread once the await returns, whereas Mono reuses one of the existing threads:
>> FINISHED: Worker 4 on 12
Current thread: 8; Sync context: (null)
>> FINISHED: Worker 9 on 4
Current thread: 4; Sync context: MaxConcurrencySyncContext
If you force Mono to continue on a new thread, you get the same behaviour as .NET; similarly, if you force .NET to continue on one of the threads where the synchronization context was set, you get the same behaviour as Mono.
How could this be "by design" when you're significantly and dangerously deviating from the behavior defined by the CLR?
Let's be clear on what this bug is:
- Function A calls Function B.
- Function B starts using a custom sync context.
- After returning from the invocation, Function A should not be running on Function B's sync context.
This is outright broken behavior, and contrary to the reference implementation (the CLR).
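In code, those three steps look roughly like this (a minimal sketch of the expected behavior; the full repro is in the linked repository, and the CustomSyncContext/FunctionA/FunctionB names are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CustomSyncContext : SynchronizationContext { }

public static class Program
{
    // Function B: installs a custom sync context, then hits an await so
    // the change has to flow through the task machinery.
    static async Task FunctionB()
    {
        SynchronizationContext.SetSynchronizationContext(new CustomSyncContext());
        await Task.Yield();
    }

    // Function A: after the await completes, it should still see its own
    // (null) context. The affected Mono versions instead resumed it with
    // CustomSyncContext still current.
    public static async Task<string> FunctionA()
    {
        await FunctionB();
        return SynchronizationContext.Current?.GetType().Name ?? "(null)";
    }

    public static void Main() =>
        Console.WriteLine(FunctionA().GetAwaiter().GetResult()); // "(null)" on a correct runtime
}
```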
Fixed in master