Bug 15987 - Floating point results differ between .NET and mono
Summary: Floating point results differ between .NET and mono
Alias: None
Product: Runtime
Classification: Mono
Component: JIT
Version: 3.2.x
Hardware: PC Windows
Importance: --- normal
Target Milestone: ---
Assignee: Bugzilla
Depends on:
Reported: 2013-11-07 06:29 UTC by Karl Bergström
Modified: 2013-12-11 11:15 UTC (History)
4 users

Is this bug a regression?: ---
Last known good build:

Notice (2018-05-24): bugzilla.xamarin.com is now in read-only mode.


Related Links:

Description Karl Bergström 2013-11-07 06:29:20 UTC
Our issue is that we're unable to reproduce floating-point-sensitive simulations across runtimes.

I initially created a shorter program that produced differing floating point results between mono and .NET x86, but after further testing realized that Microsoft's .NET x86 produced different results depending on whether the debugger was attached or not. This did not happen with .NET x64, which produced the same results as mono x86 on Windows.

The mono versions I used were 3.2.3 on Linux (built from source) and the official 3.2.3 x86 binary on Windows.

I now have a longer program that definitely produces different results between all of .NET x86, .NET x64 and Mono x86 on Windows. On Linux, Mono's results varied between x86 and x64.


What we would like is floating-point behaviour that can be reproduced between Mono on Linux and .NET. Mono x86 produces the same results on Windows and Linux, but Mono x86 on Linux does not produce the same results as Mono x64 on Linux.

We'd like to have behaviour that matches between .NET and Mono, but results vary when running .NET x86 with and without a debugger attached in Visual Studio. Results match when running .NET x64 with and without a debugger.

Because of this, it seems hopeless to make Mono x86 behaviour match .NET. Instead, we could take the x64 behaviour and try to match that across platforms, but x64 on Windows is not listed as officially supported on the Mono website and no official binaries are available.

In short, we'd like mono to match the floating-point behaviour of .NET.


My initial short program can be found here:

It reproduces different behaviours on .NET x86 with and without a debugger attached. I'm not entirely familiar with the standard, but my understanding was that conv.r4/conv.r8 instructions (which wrap the calls to math libraries) would force the runtime to narrow the values (fetch them from the FPU registers and store them into 32/64-bit locations), disallowing different behaviours across runtimes. I suppose different rounding modes could be an issue here? I'm not familiar enough with this, but would be interested in an explanation.
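[Editorial note] As background on the narrowing described above, the effect of conv.r4 can be modeled in Python with the standard library. This is only an illustration of the IEEE-754 double-to-single rounding, not Mono's implementation:

```python
import struct

def narrow_r4(x):
    # Model of conv.r4: round a double to the nearest IEEE-754 single,
    # then widen it back. Any extra precision is irreversibly discarded.
    return struct.unpack('<f', struct.pack('<f', x))[0]

x = 1.0 / 3.0             # a double with 53 significand bits
y = narrow_r4(x)          # only 24 significand bits survive

print(x == y)             # False: narrowing changed the value
print(narrow_r4(y) == y)  # True: once narrowed, narrowing again is a no-op
```

If a runtime honours this narrowing at every conv.r4, an 80-bit x87 intermediate and a 64-bit SSE intermediate converge to the same 32-bit value; skipping it is what lets results diverge.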
Comment 1 Zoltan Varga 2013-12-06 14:17:34 UTC
A minimal testcase:
using System;

class Program
{
	static void Main(string[] args)
	{
		byte[] arr = new byte[] { 100, 124, 138, 182, 177, 43, 240, 63 };
		double d = BitConverter.ToDouble(arr, 0);

		foreach (var b in BitConverter.GetBytes(1f / (float)d))
			Console.Write("" + b + " ");
		Console.WriteLine();
	}
}
This prints 69 76 125 63 on x86, and 70 76 125 63 on amd64.
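[Editorial note] The one-ulp difference comes down to when the double operand is narrowed to single precision. A sketch of both evaluation orders in Python (standard library only, assuming a little-endian byte layout as in the testcase above):

```python
import struct
from fractions import Fraction

raw = bytes([100, 124, 138, 182, 177, 43, 240, 63])
d = struct.unpack('<d', raw)[0]          # the double from the testcase

# Late narrowing: divide in double precision and round to single only at
# the end (the effect of skipping the conv.r4 narrowing of (float)d).
late = struct.pack('<f', 1.0 / d)

# Early narrowing: round d to single first, then take the correctly
# rounded single-precision quotient, computed exactly with rationals.
df = struct.unpack('<f', struct.pack('<f', d))[0]   # (float)d
q = Fraction(1) / Fraction(df)                      # exact 1 / (float)d
# The quotient lies in [0.5, 1), so its 24-bit significand is the
# nearest integer to q * 2**24 (round half to even).
s = round(q * 2**24)
early = struct.pack('<f', s / 2**24)

print(' '.join(str(b) for b in early))   # 69 76 125 63 (the x86 output)
print(' '.join(str(b) for b in late))    # 70 76 125 63 (the amd64 output)
```

Under this model the amd64 result corresponds to dividing in double precision and narrowing only at the end, while the x86 result corresponds to narrowing the operand first, as conv.r4 requires.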
Comment 2 Rodrigo Kumpera 2013-12-11 11:15:54 UTC
Fixed in master.