Ok! Quick short post.
There are many ways to write even trivial functions. For example, f(x) = (5.f * x) can be written in C++ in ways that will produce different output assembly. Shown here is the Compiler Explorer view of a few of those functions:
Notice that, despite all of them being simple, only a few implementations are reduced to a single 'mul' instruction. This holds even when compiling with the fast-math flag ('-fp:fast'), which lifts the floating-point restrictions that would otherwise prevent this transformation.
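For readers without the screenshot to hand, here is a rough sketch of the kind of variants being compared. These are not the exact functions from the Compiler Explorer view; the names and bodies (apart from the idea behind "Five_b") are illustrative assumptions. Each one computes f(x) = (5.f * x), yet compilers will not necessarily collapse every form to a single multiply.

```cpp
// Illustrative variants only -- not the exact functions from the screenshot.
// Each computes f(x) = 5.f * x, but not all of them compile down to a
// single 'mul', even with fast-math enabled.

float Five_a(float x) {
    return 5.f * x;                 // the obvious form: one multiply
}

float Five_b(float x) {
    return x + x + x + x + x;       // repeated addition of x
}

float Five_c(float x) {
    float r = 0.f;
    for (int i = 0; i < 5; ++i)     // accumulate x in a loop
        r += x;
    return r;
}

float Five_d(float x) {
    return (4.f * x) + x;           // split into a scaled term plus x
}
```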
Since small operations like this are common in most applications, this is a problem affecting most programs built with these modern C++ compilers. (Disclaimer: Intel seemed to handle this better.)
Our latest research into function generation includes a step that can resolve this problem. When passed a function such as "Five_b" above, it can correctly identify at build time, using specific machine learning approaches, that this is simply "f(x) = (5.f * x)" and return the source code to replace it (admittedly, pretty ugly at the moment).
More interestingly, we can identify sub-expressions of larger functions and return replacements or approximations of those sub-expressions.
A very impressive result so far is that the system is able to recognise more complex functions, such as the sum of the integers from 1 to 'x', and replace them with the simple closed form "f(x) = (x(x+1))/2". The same can be done for replacing Fibonacci sequences with approximations, and for replacing very expensive functions with table look-ups.
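As a hedged sketch of that kind of rewrite (not the tool's actual output), the loop below sums the integers 1..x, and the closed form returns the same value in constant time:

```cpp
// Sum of the integers 1..x, written two ways.

unsigned SumToX_loop(unsigned x) {
    unsigned total = 0;
    for (unsigned i = 1; i <= x; ++i)   // O(x) additions
        total += i;
    return total;
}

unsigned SumToX_closed(unsigned x) {
    return (x * (x + 1)) / 2;           // f(x) = x(x+1)/2, O(1)
}
```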
The current training set is limited, but each test adds to our database and improves the efficiency of the system.
We will be using this system going forward for automated approximate function generation and to aid in our exploration of the accuracy-performance design space.