Integer vs single-precision floating point

Former Member
I have an app that I run on the Fenix 3 and that makes extensive use of floating point operations. Every 5 seconds the computation gets intensive enough to delay the refreshing of the screen output. I was thinking of converting as many operations as I could to integer operations to decrease CPU load. This is a time-consuming task requiring careful thought and choice of units in order to keep the desired precision throughout the code. Before embarking on it, I decided to first write a benchmark app to compare operations per second between integers and floats for addition, multiplication and division. It turns out the difference on the Fenix 3's K65 processor is negligible, so converting the code won't decrease CPU load. Here is a screenshot of the benchmark app running on the Fenix 3:
[Screenshot: benchmark app results on the Fenix 3]
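For reference, here is a minimal sketch of the kind of micro-benchmark described, in Monkey C. It assumes the standard Toybox.System.getTimer() millisecond timer; the function name, iteration count and constants are just placeholders:

```
using Toybox.System;

// Rough micro-benchmark: time N additions with Ints and with Floats
// and print the elapsed milliseconds for each.
function runBenchmark() {
    var iterations = 100000;

    // Integer addition
    var start = System.getTimer();      // milliseconds since boot
    var intAcc = 0;
    for (var i = 0; i < iterations; i++) {
        intAcc = intAcc + 3;
    }
    var intMs = System.getTimer() - start;

    // Float addition
    start = System.getTimer();
    var floatAcc = 0.0;
    for (var i = 0; i < iterations; i++) {
        floatAcc = floatAcc + 3.0;
    }
    var floatMs = System.getTimer() - start;

    System.println("Int add: " + intMs + " ms, Float add: " + floatMs + " ms");
}
```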
Now I am wondering about energy consumption. Might there be any difference, or will I have to write a benchmark to find out?
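One crude way to benchmark energy would be to read the battery percentage before and after a long run of each workload. A hedged sketch, assuming the standard Toybox.System.getSystemStats().battery field; the function name and iteration count are placeholders:

```
using Toybox.System;

// Crude battery-drain estimate: read the battery percentage, run a burst of
// float math, and read it again. In practice the bursts would need to be
// repeated from a Timer callback over many minutes (and kept short enough
// not to trip the Connect IQ watchdog) before the difference is measurable.
function sampleFloatDrain() {
    var before = System.getSystemStats().battery;   // percent, as a Float

    var acc = 0.0;
    for (var i = 0; i < 50000; i++) {
        acc = acc + 1.5;
    }

    var after = System.getSystemStats().battery;
    System.println("Battery used so far: " + (before - after) + " %");
}
```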

Thanks
Jesus