TexasEngineer
New Member
Joined: Feb 16, 2005
Messages: 3
My spreadsheet has a bunch of values that I input manually.
I then ask it to perform calculations: equations that use these numbers as variables. Each equation produces a result that becomes a variable in the next equation, and so on, six times over.
I want Excel to do these calculations using only single-precision floating point, so I can observe the round-off error that results from using singles instead of doubles.
With Excel's defaults these equations are computed very precisely, but when I run the same calculations in embedded software on a microprocessor, the answers are skewed because the microcontroller only supports single-precision floating point. I would like to use Excel to simulate the calculations the way the microprocessor performs them.
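One idea I've been toying with is a small VBA user-defined function that forces a value through single precision, so each intermediate result gets rounded the way the microcontroller would round it. A minimal sketch (the name ToSingle is just my placeholder):

```vba
' Minimal sketch: coerce a value to 32-bit single precision.
' Place in a standard VBA module (Alt+F11 > Insert > Module).
Public Function ToSingle(x As Double) As Double
    ' CSng rounds x to the nearest Single; assigning that result
    ' back to a Double is exact, so the cell shows the rounded value.
    ToSingle = CSng(x)
End Function
```

Each step of the chain would then be wrapped in the worksheet, e.g. =ToSingle(ToSingle(A1*B1)+C1), so every intermediate result is rounded to single precision before it feeds the next equation. I'm not sure this captures every rounding quirk of the microcontroller's FPU, which is partly why I'm asking.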
Any suggestions?