I think I might have stumbled over the problem, it seems to work but it would be nice to have confirmation
You made a number of misstatements, and your use of CDec is unnecessary.
If you want to emulate Excel INT using VBA Int, the following demonstrates a straightforward and reliable way to do it.
VBA Code:
Sub doit()
    Dim a As Long, b As Double, c As Double
    a = 559
    b = 1.118
    c = a / b
    MsgBox Int(c) & vbNewLine & Int(c & "")
End Sub
The first Int displays 499. The second Int displays 500.
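The same two results can be reproduced outside VBA. Here is a minimal Python sketch (Python floats are the same 64-bit binary floating-point as VBA Double), using the "%.15g" format to emulate the round-to-15-significant-digits effect that converting to a string has:

```python
import math

a = 559
b = 1.118
c = a / b  # 499.99999999999994..., infinitesimally below 500

# Full binary precision, analogous to VBA Int(c):
print(math.floor(c))                   # 499

# Round to 15 significant digits first, analogous to Int(c & ""):
print(math.floor(float("%.15g" % c)))  # 500
```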
-----
The point is: Excel INT rounds its argument to 15 significant decimal digits before truncating to an integer.
Converting a numeric value to a string (e.g. c & "") is one way to round to 15 significant digits.
Although that provides the desired result in this particular case, it is actually a defect, IMHO.
For example, a typical method of randomly selecting one of n array items indexed from zero is to calculate the index as =INT(n*RAND()).
We expect the result to be 0 to n-1 because RAND() is less than 1.
But if n=20 and RAND() returns 0.999999999999997+4.44E-16 or greater, for example, INT(n*RAND()) returns 20, not 19.
(I actually entered that error with RAND in Excel 2003. It is unclear whether RAND in Excel 2010 and later returns a value that large. But of course, it is the principle, not the example, that matters.)
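Whatever RAND() returns in a particular Excel version, the principle is easy to demonstrate in Python (same 64-bit doubles), using the largest double below 1 as a hypothetical worst-case RAND() value and "%.15g" to emulate the 15-significant-digit rounding:

```python
import math

n = 20
r = 1 - 2**-53  # largest 64-bit double below 1.0; hypothetical worst-case RAND()
x = n * r       # 19.999999999999996..., still strictly below 20

print(math.floor(x))                   # 19: truncation at full binary precision
print(math.floor(float("%.15g" % x)))  # 20: truncation after 15-digit rounding
```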
-----
VBA Int does not round its argument to 15 significant decimal digits. It relies on the full precision of 64-bit binary floating-point.
(In fact, in some cases, it relies on the full precision of 80-bit binary floating-point, which is used internally in Intel-compatible CPUs.)
-----
I am wrapping them inside convert decimal function to retain the extra precision I believe as they then are treated as decimal datatype instead of double
That is incorrect. Ironically, your use of CDec works because it reduces the precision, not increases it.
First, the sub-expression a/b is evaluated as type Double because its terms are type Long and type Double.
"a" and "b" are not converted to type Decimal before performing the division.
Second, CDec effectively rounds a type Double argument to 15 significant digits, presumably by converting it to a string.
To demonstrate, note that the exact decimal representation of the binary result of a/b is 499.99999999999994315658113919198513031005859375, which we can approximate by 499.99999999999994, the first 17 significant digits (rounded), with no loss of binary precision.
VBA Code:
Sub doit()
    Dim a As Long, b As Double, c As Double
    a = 559
    b = 1.118
    c = a / b
    MsgBox (a / b = Val("499.99999999999994")) & _
        vbNewLine & a / b & "" & _
        vbNewLine & CDec(a / b) & _
        vbNewLine & CDec("499.99999999999994")
End Sub
displays:
True
500
500
499.99999999999994
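For readers who want to poke at this outside VBA, Python's decimal module makes the same point: Decimal of a float captures the exact binary value, while formatting to 15 significant digits first ("%.15g" here, roughly the string-based rounding CDec appears to perform) discards exactly the information that distinguishes the quotient from 500:

```python
from decimal import Decimal

x = 559 / 1.118

# Exact decimal expansion of the 64-bit binary quotient:
print(Decimal(x))  # 499.99999999999994315658113919198513031005859375

# Shortest string that round-trips the double (17 significant digits here):
print(repr(x))     # 499.99999999999994

# Rounded to 15 significant digits first, as CDec appears to do:
print(Decimal("%.15g" % x))  # 500
```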
-----
1 / 1.118 for example = 0.894454382826476, which is 15 decimal places but double datatype is rounding to 14 decimal places max
That is incorrect.
First, 1/1.118 displays 0.894454382826476 because both Excel and VBA arbitrarily limit formatting to the first 15 significant digits (rounded).
But the actual precision can be much more.
In this case, 1/1.118 is exactly 0.8944543828264757134860474252491258084774017333984375, which can be approximated by 0.89445438282647571 with no loss of binary precision.
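The extra precision beyond what is displayed can be inspected directly in Python, whose floats are the same 64-bit doubles; "%.15g" approximates the 15-significant-digit display limit, and Decimal exposes the exact stored value:

```python
from decimal import Decimal

x = 1 / 1.118

# What 15-significant-digit formatting (Excel/VBA display) shows:
print("%.15g" % x)  # 0.894454382826476

# The exact value actually stored in the 64-bit double:
print(Decimal(x))
# 0.8944543828264757134860474252491258084774017333984375
```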
Second, neither Excel nor VBA rounds to 14 significant digits automatically.
Finally, the issue has nothing to do with the binary representation of 1/1.118 per se.
Instead, it has to do with the binary representation of 1.118.
1.118 cannot be represented exactly in 64BFP. Instead, it must be approximated in binary by a sum of powers of 2 drawn from 53 consecutive bit positions ("bits").
Consequently, the exact decimal representation is 1.1180000000000001048050535246147774159908294677734375.
Since that is infinitesimally larger than 1.118, 559/1.118 is infinitesimally smaller than 500.
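This, too, can be verified with Python's decimal module (same 64-bit doubles): the stored approximation of 1.118 is slightly above 1.118, so 559 divided by it lands just below 500:

```python
from decimal import Decimal

# Exact value of the 64-bit double nearest to 1.118 -- slightly too large:
print(Decimal(1.118))
# 1.1180000000000001048050535246147774159908294677734375

# So the quotient falls infinitesimally short of 500:
print(559 / 1.118 < 500)  # True
print(Decimal(559 / 1.118))
# 499.99999999999994315658113919198513031005859375
```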