Setting a variable equal to a number with one or more digits after the decimal point sets that variable's data type to float.
Presumably, this means that any subsequent calculation involving that variable will be carried out in floating point, to the maximum achievable precision. Does this mean I don't need to use the
float() function in those calculations?
For instance, if I have
f = 1/298.2572221
and f is subsequently used in a calculation, will I get a different result than if I had used
f = float(1)/298.2572221
Does it matter that, in the first expression, the first number encountered in the calculation is not floating point?
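For what it's worth, a quick check (assuming Python 3, where / is always true division) suggests the two forms are equivalent, since dividing an int by a float promotes the int to float before the division:

```python
# Division with a float operand yields a float either way;
# the int 1 is converted to 1.0 automatically.
f1 = 1 / 298.2572221
f2 = float(1) / 298.2572221

print(type(f1))    # <class 'float'>
print(f1 == f2)    # True
```

(In Python 2, / between two ints was floor division, so the explicit float() mattered there when both operands were ints; here the literal 298.2572221 is already a float, so the conversion happens regardless.)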