In general, my thinking is: an integer value can be stored in a float without trouble, but a float value cannot, in general, be converted to an integer without losing the fractional part.
So it's like providing the most general representation possible, so that it can be used later in any case.
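To make the trade-off concrete, here is a small sketch in Python (not T-SQL) of where each direction of conversion loses information; the same behavior applies to SQL Server's float, which is an IEEE 754 double with a 53-bit mantissa:

```python
# Small integers round-trip through a double-precision float exactly.
assert float(10) == 10.0
assert int(10.0) == 10

# Converting a float with a fractional part to int discards the fraction.
assert int(10.75) == 10

# But very large integers exceed the 53-bit mantissa of a 64-bit float,
# so even the int -> float direction is not always lossless.
big = 2**53 + 1
assert float(big) == float(2**53)  # the +1 is silently lost
```

So float is only "more general" within its precision limits; exact integer arithmetic on large values is one thing it cannot guarantee.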
I would like to know why they picked float for MSSQL. I would have thought some integer type would have been far more efficient, with better performance. As a pure guess, I wonder whether they thought that at some point they would store the actual value instead of using an implied decimal position.