If you already know why, feel free to skip this part and jump straight to the next one.
When we learn programming, we are taught that there are two kinds of numeric types: integers, mostly used for counting, and floating-point types, like float, intended for measuring. Integers have no decimal part, so when you need whole numbers, such as a count of cows, you use an integer. When you need a fractional part, such as the length of a cow's tail, you use a float. So far so good.
Things get tricky when you get to money. Since money is counted rather than measured, in theory you should use integers. But then there are those pesky currencies with decimal subunits, like dollars, euros, and pounds, and an inexperienced programmer reaches for floats because it seems natural. Unfortunately, floats are not exact in some circumstances: binary floating point cannot represent most decimal fractions, such as 0.10, exactly.
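To see the problem concretely, here is a minimal sketch in Python (the language choice is an assumption; the article names none). It adds ten cents three times with floats, then repeats the sum with the standard library's decimal module, which performs exact base-10 arithmetic:

```python
from decimal import Decimal

# Binary floats cannot represent 0.10 exactly, so tiny errors creep in.
price = 0.10
total = price + price + price
print(total)          # 0.30000000000000004
print(total == 0.30)  # False

# Decimal stores values in base 10, so the same sum is exact.
price = Decimal("0.10")
total = price + price + price
print(total)                     # 0.30
print(total == Decimal("0.30"))  # True
```

Note that Decimal is constructed from a string here: Decimal(0.10) would inherit the float's binary rounding error before the decimal arithmetic even begins.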