This content originally appeared on DEV Community and was authored by Vipul Kumar
At first glance, it feels like Java is broken. But the real culprit is how floating-point numbers are represented, not the language.
System.out.println(0.1 + 0.2 == 0.3); // false ❌
Here’s why:
👉 Java’s double is a 64-bit binary floating-point type, as defined by the IEEE 754 standard.
👉 Numbers like 0.1 and 0.2 can’t be represented exactly in binary.
👉 So 0.1 + 0.2 evaluates to 0.30000000000000004, not exactly 0.3.
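You can see this for yourself by printing the sum, and by using the BigDecimal(double) constructor, which exposes the exact binary value Java stores for the literal 0.1 (fully qualified here so the snippet stands alone):
System.out.println(0.1 + 0.2); // 0.30000000000000004
System.out.println(new java.math.BigDecimal(0.1)); // 0.1000000000000000055511151231257827021181583404541015625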
Real-world implications:
👉 Financial apps don’t use double for money. Instead, they rely on BigDecimal for precise arithmetic:
BigDecimal x = new BigDecimal("0.1");
BigDecimal y = new BigDecimal("0.2");
System.out.println(x.add(y).equals(new BigDecimal("0.3"))); // true ✅
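One subtlety to keep in mind: BigDecimal.equals() compares the scale as well as the numeric value, so the check above passes only because both sides happen to have scale 1. For a purely numeric comparison, compareTo() is safer:
System.out.println(new BigDecimal("0.30").equals(new BigDecimal("0.3"))); // false: scales differ (2 vs 1)
System.out.println(new BigDecimal("0.30").compareTo(new BigDecimal("0.3")) == 0); // true: numerically equal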
👉 Lesson:
Floating-point is great for scientific calculations, but never for money.
If you’re handling currency, billing, or financial logic, reach for BigDecimal, not double.
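And if you do stick with double for non-monetary work, avoid == comparisons entirely. A common workaround is an epsilon tolerance; the 1e-9 below is just an illustrative threshold, not a universal constant:
double sum = 0.1 + 0.2;
double epsilon = 1e-9; // tolerance chosen for illustration; pick one that suits your domain
System.out.println(Math.abs(sum - 0.3) < epsilon); // true ✅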
Have you ever been bitten by a floating-point bug in production?
I am @vipulkumarsviit. Let's stay in touch: https://www.linkedin.com/in/vipulkumarsviit/