Mutation and the Risk of Bugs

Why choose immutable types when mutable types are more powerful?

Performance is often the reason we choose mutable objects. Another is convenient sharing: two parts of a program can communicate by reading and writing a common, mutable data structure... (but excuse me, that's just a global variable...)

Mutable Types are Advanced!

For example, String is an immutable type: a String object always represents the same sequence of characters. The Java StringBuilder, by contrast, is a mutable type, with methods to delete, insert, or replace characters. When you need to make many changes to a string, StringBuilder is easier to use: one method call, and done. With String alone, you would have to rebuild new strings by copying characters around in loops. Feature-wise, it sounds like a knife vs. lightsaber kind of fight, right?
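To make the contrast concrete, here is a small sketch. The method names withString and withBuilder are invented for this illustration; only String and StringBuilder themselves come from the text above.

```java
public class Concat {
    // Immutable String: each concatenation builds a whole new String,
    // copying every previous character (O(n^2) total work for n appends).
    static String withString(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i; // allocates a brand-new String every iteration
        }
        return s;
    }

    // Mutable StringBuilder: appends in place into one growable buffer
    // (amortized O(n) total work for n appends).
    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withString(10));  // 0123456789
        System.out.println(withBuilder(10)); // 0123456789
    }
}
```

Both produce the same result; the difference is how much copying happens along the way.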

The truth is that immutable types are safer from bugs, easier to read, and more ready for change. Mutable types make it harder to understand what a program is doing and harder to enforce contracts, and bugs that arise from mutation are harder to trace.

Risky Example
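The example code this section discusses is not reproduced here, so what follows is a reconstruction: the names sum, sumOfAbsValues, data, and main come from the discussion below, while the class name Risky and the data values (-8, -6, -3, chosen to sum to -17) are assumptions.

```java
import java.util.Arrays;
import java.util.List;

public class Risky {
    /** @return the sum of the numbers in the list */
    static int sum(List<Integer> data) {
        int total = 0;
        for (int x : data) {
            total += x;
        }
        return total;
    }

    /** @return the sum of the absolute values of the numbers in the list */
    static int sumOfAbsValues(List<Integer> data) {
        // Rogue mutation: rewrites the caller's list in place!
        for (int i = 0; i < data.size(); i++) {
            data.set(i, Math.abs(data.get(i)));
        }
        return sum(data);
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(-8, -6, -3);
        System.out.println(sumOfAbsValues(data)); // 17
        System.out.println(sum(data));            // 17, not -17: data was mutated
    }
}
```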

Would you expect sum(data) to be -17, or 17? If you only read main(), would you have suspected the rogue mutation inside sumOfAbsValues()? Judging by its name, most readers would expect it only to return a sum of absolute values.

The Risks of Mutations

What makes mutable types risky is having many aliases for the same mutable object. Used locally, with a single reference to the object, a mutable type is safe. But once the object is shared, avoiding bugs requires defensive copying, which forces the program to do extra computation and use extra memory for every client. If we used an immutable type instead, different parts of the program could safely share the same object in memory. Immutable types can be more efficient here, because they never need to be defensively copied.
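As a sketch of what defensive copying costs, consider a hypothetical class holding a mutable list (the class and method names below are invented for illustration; List.copyOf requires Java 10+):

```java
import java.util.ArrayList;
import java.util.List;

public class Schedule {
    private final List<String> events = new ArrayList<>();

    public void add(String event) {
        events.add(event);
    }

    // Defensive copy: protects the internal list from aliasing, but
    // every caller pays for a fresh copy in both time and memory.
    public List<String> getEvents() {
        return new ArrayList<>(events);
    }

    // Immutable snapshot: List.copyOf builds one unmodifiable list that
    // any number of clients can then share without further copying.
    public List<String> getEventsSnapshot() {
        return List.copyOf(events);
    }
}
```

A client that mutates the result of getEvents() changes only its own copy; a client that tries to mutate the snapshot gets an UnsupportedOperationException instead of silently corrupting shared state.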

But mutable objects are ready for change! Yes, change is in the very definition of mutation, but it is not the kind of "change" we mean when we talk about readiness for change. The question is whether we can alter the code without extensive rewriting or introducing bugs.

Suffice it to say: if the spec does not require us to mutate an input, then it is best to assume mutating that input is not allowed. Surprise mutations lead to hard-to-find bugs.
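Under that assumption, a method like sumOfAbsValues can compute its answer without touching its input at all. This sketch (the class name NoMutation is ours) uses a stream, one of several mutation-free approaches:

```java
import java.util.Arrays;
import java.util.List;

public class NoMutation {
    /** @return the sum of the absolute values in data; data is not modified */
    static int sumOfAbsValues(List<Integer> data) {
        // Reads the list, never writes it: the caller's data stays intact.
        return data.stream().mapToInt(Math::abs).sum();
    }

    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(-8, -6, -3);
        System.out.println(sumOfAbsValues(data)); // 17
        System.out.println(data);                 // [-8, -6, -3], unchanged
    }
}
```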