
I really like the pure function approach for many reasons, but my main concern is whether it carries a considerable performance penalty, especially in the case of heavy arguments.

For example, compare these two versions:

// impure function
import bigState from './somewhere' // imagine a whole state with lots of values.

const impure = () => {
  // do something with bigState
}

// pure function
const pure = (bigState) => {
  // do something with bigState
}

Would an app doing considerable work with these kinds of functions and arguments see a penalty?

CC BY-SA 4.0

2 Answers


If you want to use pure functions, you have to treat the object you pass in as read-only. To modify state, your pure function has to return a modified copy of the state rather than update the original.

The size of your state is not really relevant as far as passing parameters is concerned. You pass in a (small) reference to the large state object(s), not the actual objects themselves. The parameter is the same size no matter how big your state is.
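A quick sketch of this point (the names here are made up for illustration): the function receives a reference to the state, not a copy, so the call cost does not depend on the state's size:

```javascript
// A large state object; only a reference to it is passed below.
const bigState = { items: new Array(100000).fill(0), user: { name: "a" } };

// `state` is a reference to the same object, so calling this
// costs the same no matter how large the state grows.
const pure = (state) => state.items.length;

// Proof that no copy was made: the parameter is the same object.
const sameObject = (state) => state === bigState;

pure(bigState);       // 100000
sameObject(bigState); // true
```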

If you have a very large state, then using strictly pure functions will cause a lot of copying of the state each time you make a modification.
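A minimal sketch of such a pure update using the spread operator. Note that the copy is shallow, which is also why it is cheap:

```javascript
const state = { count: 0, items: ["a", "b"] };

// A pure update returns a new object instead of mutating `state`.
const increment = (s) => ({ ...s, count: s.count + 1 });

const next = increment(state);
// next.count is 1, state.count is still 0.
// The copy is shallow: next.items is the *same* array as state.items.
```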

To get around this you could look at using state management tools such as immutable.js or redux to provide performance optimisations on your behalf.

UPDATE

I should add that the important thing about a pure function is not so much how it gets its input from the calling environment, but what it does with that input. If a function has side effects then it is not pure, it does not matter whether it used a reference parameter or a global variable to cause the side-effect.
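To illustrate (hypothetical functions): both of the impure variants below cause side effects, one through a module-level variable and one through a reference parameter; only the last is pure:

```javascript
let total = 0; // module-level state

// Impure: causes a side effect via an outer variable.
const addImpure = (n) => { total += n; };

// Also impure: mutates its argument (a side effect through a reference).
const pushImpure = (arr, n) => { arr.push(n); };

// Pure: no side effects, the result depends only on the inputs.
const addPure = (sum, n) => sum + n;
```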

In functional languages (such as Haskell) the compiler will help prevent you from creating impure functions. Languages such as javascript do not have direct functional programming support, you have to ensure repeatability and avoid side-effects by writing functional code yourself.

One of the easiest ways to help enforce this in javascript/typescript is to use immutable state*. If your state is immutable, it is harder to accidentally change it during the function's execution. It also helps ensure repeatable results when passing reference variables: if the object being referenced is immutable, then the function can be relied upon to give consistent results when a reference to that same object is passed in again.

*Of course - very little in javascript is actually read-only. There are always ways for a developer to subvert your read-only intentions if they try hard enough.
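A minimal sketch of this using the built-in Object.freeze, which illustrates both the protection and its limits (it is shallow):

```javascript
"use strict"; // in strict mode, writing to a frozen object throws

const state = Object.freeze({ count: 0, nested: { flag: true } });

// state.count = 1;        // TypeError in strict mode, silently ignored otherwise
state.nested.flag = false; // freeze is shallow: nested objects stay mutable
```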


I would like to add that, because immutability makes things so predictable, an immutable data structure can be shared.

The simplest example is a singly linked list:

const c = {head: "c", tail: null};
const bc = {head: "b", tail: c};
const abc = {head: "a", tail: bc};

// zbc and abc share bc
const zbc = {head: "z", tail: bc};

const join = ({head, tail}) =>
  tail == null
  ? head
  : head + join(tail)
;

join(abc) // "abc"
join(zbc) // "zbc"

If you use, as suggested, immutable data structures, they will come with structural sharing.

When you come from the imperative world, a natural interpretation of immutability is that you have to make a deep copy of things before you can safely touch them with mutations, but it's the other way round: because there are no mutations, you don't need to make a deep copy.
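To make the sharing concrete, here is the list example again with a check that the two lists really hold the same tail object, and a cheap "update" that reuses it:

```javascript
const c = { head: "c", tail: null };
const bc = { head: "b", tail: c };
const abc = { head: "a", tail: bc };
const zbc = { head: "z", tail: bc };

// Both lists share the very same tail object; no deep copy occurred.
abc.tail === zbc.tail; // true

// "Replacing" the head only allocates one new cell; the tail is reused.
const ybc = { head: "y", tail: abc.tail };
```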

