- What is it?
- Why do I need it?
- How can I use it?

I have a soft spot for abstract mathematics, so the growing hype around functional programming (FP) lets me scratch two itches with one gnarly stick. While not strictly necessary for writing functional code, an understanding of Category Theory (CT) will make some documentation easier to digest (staring at you, Haskell) and, more importantly, clarify the 'what' of what you're doing. Let's dive in with some high(ish)-level concepts and remove the 'wtf'-ness of the topic!

Category Theory is a branch of abstract mathematics that deals with the formalization of mathematical structures (groups, sets, rings, etc.) via objects and morphisms which, together, are grouped into something called a 'category'. The level of abstraction we're talking about here is high. So high, in fact, that some have argued that CT is the framework with which all other branches of mathematics can be built.$^1$ The level of generalization that CT embodies allows it to model nearly anything, not just other mathematical topics and not just in the context of FP. Whether we're talking about software, algebraic structures, or linguistics, CT proves useful. But wait a minute... that was still 'academic'. Wtf is a 'ring'? What does 'formalization' mean here? What's a 'functor'? Let's simplify it even more.

CT is the discussion surrounding object generalizations, and 'mappings', technically morphisms, between abstract objects. In this sense a morphism is roughly a way to get from one object (the source) to another object (the target).

An example of this would be converting an integer to a string, where the morphism (in this case a function) would be `someNumber.toString()`. The result is a mapping from the object of integers to the object of strings via the composable morphism `toString`. This ecosystem of objects and morphisms defines a category.
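As a quick sketch of that idea in JavaScript (the variable names here are made up for illustration), `toString` takes us from the numbers to the strings, and we can immediately follow it with another string-to-number morphism:

```javascript
// A number lives in the "numbers" object; toString is a morphism
// taking us to the "strings" object.
const n = 42;
const s = n.toString(); // "42"

// Morphisms compose: follow toString with a string -> number
// morphism (here, reading the string's length).
const len = (str) => str.length;
const digits = len(n.toString()); // 2
```

Nothing fancy is happening; the point is only that each step has a clear source and target, and the steps chain together.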

This is very general (and purposely so), but it allows us a great deal of freedom in our definitions of what it means to 'map to another object'. CT is one giant academic discussion on object generalizations and the transformations between them, but also on the transformations between categories themselves (this is where functors come in)! More formally, a category is a collection of three 'mathematical objects' which contain the objects that we generalize on, i.e. games, types, languages, etc. These three mathematical objects are:

- A collection of objects
- A collection of morphisms
- A binary operation (with notation: $ \circ $)

The objects piece is simple enough to understand. It's just a group (in the non-mathematical sense) of things. They could be strings, languages, whatever. I discussed morphisms above enough to get the point across, but when you see that terminology in the context of CT you should think of it as being synonymous with the term 'transformation'. The binary operation is a little more interesting and requires getting into some math so forgive me (see my last ELI5, it might help a little). This is where CT's applicability to FP really starts to come through.

Technically, in CT the 'binary operation' is called 'composition of morphisms' and it must satisfy both associativity and identity (being pedantic, the identity requirement is actually a synergy between the morphism and the object). Let's say we have four objects `a`, `b`, `c`, and `d` and three morphisms `f`, `g`, and `h` that map like so:
$$
f: a \rightarrow b
$$
$$
g: b \rightarrow c
$$
$$
h: c \rightarrow d
$$

Keep in mind that $f: a \rightarrow b$ says that $f$ takes us from $a$ (the source) to $b$ (the target). And just as a quick refresh of your algebra: a function composition ($f \circ g$) is also written $f(g(x))$, where we read $f \circ g$ from right to left, or in other words, run $g$ and feed its output into $f$.
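That right-to-left reading translates directly into code. Here's a minimal `compose` helper in JavaScript (the function names are mine, purely for illustration):

```javascript
// Right-to-left composition: (f ∘ g)(x) = f(g(x)).
const compose = (f, g) => (x) => f(g(x));

const double = (x) => x * 2;
const increment = (x) => x + 1;

// Runs increment first, then double: (3 + 1) * 2 = 8
const doubleAfterIncrement = compose(double, increment);
const result = doubleAfterIncrement(3); // 8
```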

With that refresher in mind, our composition operation ($ \circ $) must, by definition, satisfy *both* associativity and identity in order for our trio of mathematical objects to satisfy the definition of a category. Associativity of the composition is achieved if$^2$:

$$ h \circ (g \circ f) = (h \circ g) \circ f $$
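We can sanity-check this law with plain functions. This is a sketch, not a proof (a handful of inputs can't prove equality of functions), but it shows the grouping genuinely doesn't matter:

```javascript
const compose = (f, g) => (x) => f(g(x));

const f = (x) => x + 1; // a -> b
const g = (x) => x * 2; // b -> c
const h = (x) => x - 3; // c -> d

const left = compose(h, compose(g, f));  // h ∘ (g ∘ f)
const right = compose(compose(h, g), f); // (h ∘ g) ∘ f

const leftResult = left(10);   // ((10 + 1) * 2) - 3 = 19
const rightResult = right(10); // 19 -- same morphism either way
```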

Identity is a little easier to grasp. It's something we're introduced to very early on in school. In arithmetic this is simply multiplication by 1 when we're working with the natural numbers, i.e. `n >= 0`. In the CT world we need a morphism such that for every object we get that same object back:

$$ i_a: a \rightarrow a $$
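In code, the identity morphism is the function that hands its input straight back, and composing with it changes nothing:

```javascript
const compose = (f, g) => (x) => f(g(x));

// The identity morphism: every input maps back to itself.
const id = (x) => x;

const double = (x) => x * 2;

// Composing with id on either side leaves double unchanged:
const viaIdFirst = compose(double, id)(5); // 10
const viaIdLast = compose(id, double)(5);  // 10
const direct = double(5);                  // 10
```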

If we have these requirements met by both our object(s) and our morphism(s) then, congrats, we have a true category. If that's still a little too abstract let's cap off the definition of category with a more relatable diagram that directly relates to programming:

Every arrow (line) in this image is a morphism, and every blob is an object (Int, String, and Float). With a simple glance at it we can see that it seems to satisfy the definition of a category. We have our composable morphisms which would seem to obey associativity (I can get from Int to Float in various ways), and we have our identities (the circular arrows). Cool. We're done.

But, wait! There's one more thing...

What's a functor? A functor is a higher-order transformation from one category to another. Note the slight difference in language. I did not say a transformation from object to object (which exists *within* a category). No. I said a transformation *between* categories. You know, the things that contain objects. Yeah. Those things. I'll give you a moment to pick up the pieces of your exploded skull...

A functor is a fancy term for another fancy term: 'homomorphism'. A homomorphism is a structure-preserving mapping from one category to another, written $F: A \rightarrow B$, where $F$ is our functor and $A$ and $B$ are our categories. By definition the functor must satisfy these rules:

- every object in category $A$ must map to an object in category $B$
- every transformation in $A$ must map to a transformation in $B$ (that includes the identity transformations)

In short, the functor must preserve the structure of the source category. There's slightly more nuance to functors than that, but this overview serves as a good introduction.
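JavaScript's `Array.prototype.map` gives an informal taste of this structure preservation (I'm hand-waving here; arrays aren't literally a category-theoretic functor, but they behave like one): mapping the identity leaves the array alone, and mapping a composition equals composing the maps.

```javascript
const xs = [1, 2, 3];

const id = (x) => x;
const f = (x) => x + 1;
const g = (x) => x * 2;

// Identity is preserved: mapping id gives back the same values.
const mappedId = xs.map(id); // [1, 2, 3]

// Composition is preserved: mapping g ∘ f in one pass is the
// same as mapping f and then mapping g.
const composedOnce = xs.map((x) => g(f(x))); // [4, 6, 8]
const composedTwice = xs.map(f).map(g);      // [4, 6, 8]
```

These are exactly the 'certain laws' the Haskell documentation quoted below alludes to for `fmap`.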

It might not be immediately apparent what the hell any of this has to do with programming, or anything at all for that matter, but bear with me. I'll tie it together (at least with programming).

You could make the case that you don't need to know the ins and outs of CT to get your FP style flyin' high, and I wouldn't argue with it. But consider this:

> An abstract datatype `f a`, which has the ability for its value(s) to be mapped over, can become an instance of the Functor typeclass. That is to say, a new Functor, `f b`, can be made from `f a` by transforming all of its value(s), whilst leaving the structure of `f` itself unmodified. Declaring `f` an instance of Functor allows functions relating to mapping to be used on structures of type `f a` for all `a`. Functors are required to obey certain laws in regards to their mapping. Ensuring instances of Functor obey these laws means the behaviour of `fmap` remains predictable.

That's taken directly from the Haskell documentation on Functors. Did it make any sense to you at all? Maybe it did, and I'd argue that's because either a) you already understand these concepts, or b) you learned a little from the brief adventure into CT above. This sort of language is peppered throughout articles and discussions on FP, and it's nearly impossible to get to the meat of what anyone is trying to say without a tangential adventure into Wikipedia. That is, unless you already understand some high-level CT concepts.

I'm going to cop out (just a little) here and point to the preceding section for the 'usefulness' factor of CT. This particular topic is so abstract and so academic that concrete examples encompassing 'Category Theory' as a whole would be impossible to establish. Nevertheless, let's see an example in JavaScript that touches on some of the CT topics:

```javascript
const f = function (x) {
  return x * 2;
};
const a = [1, 2, 3];
const b = a.map(f);
console.log(b); // [2, 4, 6]
```

Here we have an array of numbers and we run a function on every item in that array, which results in a new array of numbers. Let's look at this from the perspective of CT and use its language:

We have a group of objects that, with a transformation (morphism) applied, gives us a different group of objects:

$$ f: a \rightarrow b $$

While the above isn't demonstrating 'how you can use' CT in a strict sense, it does demonstrate how knowledge of CT can make reasoning about code more straightforward. You might be able to get through the docs on FP or FP-centric languages (Scala, Haskell, etc.) without a fundamental understanding of CT, but that would be, in my eyes, miraculous. FP is so tightly coupled with mathematical vernacular that to understand the former is to flirt with the latter.

Functional Programming is not new, nor are languages, like Haskell, that strictly adhere to its dogma. Having said that, FP seems to be gaining adherents. Thanks to JavaScript libraries like Rambda, a different segment of the software world is being exposed to functional concepts and their benefits. To be sure, FP is not a panacea; it's just a tool for solving problems. But the way in which it approaches problems allows for easier testing and fewer side effects. Why? Because FP, which derives its approach from the abstract world of CT, is all about mapping one group into another with composable transformations. And while those are tough requirements to implement in code, it means each transform is predictable and thus cleaner.

Want to apply a function to every person's age in your array of site visitors to establish if they're over 21, returning a new array of booleans? You're actually talking about a transformation from a group of ints to a group of booleans.
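That scenario is one `map` away. Here's a sketch, with the visitor data entirely made up for illustration:

```javascript
// Hypothetical visitor data -- names and ages are invented.
const visitors = [
  { name: "Ada", age: 19 },
  { name: "Grace", age: 34 },
  { name: "Alan", age: 21 },
];

// A morphism from the "visitors" objects to the "booleans" objects.
const isOver21 = (visitor) => visitor.age > 21;

const over21 = visitors.map(isOver21); // [false, true, false]
```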

Category Theory: it was in front of you all along, you just didn't know it was there.

_{1) Assuming axiomatization of all mathematics is possible. Did that pique your interest? If so, look up some work on Philosophy of Mathematics (Platonism vs. Formalism or Gödel vs. Hilbert)}

_{2) See this page if you're interested in the proof}