Monolateral categories

I tried something. It’s like categories, but instead of having objects and morphisms, you have objects and… elements of objects. I call these monolateral categories.


I learned category theory because I wanted to understand more about linear logic. Linear logic is a system where you have formulas and proofs. A proof goes from one formula to another. If a proof goes from A to B, we say that it is a proof of A \vdash B. It is very similar to category theory, but it is pure syntax: you can write formulas, and you can write proofs from one formula to another using sequent calculus. I will not talk about it too precisely because I don’t know how to draw sequent-calculus proofs here. But anyway, that’s mainly for the motivation.

In the sequent calculus of linear logic, there are inference rules which allow one to build the proofs. For instance, one of these inference rules says that for every formula A, you have a proof of A \vdash A, which is named the axiom. Another one says that if you have a proof of A \vdash B and a proof of B \vdash C, then you obtain a proof of A \vdash C. It is named the cut rule.

These formulas and proofs form a language, and we want to interpret this language in a mathematical universe. We will interpret them in a category \mathcal{C}. We need to choose an object [A] \in \mathcal{C} for every formula A of linear logic. To every proof \pi of A \vdash B, we associate a morphism [\pi] from [A] to [B]. We interpret the axiom on A by the identity of [A]. If we have a proof \pi_1 of A \vdash B and a proof \pi_2 of B \vdash C, then we interpret the proof obtained by applying the cut rule to \pi_1 and \pi_2 by the composite morphism [\pi_1];[\pi_2]:[A] \rightarrow [C].
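To make the pattern concrete, here is a minimal Python sketch (the names are mine, and I interpret everything in the category of sets, so objects become sets and proofs become functions):

```python
# Hypothetical sketch: interpreting proofs in the category of sets,
# where a proof of A |- B becomes a function [A] -> [B].

def interpret_axiom():
    # The axiom on A is interpreted as the identity morphism of [A].
    return lambda x: x

def interpret_cut(f, g):
    # The cut of pi1 : A |- B and pi2 : B |- C is interpreted as the
    # composite [pi1];[pi2] : [A] -> [C], in diagrammatic order.
    return lambda x: g(f(x))

# A tiny check that cut composes in diagrammatic order:
double = lambda n: 2 * n        # stands in for [pi1] : [A] -> [B]
succ = lambda n: n + 1          # stands in for [pi2] : [B] -> [C]
composite = interpret_cut(double, succ)
```

This is only the shape of the interpretation; a real semantics would of course interpret the other inference rules too.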

There are also inference rules which work like products, others like coproducts, others like the tensor product of a monoidal category, etc. Thus we need a category with a lot of structure if we want to interpret the whole of linear logic.

The link between linear logic and category theory is clear. But we can present linear logic in another way, and then the link is less clear. What I was talking about before is named the bilateral sequent calculus. But there is also the monolateral sequent calculus. In the monolateral sequent calculus, we still have the same formulas, but we no longer prove something of the type A \vdash B; we only prove things of the type \vdash A, with only one formula on the right of \vdash.

If we interpret the bilateral sequent calculus of linear logic in categories (by the way, this kind of interpretation existed before linear logic: you can interpret intuitionistic logic in cartesian closed categories), in what should we interpret the monolateral sequent calculus? People usually interpret it in categories as well, but I think it’s weird. I want to propose the idea of monolateral categories to interpret these proofs, but also just for themselves. Now, you can forget about linear logic. I’m going to define monolateral categories. Like categories, you still have objects, but the morphisms are replaced by things which, instead of going from an object to another, just go… into an object.

So we will have objects and stuff which goes into these objects, which we will call elements. Usually we say that category theory is about morphisms while set theory is about elements. Are we reinventing set theory? It’s a bit different: definitely more set-theoretic than category theory, but still very categorish. Category theory is about typed morphisms: morphisms go from an object, the first type, to another object, the second type. Monolateral category theory will be something like a typed set theory. You have types, the objects, and elements of these types. And an object can be more than a set: it will typically be a set with some structure, exactly like in (bilateral) category theory.

Monolateral categories


A monolateral category \mathcal{C} is the data of:

  • a set Ob(\mathcal{C}) (the objects of \mathcal{C})
  • for every A \in Ob(\mathcal{C}), a set |A| (the elements of A)

Later on, we will write A \in \mathcal{C}, instead of A \in Ob(\mathcal{C}), as is usual in (bilateral) category theory.


\mathbf{Set} is the monolateral category whose objects are sets, and such that if X \in \mathbf{Set}, then |X| = X.
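In code, the definition is just a pair of data. Here is a minimal Python sketch (the encoding is my own), with a finite fragment of \mathbf{Set} as the example:

```python
class MonolateralCategory:
    """A monolateral category: a set of objects plus, for each object A,
    its set of elements |A|."""
    def __init__(self, objects, elements):
        self.objects = objects      # Ob(C)
        self.elements = elements    # the map A |-> |A|

# A finite fragment of the monolateral category Set: objects are
# (frozen)sets, and |X| = X.
universe = {frozenset(), frozenset({1}), frozenset({1, 2})}
FinSet = MonolateralCategory(universe, lambda X: X)
```

Frozensets are used only so the objects can themselves be collected into a Python set.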


\mathbf{Vec_k} is the monolateral category whose objects are k-vector spaces, and such that if E \in \mathbf{Vec_k}, then |E| is the underlying set of E.

Of course a vector space does not consist only of its underlying set. The objects of \mathbf{Vec_k} are thus of the form E=(|E|,+,0,\cdot) such that the usual axioms are verified.

We no longer have morphisms from the start. But it is just as in set theory: there, functions are defined from the elements; they are not a primitive notion.


Every concrete category, i.e. a category \mathcal{C} together with a faithful functor U:\mathcal{C} \rightarrow \mathbf{Set}, is a monolateral category, by defining |A| := U(A) for every A \in \mathcal{C}.


A functor F:\mathcal{C} \rightarrow \mathcal{D} between two monolateral categories is the data of:

  • a function F:Ob(\mathcal{C}) \rightarrow Ob(\mathcal{D})
  • for every A \in Ob(\mathcal{C}), a function F_A:|A| \rightarrow |F(A)|


For every monolateral category \mathcal{C}, there is a functor U:\mathcal{C} \rightarrow \mathbf{Set} obtained by defining U(A)=|A| and taking U_A:|A| \rightarrow |A| to be the identity function.
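Continuing the Python sketch (again, the encoding is my own): a functor carries a map on objects plus one map per object on elements, and the functor U above is the identity on elements:

```python
class MonolateralFunctor:
    """A functor C -> D between monolateral categories: a map on objects
    and, for every object A, a map F_A : |A| -> |F(A)|."""
    def __init__(self, on_objects, on_elements):
        self.on_objects = on_objects      # F : Ob(C) -> Ob(D)
        self.on_elements = on_elements    # F_A, encoded as (A, x) |-> F_A(x)

def forgetful(elements_of):
    # The functor U : C -> Set of the text: U(A) = |A|, and U_A is the
    # identity on |A|.  `elements_of` stands for the |.| map of C.
    return MonolateralFunctor(
        on_objects=elements_of,
        on_elements=lambda A, x: x,
    )
```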


A natural transformation \alpha:F \Rightarrow G between two parallel functors F,G:\mathcal{C} \rightarrow \mathcal{D} is the data, for every object A \in \mathcal{C}, of a function \alpha_A:|F(A)| \rightarrow |G(A)| such that \alpha_A \circ F_A = G_A, i.e. the triangle formed by F_A, \alpha_A and G_A commutes.


In a monolateral category \mathcal{C}, a product of A and B is given by:

  • an object A \times B
  • a function p_1:|A \times B| \rightarrow |A|
  • a function p_2:|A \times B| \rightarrow |B|

such that for every x \in |A| and y \in |B|, there exists a unique (x,y) \in |A \times B| verifying p_1(x,y)=x and p_2(x,y)=y.
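For finite sets this universal property can be checked by brute force. A Python sketch (the helper names are mine), verifying the cartesian product in \mathbf{Set}:

```python
from itertools import product as cartesian

def is_product(A, B, P, p1, p2):
    # P plays the role of |A x B|; p1, p2 are the two functions of the
    # definition.  We check: for every x in |A| and y in |B| there is
    # exactly one z in |A x B| with p1(z) = x and p2(z) = y.
    for x in A:
        for y in B:
            witnesses = [z for z in P if p1(z) == x and p2(z) == y]
            if len(witnesses) != 1:
                return False
    return True

A, B = {1, 2}, {"a", "b"}
P = set(cartesian(A, B))
ok = is_product(A, B, P, lambda z: z[0], lambda z: z[1])
```

The empty set fails the check (no witnesses at all), which is one way to see that existence and uniqueness are both needed.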


In \mathbf{Set}, a product of A and B is given by the usual cartesian product of A and B. We define p_1 and p_2 by p_1(x,y)=x and p_2(x,y)=y.

In \mathbf{Vec_k}, a product is given by the direct product.

Note that something odd about the definition of product is that the functions p_1,p_2 don’t need to preserve any structure.

The monolateral category of monolateral categories

As for bilateral categories, there is a monolateral category of all monolateral categories. It is defined by stating that:

  • Ob(\mathbf{Cat}) is the set of all monolateral categories
  • If \mathcal{C} \in \mathbf{Cat}, then |\mathcal{C}|=Ob(\mathcal{C})

This category is equipped with products.

If \mathcal{C},\mathcal{D} are two monolateral categories, then \mathcal{C} \times \mathcal{D} is defined by:

  • Ob(\mathcal{C} \times \mathcal{D})=Ob(\mathcal{C}) \times Ob(\mathcal{D})
  • If (A,B) \in Ob(\mathcal{C} \times \mathcal{D}), then |(A,B)| = |A| \times |B|

The projections p_1:|\mathcal{C} \times \mathcal{D}| \rightarrow |\mathcal{C}| and p_2:|\mathcal{C} \times \mathcal{D}| \rightarrow |\mathcal{D}| are given by p_1(A,B)=A and p_2(A,B)=B.

Monoidal monolateral categories

A monoidal monolateral category is a monolateral category \mathcal{C}, together with:

  • a functor - \otimes -:\mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C}
  • an object I \in \mathcal{C}

Note that from this data we can define a functor (- \otimes -) \otimes - : (\mathcal{C} \times \mathcal{C}) \times \mathcal{C} \rightarrow \mathcal{C}. On objects it is given by ((A,B),C) \mapsto (A \otimes B) \otimes C. On elements, the function from (|A| \times |B|) \times |C| to |(A \otimes B) \otimes C| is obtained by taking the function |A| \times |B| \rightarrow |A \otimes B|, forming its cartesian product with the identity of |C| to get a function (|A| \times |B|) \times |C| \rightarrow |A \otimes B| \times |C|, and then postcomposing with the function |A \otimes B| \times |C| \rightarrow |(A \otimes B) \otimes C|. In the same way, we can define a functor - \otimes (- \otimes -): \mathcal{C} \times (\mathcal{C} \times \mathcal{C}) \rightarrow \mathcal{C}.
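This construction of the element-level map can be written out directly. A Python sketch, assuming we are handed the tensor’s action on objects (`tens_ob`) and its element-level functions (`tens_el`); both names are my own:

```python
from itertools import product as cartesian

def triple_tensor(tens_ob, tens_el, A, B, C):
    # Builds (A (x) B) (x) C together with its element-level map
    # (|A| x |B|) x |C| -> |(A (x) B) (x) C|, exactly as in the text:
    # apply |A| x |B| -> |A (x) B| on the left component (cartesian
    # product with the identity of |C|), then the binary map again.
    AB = tens_ob(A, B)
    obj = tens_ob(AB, C)
    def on_elements(pair):
        (a, b), c = pair
        ab = tens_el(A, B, a, b)          # |A| x |B| -> |A (x) B|
        return tens_el(AB, C, ab, c)      # |A(x)B| x |C| -> |(A(x)B)(x)C|
    return obj, on_elements

# Example: the cartesian "tensor" on finite sets, where A (x) B = A x B
# and the element-level map is simply (a, b) |-> (a, b).
tens_ob = lambda A, B: set(cartesian(A, B))
tens_el = lambda A, B, a, b: (a, b)
obj, f = triple_tensor(tens_ob, tens_el, {1}, {"x"}, {True})
```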

We now continue the definition of a monoidal monolateral category. We also require:

  • a natural transformation \alpha:(- \otimes -) \otimes - \Rightarrow - \otimes (- \otimes -)

and more stuff… I will continue another day. Note that an interesting point is that the functor - \otimes -: \mathcal{C} \times \mathcal{C} \rightarrow \mathcal{C} defines in particular a function |A| \times |B| \rightarrow |A \otimes B| for all objects A,B \in \mathcal{C}. In the (monolateral) category of vector spaces, it is the function which sends (x,y) to x \otimes y.
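In coordinates, this function is just the outer product. A tiny Python sketch (vectors encoded as lists of scalars, elements of E \otimes F as nested lists, i.e. matrices):

```python
def tensor_of_elements(x, y):
    # The function |E| x |F| -> |E (x) F| of the text: (x, y) |-> x (x) y,
    # computed as the outer product in chosen bases.
    return [[xi * yj for yj in y] for xi in x]
```

Note that this map is bilinear but not a bijection: most elements of |E \otimes F| are sums of such outer products, not single ones.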

We could also think about enriched monolateral categories, monolateral monads, monolateral monoids, etc., and something important: duality. Proofs of A \vdash B in the bilateral sequent calculus of linear logic are replaced by proofs of \vdash A^* \otimes B in the monolateral sequent calculus. And moreover, given a proof of \vdash A^* \otimes B and a proof of \vdash B^* \otimes C, the cut rule provides a proof of \vdash A^* \otimes C. Therefore, the composition of morphisms could come back into the party by defining compact closed or star-autonomous monolateral categories… When you work in continuum mechanics or some fields of physics, instead of linear maps you deal with “tensors”. A tensor of type (n,p) is an element of a vector space (A^*)^{\otimes n} \otimes A^{\otimes p}, and you can represent all kinds of information with tensors. In fact you can store much more information in a tensor than in a linear map: linear maps are just the tensors of type (1,1). Same thing here: we no longer have maps, but with duality we could do a lot of things.

To be continued…


My feeling is that a lot of “monolateral categories” could be encompassed by the lax slice 2-category \mathsf{Cat}/\mathsf{Set}. An object is a category equipped with a copresheaf on it, and then a morphism is a functor between the underlying categories together with a natural transformation comparing the two copresheaves.

The slice category \mathsf{Cat}/\mathsf{Set} shows up in Lawvere’s thesis on algebraic theories, where he thinks about the category of models for a theory as a category with a forgetful functor to \mathsf{Set}, which is a natural thing to think about because a model has elements!

And it’s dual to his theories, which are in \mathsf{FinSet}/\mathsf{Cat} (or rather, the subcategory of this which preserves coproducts and is bijective on objects). I encourage you to read the thesis; it has a lot of gems which don’t show up in the typical presentation of algebraic theories.

Also, welcome to localcharts!

It feels to me like there isn’t enough in the definition. Certainly you can define linear maps of vector spaces in terms of how they operate on underlying sets, but if that condition isn’t part of the monolateral category, it seems like you’ve thrown away too much. Specifically, the monolateral category of vector spaces seems to be basically the same as the discrete category of vector spaces, with no morphisms specified, together with the restriction of the functor to \mathbf{Set}. But presumably we’d like the monolateral category to be able to recover the ordinary category, no?

I suspect you can make this work a bit better if you restrict to closed categories, since then you can in principle recover the homs in the original category from points of the hom-object.

Anyway, it’s an interesting idea! I’ve long wanted to figure out a good story for a somewhat similar sort of branching of category theory, namely where the domain and codomain don’t need to be ordered (mainly for presenting groupoids.) It’s fun to think about this kind of thing.

Thank you, I didn’t know about \mathbf{Cat}/\mathbf{Set}.

I was trying to think of an example which doesn’t come from a category in the obvious way. Consider topological spaces. Let’s say that the objects of my “monolateral category” are the topological spaces. Then I can choose |X| equal to the underlying set of the topological space X. But I can also choose |X| := \mathcal{O}_X, the set of all open sets.

Of course, the definition of monolateral category is so simple that finding examples doesn’t have much value on its own. But we can check whether we also have more of the stuff I defined after that. I hoped that we would have products.

I define X \times Y as usual, with |X \times Y| = \mathcal{O}_{X \times Y}. We then want a function p_1:\mathcal{O}_{X \times Y} \rightarrow \mathcal{O}_X and a function p_2:\mathcal{O}_{X \times Y} \rightarrow \mathcal{O}_Y. These functions are obtained by noting that the projections X \times Y \rightarrow X and X \times Y \rightarrow Y are open maps. Thus for an open set U of X \times Y, the projection of U onto X is an open set of X, and that’s how I define my p_1 and p_2.

But the universal property is not verified. Given U \in \mathcal{O}_X and V \in \mathcal{O}_Y, most of the time, we have several open sets W \in \mathcal{O}_{X \times Y} such that p_{1}(W)=U and p_{2}(W)=V. Example in \mathbb{R}^2: an open square and the biggest open disk inside it. So it doesn’t work…

Yes, I hope we can recover composition by defining “compact closed monolateral categories” or “star-autonomous monolateral categories”. In the monolateral sequent calculus of linear logic, that’s how you compose. If you have a map f:E \rightarrow F between two finite-dimensional vector spaces, you can also see it as an element of E^* \otimes F. You have a natural isomorphism \mathbf{FVec}[E,F] \cong E^* \otimes F. And it also preserves composition somehow. You can compose in the second view by using the map F \otimes F^* \rightarrow k given by the compact closed structure. If you have u \in E^* \otimes F and v \in F^* \otimes G, you have u \otimes v \in E^* \otimes F \otimes F^* \otimes G; then you get something in E^* \otimes k \otimes G, and then in E^* \otimes G, which is what corresponds to u;v.
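In coordinates, this contraction is just matrix multiplication: fixing bases, an element of E^* \otimes F is a matrix, and contracting the middle F \otimes F^* pair of u \otimes v recovers the composite. A Python sketch with plain nested lists (the function name is mine):

```python
def contract(u, v):
    # u represents an element of E* (x) F  (a dim E x dim F matrix),
    # v represents an element of F* (x) G  (a dim F x dim G matrix).
    # Contracting the middle F (x) F* pair of u (x) v is exactly the
    # matrix product, i.e. the composite u;v.
    return [[sum(u[i][k] * v[k][j] for k in range(len(v)))
             for j in range(len(v[0]))]
            for i in range(len(u))]
```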

You can also do the same to compose relations, because \mathbf{Rel} is compact closed, although I have less intuition for that case.

As for closed categories, good idea! But are you thinking of monoidal closed categories or the general notion of closed category? I don’t know much about the latter. I think you can do something with monoidal closed categories, because they are the semantics of Multiplicative Intuitionistic Linear Logic.


Monoidal closed categories are probably fine. Plain closed categories are a bit weird, although sometimes more elegant.