Just to set the record straight, I said I didn’t know an example of a “dependent copara” or “dependently graded monad”. What you have here is what’s known as a graded monad, which is just a lax functor into the monoidal category of endofunctors (what’s the problem?). In particular, it is essentially the lax monoidal functor \mathcal{P}(E) \to \mathsf{Fun}(\mathsf{Type}, \mathsf{Type}) sending S to X \mapsto S + X with laxator S + T + X \to (S \cup T) + X.
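To make this concrete, here is a term-level Python sketch of that graded monad (the names ok, err, bind, and compose_grades are my own for illustration; Python cannot track the grade S in its types, so the grade only appears in comments):

```python
# A computation graded by a set S of exceptions is either ("ok", x)
# or ("err", e) with e drawn from S.

def ok(x):
    return ("ok", x)

def err(e):
    return ("err", e)

def bind(m, f):
    # Kleisli extension: for m graded by S and f graded by T, the result
    # is graded by S | T -- this is exactly the laxator
    # S + (T + X) -> (S u T) + X flattening the two layers of failure.
    tag, v = m
    return f(v) if tag == "ok" else m

# The grades themselves compose by union, the monoidal structure on P(E):
def compose_grades(s, t):
    return s | t

# Example: a division that can only throw "div0".
def safe_div(x):
    return err("div0") if x == 0 else ok(10 // x)
```

Binding a successful computation into safe_div either succeeds or throws "div0", while an already-failed computation passes through untouched, so the set of possible errors only ever grows by union.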
You are totally right that the Kleisli construction for graded comonads coincides with the copara construction for an appropriate lax action (in this case, of the power set of E on types).
I guess since the cat is out of the bag, I should say what a dependently graded monad is. This is an idea that @mattecapu and I had as part of our abstract approach to the Para construction (note that since we talk about Para, things are dualized and we end up with comonads). We define a “fibred action” to be a pseudo-monad in a tricategory of spans with left leg a cartesian fibration and where the span morphisms are colax on the right leg; this turns out to be essentially the same thing as a “dependently graded comonad”. The dual notion — where the right leg is a cocartesian fibration and the span morphisms are lax on the left leg — gives you a “dependently graded monad”. I can describe it explicitly in Agda-like syntax:
record DepGrMonad : Type where
  E : Functor(Type, Type)
  i : {X : Type} -> E X
  M : (X : Type) -> E X -> Type
  _m_ : {X : Type} (e : E X) -> E (M X e) -> E X
  return : {X : Type} -> X -> M X i
  bind : {X : Type} {e : E X} {Y : Type} {f : E Y}
    -> M X e -> (g : X -> M Y f)
    -> M Y (f m (map E g e))
In this case, E is a functor which we could think of as assigning to a type X the type E X of exceptions which computations that yield a value of type X could throw. Given an exception e : E X, we can form the type M X e of computations of values of X that might throw the exception e. We have the "no-exception" i : E X, and a way to combine an exception e : E X with an exception f that could be thrown after e, and therefore on the type M X e (that is, f : E (M X e)); this gives another exception on X, namely e m f : E X. We then have the usual monadic return (or pure) and bind: we can return the value x : X with no error, giving return x : M X i, and we can bind the possible error m : M X e to the variable x in a lambda expression \ x -> t : X -> M Y f, which might throw an error of kind f, and get the value bind m \ x -> t : M Y (f m (map E (\ x -> t) e)), which computes a value of Y unless t threw an error of kind f or m contained an error of kind e; in the latter case we must push forward e using the functoriality of E to ensure that it is a kind of error that computations of type Y can throw.
I don’t know a concrete example of that. One “problem” is that the dependency is on the codomain; naively, we would expect something to go wrong in a way that depends on the input rather than the output.
What I do know examples of are dependently graded comonads, which for the record are defined as:
record DepGrComonad : Type where
  E : ContravariantFunctor(Type, Type)
  i : {X : Type} -> E X
  C : (X : Type) -> E X -> Type
  _m_ : {X : Type} (e : E X) -> E (C X e) -> E X
  extract : {X : Type} -> C X i -> X
  extend : {X : Type} {e : E X} {Y : Type} {f : E Y}
    -> (g : C X e -> Y)
    -> C X (e m (map E g f)) -> C Y f
There are a lot of examples of these, including things which you might imagine should be more like monadic error handling (where the dependency is on the domain, i.e. the input). Here's one, the dependent Maybe comonad, in faux syntax where, lacking easy unicode, I'll just write Pair for Sigma types (and take Unit to be the unit type, with element unit : Unit, and Empty to be the empty type):
If : Bool -> Type
If true = Unit
If false = Empty
_and_ : (x : Bool) -> (If x -> Bool) -> Bool
true and y = y unit
false and y = false
GrMaybe : DepGrComonad
GrMaybe.E X := X -> Bool
GrMaybe.i := const true
GrMaybe.C X c := Pair X \x -> If (c x)
c GrMaybe.m d := \x -> (c x) and \ cx-is-true -> d (x, cx-is-true)
GrMaybe.extract (x, _) := x
GrMaybe.extend g (x, p) := (g (x , first p), second p)
  where
    first : {x : Bool} {y : If x -> Bool} -> If (x and y) -> If x
    first {true} _ = unit
    first {false} xy = xy
    second : {x : Bool} {y : If x -> Bool} (p : If (x and y)) -> If (y (first p))
    second {true} p = p
    second {false} p = Empty-elim p
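If it helps, here is a hedged term-level Python sketch of the same idea (compose_partial and the example names are my own): a Kleisli map C X c -> Y becomes a pair of a domain predicate c and a function g that we only promise to call where c holds, and composition builds the predicate \x -> c x and d (g x) with the same short-circuiting as _and_ above:

```python
# Term-level sketch: a Kleisli map "C X c -> Y" is a pair (c, g) of a
# domain predicate c : X -> bool and a function g only meaningful on
# inputs where c holds.

def compose_partial(cg, dh):
    # Compose (c, g) : X -+> Y with (d, h) : Y -+> Z. The composite
    # predicate is \x -> c(x) and d(g(x)); Python's `and` short-circuits,
    # so g(x) is only evaluated when c(x) holds -- just as _and_ only
    # consults its second argument given a proof of the first.
    c, g = cg
    d, h = dh
    return (lambda x: c(x) and d(g(x)), lambda x: h(g(x)))

# Example: reciprocal (defined away from 0), then doubling of positives.
recip = (lambda x: x != 0, lambda x: 1.0 / x)
pos = (lambda y: y > 0, lambda y: y * 2)
guarded = compose_partial(recip, pos)
```

The composite is defined exactly on nonzero inputs whose reciprocal is positive, and the predicate never evaluates 1.0 / 0.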
Ok, that was a lot, and I hope I didn't make any mistakes there. A Kleisli map g : C X c -> Y is a partial function; it can be curried into curry g : (x : X) -> If (c x) -> Y, so we can see it as a function that takes an x : X and yields an element of Y if the condition c x : Bool is satisfied; if it isn't, then If (c x) is empty and we can use empty-elimination to get ourselves out of this jam. This is a "dependent" version of the Maybe monad (note that Maybe Y is equivalent to Pair Bool \ b -> If b -> Y, assuming univalence); but it's a comonad and not a monad! What? The Kleisli category of this dependently graded comonad is equivalent to the Kleisli category of the Maybe monad, but here the type signatures help you express the precise condition that you need to be true. This is a well known pattern in dependent type theory (presumably; I don't know where to cite it from, but obviously the Cubical people knew about it, since Cubical sort of runs off this pattern).
Anyway, there are a lot of these. You can finagle any Sigma-closed subuniverse into a dependently graded comonad as above; it also curries nicely into something more monad-looking. If it is a subuniverse of propositions (i.e. a dominance), then the Kleisli category for this dependently graded comonad is the category of partial maps with open domain.
As for another example, here's how you can do a "cost analysis" dependently graded comonad. Suppose we have a monoid (C, +, 0) of costs. Define the following (with lazier faux syntax):
E X = X -> C
i = const 0
C X c = X
c m d = \x -> c x + d x
extract x = x
extend g x = g x
This only adds information to the type signature, and that info could be happily erased; namely, g : C X c -> Y is just a function g : X -> Y, but we stipulate that it costs c x on input x : X. When you do Kleisli composition, you just add the costs of each component function. By definition, the Kleisli composition of g : C X c -> Y and h : C Y d -> Z is h . (extend g) : C X (\x -> c x + d (g x)) -> Z which, as you can see, just aggregates the costs.
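That aggregation can be checked with a quick term-level Python sketch (compose_costed, identity, and the example maps are my own names; plain ints stand in for the cost monoid (C, +, 0)):

```python
# A Kleisli map "C X c -> Y" is an ordinary function g : X -> Y carried
# together with its cost annotation c : X -> int.

def compose_costed(cg, dh):
    # h . (extend g) costs \x -> c(x) + d(g(x)), aggregating both costs.
    c, g = cg
    d, h = dh
    return (lambda x: c(x) + d(g(x)), lambda x: h(g(x)))

# extract is the identity function with the trivial grade const 0.
identity = (lambda x: 0, lambda x: x)

# Example: incrementing costs its input; tripling costs twice its input.
inc = (lambda x: x, lambda x: x + 1)
triple = (lambda y: 2 * y, lambda y: y * 3)
both = compose_costed(inc, triple)
```

On input 4, the composite costs 4 + 2 * 5 = 14 and computes (4 + 1) * 3 = 15, and composing with identity changes neither the cost nor the value, as the comonad laws demand.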