Out now, a preprint with Christian Sattler: arxiv.org/abs/2211.14801

This is advancing our knowledge about the homotopical character of cubical type theories by one data point. For each extant cubical type theory, there's a model in presheaves over a cube category, and each goes hand in hand with a Quillen model structure. We found that the model structure for the type theory whose cubes have {faces + degeneracies + permutations + diagonals + one kind of connection} presents ∞-groupoids.

Doing it any other way is just asking to get owned by pullbacks

So uhhhh, I think that the definition of internal category as a monad in Span might be the best definition for proof assistants 😅

Always a bittersweet feeling to find ideas you thought were novel in a paper from the 90s

I really, really want to like 2-categorical stuff because I _know_ it's important, but on the other hand doing 2-category theory is just... not fun lol

Various statements from @pigworker have helped me understand what most monad tutorials get "wrong": they never talk about the difference between "representations of (the eventual value of) a computation" and data structures.

The 'Maybe Monad' is the wrong way to think about it. It's the "Pointing Monad". Just because you can (ab)use Maybe to represent the effect of a potentially failing computation doesn't mean that it's a good idea to overload things quite that much.

@totbwf Do you have something like Uemura's representable map categories in mind? (arxiv.org/abs/1904.04097, eprints.illc.uva.nl/id/documen)

I’d be surprised if such a thing existed and it would almost certainly be an awful mess, but it’s fun to think about nonetheless!

So vague question: we can regard a model of a dependent type theory as being some structure we place on a category (for instance, a natural model). Has anyone studied some sort of classifying widget for models of type theory?

Finally re-reading Bénabou’s “Fibered Categories and the Foundations of Naive Category Theory” and it’s so slick

If you want to read more, arxiv.org/pdf/1703.08693.pdf goes into some more detail on the extension system view of KZ-doctrines, along with some other related machinery for doing formal category theory.

Now, how is this the same as a lax-idempotent 2-monad? Well, we've done the Haskell trick, where we encode the join of a monad via bind! Our extensions _* : Hom(A, P B) -> Hom(P A, P B) have literally the same signature as >>=, よᶜ is pure, and you can read off the Haskell-style monad laws from the laws I presented!

More generally, this idea of using bind instead of join has a name: It's called an **Extension System**. It's a good trick to know for unpacking definitions that seem opaque!
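For the Haskell-inclined, here's a minimal sketch of the extension-system presentation (the class and names `ExtensionSystem`, `point`, `extend` are mine for illustration, not a standard library API), with `join` and `>>=` recovered from the extension operator:

```haskell
-- A minimal sketch of a monad presented by its extension operator
-- ("bind") instead of join. Names here are invented for the example,
-- not a standard library API.
class Functor m => ExtensionSystem m where
  point  :: a -> m a                    -- plays the role of よᶜ / pure
  extend :: (a -> m b) -> (m a -> m b)  -- plays the role of _*

-- join is the extension of the identity...
joinE :: ExtensionSystem m => m (m a) -> m a
joinE = extend id

-- ...and (>>=) is literally extend with its arguments flipped.
bindE :: ExtensionSystem m => m a -> (a -> m b) -> m b
bindE ma f = extend f ma

-- A familiar instance: Maybe.
instance ExtensionSystem Maybe where
  point               = Just
  extend _ Nothing    = Nothing
  extend f (Just a)   = f a
```

Reading the laws off in this style: `extend point = id`, `extend f . point = f`, and `extend (extend f . g) = extend f . extend g` — exactly the Haskell-style monad laws phrased with bind.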

KZ-doctrines encode the relevant Yoneda-adjacent structure on our 2-category K by axiomatizing the idea that the objects (read: categories) in K can be freely co-completed.

The normal definition of a KZ-doctrine is as a lax-idempotent 2-monad, which is not the most intuitive definition (and is also a scary name!). Instead, we can define it like so:

A KZ-doctrine consists of:
* A 2-functor P : K → K. The idea is that P C for C : K should act like the category of presheaves on C.
* A family of 1-cells (read: functors) よᶜ : C → P C for each C : K. These are meant to act like the Yoneda embedding of C.
* For any f : C → P D, a left Kan extension f* : P C → P D of f along よᶜ. This is what encodes free co-completeness! Intuitively, if we have a functor into a presheaf category, we can extend it to a functor that's *out* of a presheaf category as well, as we can handle the extra colimits in P C by using the colimits in P D.
* The extension of よᶜ along itself is id. This ensures that the free co-completion of a free co-completion doesn't add anything extra.
* The extension of f* . g is f* . g*. No real intuition here; just ensuring that extensions are nicely closed under composition.
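The laws can be spot-checked in a familiar setting: the list monad, where `pure x = [x]` stands in for よᶜ and `concatMap` for `_*` (this analogy is mine — lists are just a convenient ordinary monad to make the laws executable):

```haskell
-- Spot-checking the extension laws with lists, where pure x = [x]
-- stands in for よᶜ and concatMap for _*. (The analogy is just for
-- illustration; lists are a convenient ordinary monad.)
ext :: (a -> [b]) -> [a] -> [b]
ext = concatMap

-- Law: the extension of よ along itself is the identity.
extendYonedaIsId :: [Int] -> Bool
extendYonedaIsId xs = ext pure xs == xs

-- Law: extending f* . g gives f* . g*, so extensions compose.
extensionsCompose :: (Int -> [Int]) -> (Int -> [Int]) -> [Int] -> Bool
extensionsCompose f g xs = ext (ext f . g) xs == (ext f . ext g) xs
```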

First, why do we even care about these weird KZ-doctrine things?

We can all (hopefully!) agree that category theory is useful. However, we can all certainly agree that category theory is *hard*. This is especially true when you are a type theorist and you care about problems that are fundamentally 2-categorical.

Now, we can define a lot of the normal notions from 1-category theory *in* a 2-category. General 2-category theory handles this fine, but we can't totally account for all categorical structure... most importantly, we are missing the Yoneda lemma! This means that we can't generalize + re-use any sort of proof that involves its use, which is a *lot* of results.

This is where formal category theory comes in: we want to try and describe 2-categories that have things inside them that encode some sort of yoneda-ish thing.

KZ-doctrines are *one* way of encoding this extra bit of structure.
There are other ways of doing this (Yoneda Structures, Proarrow Equipments, Virtual Equipments, Framed Bicategories, etc.), but I'll talk about those another time :P

Wow the extension system version of KZ doctrines makes 10000x more sense

If someone makes a proof assistant for formal category theory they better rename the REPL to the RAPL

The study of synthetic category theory boils down to seeing how messed up you can make your contexts

If composition requires taking pullbacks then that’s an immediate red flag for me 🚩

Refinement is the process of turning Chk goals into Syn goals.
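A toy illustration of that split (the tiny language and all names here are invented for the example): `syn` terms tell you their type, `chk` terms are checked against one, and an annotation `Ann` is the switch that lets a checkable term be synthesized:

```haskell
-- A toy bidirectional checker illustrating the Chk/Syn split.
-- The language and names are invented for this example.
data Ty = TBool | TArr Ty Ty deriving (Eq, Show)

data Tm
  = Var Int    -- de Bruijn index
  | Lam Tm     -- only checkable: needs an arrow type to check against
  | App Tm Tm
  | TT | FF
  | Ann Tm Ty  -- annotation: turns a Chk term into a Syn one
  deriving Show

type Ctx = [Ty]

-- Syn: the term tells us its type.
syn :: Ctx -> Tm -> Maybe Ty
syn ctx (Var i)
  | i >= 0 && i < length ctx = Just (ctx !! i)
  | otherwise                = Nothing
syn ctx (App f a) = do
  TArr dom cod <- syn ctx f   -- pattern-match failure gives Nothing
  chk ctx a dom
  pure cod
syn ctx (Ann t ty) = chk ctx t ty >> pure ty
syn _   TT         = Just TBool
syn _   FF         = Just TBool
syn _   (Lam _)    = Nothing  -- a bare Lam can only be checked

-- Chk: we are handed the type and check the term against it.
chk :: Ctx -> Tm -> Ty -> Maybe ()
chk ctx (Lam b) (TArr dom cod) = chk (dom : ctx) b cod
chk ctx t ty = do
  ty' <- syn ctx t            -- fall back to synthesis...
  if ty == ty' then pure () else Nothing  -- ...and compare
```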

So what's the deal with things holding "on the nose"? 🏀🦭?
