Pinned toot

@hallasurvivor @de_Jong_Tom

The idea is quite simple and beautiful. As Tom says, you can
analogously use Cayley's theorem for groups or monoids. Functional
programmers use the same trick, where it's called "difference lists".
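
(For the Haskell-inclined, here is a minimal sketch of the
difference-list version of the same trick; all names here are mine:)

-- A difference list represents a list by the function that prepends it;
-- this is the Cayley/Yoneda representation of the free monoid.
newtype DList a = DList ([a] -> [a])

fromList :: [a] -> DList a
fromList xs = DList (xs ++)

toList :: DList a -> [a]
toList (DList f) = f []              -- "apply to the identity"

instance Semigroup (DList a) where
  DList f <> DList g = DList (f . g) -- append is function composition

instance Monoid (DList a) where
  mempty = DList id

-- However the appends are parenthesized, toList produces the
-- right-associated list and the mempty's vanish:
--   toList ((fromList [1] <> mempty) <> fromList [2]) == [1,2]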

Take an expression
((id o (f o g)) o h)
where f,g,h are morphisms in a category 𝓒.
This (up to quotienting) is a morphism in the free category on the
underlying graph of 𝓒, call this F (U 𝓒).

The UMP of the free category says we can define a functor F (U 𝓒) → 𝓓 if
we give a homomorphism of the underlying graphs U 𝓒 → U 𝓓.

So we take 𝓓 = 𝓟o 𝓒, the presheaf category on 𝓒. We define a functor
F (U 𝓒) → 𝓟o 𝓒 by defining it on the graph: U 𝓒 → U (𝓟o 𝓒). To do that
we just use the underlying homomorphism of the Yoneda embedding, which
we recall is defined on morphisms as
Y(f)(x) = f o x
Then we get a functor ⟦_⟧ : F (U 𝓒) → 𝓟o 𝓒 satisfying
⟦ f ⟧(x) = f o x.

Now let's look at what happens when we take ⟦_⟧ on that expression
from before, and note that all of the following equations are "by
definition" rather than "by an axiom of a category":
⟦ ((id o (f o g)) o h) ⟧(x)
= ⟦ (id o (f o g)) ⟧ (⟦ h ⟧ x)
= id (⟦ (f o g) ⟧ (⟦ h ⟧ x))
= id (⟦ f ⟧ (⟦ g ⟧ (⟦ h ⟧ x)))
= f o (g o (h o x))

So we get that interpreting the morphism in the presheaf category
"automatically" re-associates all of the compositions and removes any
ids.

Finally, we can extract from this a morphism in the original category
by applying this to id:

⟦ ((id o (f o g)) o h) ⟧(id) = f o (g o (h o id))

And we see that what we get back is what we would have defined as a
"normal form" for category expressions.

Pinned toot

Here are some different approaches to modeling the judgments of simple
type theory/the simply typed lambda calculus. Each one can also
express the connectives, with varying levels of convenience.

1. Cartesian categories (always models product types, others work for any subset of connectives)
2. Cartesian multicategories
3. Simple Category with Families
4. A slight variant of (3): A cartesian category C_c, a set C_t of
"types", a presheaf Tm(A) on C_c for each type A such that Tm(A) is
representable.
5. C-T structure: A cartesian category C_c with a subset of its
objects C_t which are the types.
6. A reformulation of (5): A cartesian category C_c, a category C_t
and a fully faithful functor from C_t to C_c.

4-6 should all be equivalent. 3 is technically the most general
because you don't assume the category of contexts is actually a
cartesian category, only that you have a terminal object and products
with types.

(2) is less general than 4-6 because you require the category of
contexts to be freely generated from terms. I'm not sure what this
buys you exactly; it seems to me a little like the syntax infecting
the semantics.

(1) is the least general because it requires your interpretation of
types to have products. But this is the most common situation in
mathematical (i.e., non-syntactic) models, and adding products is
conservative anyway (because of Yoneda/sheaves), so there is some good
justification for only studying these models. Of course, the same
could be said of (pre)sheaf categories, so this reasoning really adds
a (0) to the list: Grothendieck toposes, analogous to the completeness
of Kripke semantics. Additionally, you only need presheaf categories
rather than sheaf categories if you stick to the negative fragment of
the lambda calculus (products, unit and functions).
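
For a taste of what (1) looks like in code, here is a rough Haskell
sketch of a cartesian (closed) category as a class, simplified by
reusing Haskell's own (,), () and (->) for the chosen products,
terminal object and exponentials (in a real definition those are extra
structure, not fixed type formers):

class Cartesian hom where
  idC   :: hom a a
  compC :: hom b c -> hom a b -> hom a c
  bang  :: hom a ()                    -- unique map to the terminal
  pairC :: hom a b -> hom a c -> hom a (b, c)
  fstC  :: hom (a, b) a
  sndC  :: hom (a, b) b

-- add functions on top, for the negative fragment
class Cartesian hom => CCC hom where
  curryC :: hom (a, b) c -> hom a (b -> c)
  applyC :: hom (b -> c, b) c

instance Cartesian (->) where
  idC = id
  compC = (.)
  bang _ = ()
  pairC f g a = (f a, g a)
  fstC = fst
  sndC = snd

instance CCC (->) where
  curryC = curry
  applyC = uncurry ($)

A model in the sense of (1) is then any instance, and the
interpretation of a term Γ |- t : A is a morphism from [Γ] to [A].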

Pinned toot

A presheaf P on a category C behaves a lot like a "predicate" on C. In my category and types class I introduced presheaves under the name "predicators" (since we'll never get to sheaves anyway). P is a functor C^op -> Set.

Why is a functor P : C^op -> Set like a predicate? Well, we think of an element of P(a) as a "witness" that P is true at a. But the objects of a category are connected by morphisms, so we additionally have that for any f : a -> b and proof p : P(b), we can "compose" to get a proof p o f : P(a). These have to satisfy some natural-looking rules as well.
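
In Haskell-ish pseudocode, the data of such a "predicator" can be sketched like this (hypothetical names):

{-# LANGUAGE RankNTypes #-}

-- for each object a, a set of witnesses p a, plus restriction of
-- witnesses along arrows ("composing a proof with f")
newtype Predicator hom p = Predicator
  { restrict :: forall a b. hom a b -> p b -> p a }
-- the "natural looking rules", not enforced by the types:
--   restrict id      = id
--   restrict (g . f) = restrict f . restrict g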

The reason presheaves are so central in category theory is that we use them to define *universal properties* by asking for the presheaf to be *representable* by an object. A representation of a presheaf P is an object a in C and an isomorphism Y a =~ P between the Yoneda embedding of a and P.

Does this notion of representability generalize something about predicates? Yes! First, the Yoneda embedding: the baby Yoneda embedding of an element a in a set A is a predicate Y a on A that is defined as Y a b := (a = b). So it defines a very small subset: just the element a itself.

Then we would say that a predicate P on a set A is representable by an element a in A when Y a = P, i.e., they define the same predicate, i.e., P(b) is true if and only if a = b. So at first glance this seems like a strange notion: representability of a predicate says that it is true of exactly one element?

But if we think of this in terms of logic, this is a familiar notion: saying a predicate P on A is representable says that there exists a *unique* element that satisfies it. So representability of presheaves is in fact a generalization of the unique-existence quantifier of logic. In fact, if you look at the wikipedia page for the unique existential quantifier [1], the 2nd and 4th formulations are direct analogues of two equivalent formulations of representable presheaves.
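
For concreteness, two of the standard equivalent ways of writing unique existence (I won't swear to which numbers they carry on the page) are:

∃x. (P(x) ∧ ∀y. (P(y) → y = x))    and    ∃x. ∀y. (P(y) ↔ y = x)

The left one is the analogue of an object with a universal element, and the right one literally says Y x = P.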

[1]: en.wikipedia.org/wiki/Uniquene

Pinned toot

The notion of model of intuitionistic propositional logic (IPL) I'm
most familiar with (being a category theory/PL nerd) is a Heyting
algebra (HA): a poset with finite meets, finite joins and a Heyting
implication. If you add propositional variables, the model includes a
choice of how to interpret them as elements of the HA and if you
include axioms, their interpretation must be true in the HA.

Then you can interpret every proposition of IPL as an element of the
HA where you interpret conjunctions as meets, disjunctions as joins
and implication as the Heyting implication. If G |- A is provable then
/\[G] <= [A] holds in the HA.

If you look up what a model of IPL is though, you'll likely find
Kripke models, and at first they look a bit different: a Kripke model
is a pair of a poset W and a "forcing relation" ||- between elements
of W and propositions of IPL, satisfying a bunch of properties:

1. for variables x, if w ||- x and v <= w then v ||- x
2. w ||- p /\ q iff w ||- p and w ||- q
3. w ||- true is true
4. w ||- p \/ q iff w ||- p or w ||- q
5. w ||- false is false
6. w ||- p => q iff for any v <= w if v ||- p then v ||- q

Despite the cosmetic difference, Kripke models are instances of the HA
model construction. Given a poset W, the HA in question is
Prop^(W^op), the poset of antitone functions from W to Prop
(classically, Prop is just the boolean order 0 <= 1). This has the
pointwise ordering: f <= g if for every w. f w implies g w.

This is a Heyting Algebra:

top w iff true
(f /\ g) w iff f w and g w
bot w iff false
(f \/ g) w iff f w or g w
(f => g) w iff for any v <= w. if f v then g v
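
Classically (Prop = Bool) you can even run this HA on a finite poset.
A minimal sketch in Haskell, all names mine:

type W   = Int
type Prp = W -> Bool   -- to be a proposition it must be antitone:
                       -- if p w and v <= w then p v

worlds :: [W]
worlds = [0 .. 3]      -- a toy poset of worlds

leq :: W -> W -> Bool
leq = (<=)             -- its order relation

top, bot :: Prp
top _ = True
bot _ = False

andP, orP, impP :: Prp -> Prp -> Prp
andP f g w = f w && g w
orP  f g w = f w || g w
impP f g w = and [ g v | v <- worlds, v `leq` w, f v ]
  -- Heyting implication quantifies over all smaller worlds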

Then an interpretation of the propositional variables in this HA is an
assignment, for each variable x and world w, of a proposition [x] w
that is antitone in w: if [x] w and v <= w then [x] v. Then the HA
interpretation defines exactly our Kripke forcing semantics. It
defines, for each proposition p of IPL, a function from W to Prop that
is antitone, i.e., [p] : W^op -> Prop. Then the definition of a Kripke
model is just an infix notation for this semantics:

w ||- p := [p] w

And if you unravel the definitions of the HA semantics and the HA
structure of the poset, you reproduce exactly the definition of a
Kripke model.
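
Continuing the sketch above, the forcing relation really is just infix
notation for the interpreter:

data IPL = Var String | IPL :/\: IPL | IPL :\/: IPL | IPL :=>: IPL
         | Top | Bot

eval :: (String -> Prp) -> IPL -> Prp
eval rho (Var x)    = rho x          -- [x], assumed antitone
eval rho (p :/\: q) = andP (eval rho p) (eval rho q)
eval rho (p :\/: q) = orP  (eval rho p) (eval rho q)
eval rho (p :=>: q) = impP (eval rho p) (eval rho q)
eval _   Top        = top
eval _   Bot        = bot

-- w ||- p := [p] w
forces :: (String -> Prp) -> W -> IPL -> Bool
forces rho w p = eval rho p w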

Bonus: this is all a "de-categorification" of a similar situation for
the lambda calculus. The Kripke models become presheaf categories and
the HAs become bi-cartesian closed categories.

Pinned toot

Category Theory, Presheaves, Long Explanation 

The usual definition of a presheaf P on a category C is a functor C^op -> Set. But here is a more conceptual way to think about it. (For the experts, basically it’s the same idea as a monoid/group action).

First, a category is a notion of object and morphism between objects. These morphisms can be composed if the source of one matches the target of the other. I.e., they can be composed “tip to tail”. You need identity morphisms for this composition and the composition is associative. This all justifies notation like drawing morphisms by concatenating arrows without using parentheses (picture 1). Or writing a composition as `f;g;h` or even `hgf` with just concatenation.

One way to think about a presheaf P on a category C is that it extends the notion of morphism. We had morphisms with a source and target, and the presheaf defines a new notion: a morphism that has a source but no target. What kind of composition should such a thing have? Well, if I have a morphism f with domain A and codomain B, and I have a morphism phi with domain B but no codomain, then I should still be able to compose them to get a morphism with domain A and no codomain. That is, we can compose these "morphisms to nowhere" with ordinary morphisms on one side but not the other (since there is no codomain object to tell us what kind of morphisms we can compose with). This kind of composition should intuitively extend the existing notion: composing an arrow with no codomain with the identity shouldn't change it (id;phi = phi), and there should be an associativity rule for combining with ordinary composition ((f;g);phi = f;(g;phi)). These rules make it so that we can cleanly extend our original notation for composition to these new morphisms (picture 2). It is a very useful exercise to figure out why this data is precisely the same as a functor from C^op to Set.

A few observations:
1. This all works perfectly fine with arrows that have only a codomain and no domain. These are "copresheaves" and are equivalent to functors C -> Set.
2. For any object A of C, we can define a presheaf by taking an arrow with no codomain to be an arrow into A! This satisfies the right identity and associativity rules because it is just ordinary composition. This construction, turning an object of a category into a presheaf on it, is the Yoneda embedding!
3. Once we have seen (2) we should ask for any presheaf: Is this notion of morphisms with no codomain really just the same as morphisms into some fixed object? If it is, then we call it a *representable* presheaf.

The concept of a representable presheaf is, in my opinion, the most important idea of category theory. It deserves more in another post, but here's just one example, which I learned from Emily Riehl's textbook (Category Theory in Context), to tie everything together. Take the category of (undirected) graphs and graph homomorphisms. A K-coloring of a graph is an assignment of a number in {1,…,K} to each vertex such that if two vertices are connected by an edge, they are assigned different colors. The concept of a K-coloring turns out to be a presheaf. We can think of a K-coloring of a graph G as a morphism chi with domain G but no codomain. If I have a graph homomorphism f from H to G, then I can compose the coloring chi with f to get a coloring f;chi on H (since homomorphisms preserve adjacency), and this satisfies the correct identity and associativity rules. Once we realize this we should ask: is this notion representable? I.e., is there a special graph Q(K) such that a K-coloring of G is the same thing as a homomorphism from G to Q(K)? I'll post the answer in a reply.
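
Here is a small executable sketch of that presheaf action in Haskell (my names; homomorphisms are represented by their vertex maps and assumed to preserve adjacency):

import qualified Data.Map as M

type Vertex = Int
type Color  = Int

-- a graph homomorphism f : H -> G, recorded as its vertex map
type Hom = M.Map Vertex Vertex

-- a K-coloring of G: "an arrow out of G with no codomain"
type Coloring = M.Map Vertex Color

-- the presheaf action: pull a coloring of G back along f : H -> G,
-- coloring each vertex v of H by chi(f(v))
precompose :: Hom -> Coloring -> Coloring
precompose f chi = M.map (chi M.!) f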

Pinned toot

The usual syntax for regular expressions is not a regular language because it requires balanced parentheses. The usual syntax for context-free grammars/expressions is not context-free because it expresses recursion using variable names.

Is there any fundamental reason for this, i.e., some kind of diagonalization/fixed point argument or is it just a coincidence? I think the Goedelian argument doesn't apply because they are weaker than Peano arithmetic?

A new version of the Transpension paper ( arxiv.org/abs/2008.08533 ) & technical report ( arxiv.org/abs/2008.08530 ) are available.

Most important changes:
- Much clearer and more concise type-theoretic notations,
- Complete revision & improvement of all introduced terminology (a dictionary w.r.t. former choices of terminology is available in the back),
- The introductory system FFTraS is now more formally a pseudosubsystem of the main system MTraS.

People are talking here about how they "see" presheaves.

Here is how I see them. Suppose you are working in a category C and you want to add a new object X to it.

Then, in particular, you need to say what the hom sets hom(A,X), for A in C, are.

Moreover, you need to say how you would like to compose old morphisms with the morphisms you have added.

And doing all of the above amounts precisely to defining a functor C^op -> Set.

A functor C^op -> Set can be seen as a new, added object to C.

In the presheaf category, you add all possible new objects.

(No wonder it gets too large if the starting category is large. You add every imaginable new object.)
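
Spelled out, the extended category — call it C[X], my notation, a standard "collage"-style construction — would be:

objects:     Ob(C[X]) = Ob(C) + {X}
hom sets:    C[X](A,B) = C(A,B),  C[X](A,X) = P(A),
             C[X](X,A) = ∅,       C[X](X,X) = {id}
composition: for f : A -> B and phi in P(B), phi o f := P(f)(phi)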

The great folks at @causalislands put up a recording of my talk, "Programming Before You Program". Here are some recent results in programming education!

youtube.com/watch?v=0OqGAZo7cC

The recording of the POPL 2023 talk “An Order-Theoretic Analysis of Universe Polymorphism” is already on YouTube! 🤩
youtu.be/bp4yvTRM3DQ
Joint work with Carlo Angiuli and @totbwf

sharing because this has to be valuable information for at least one person other than me: Fallout New Vegas (including all of the DLC) is available for free on the Epic Games store until June 1st 👀​

@maxsnew Nice! A different perspective that I think is also illuminating is from enriched category theory: suppose C is a cartesian closed category. The arrow mapping of an enriched endofunctor F on C is a map (A => B) -> (FA => FB) in C. Then the Yoneda embedding on this map is a natural transformation C(Γ, A => B) -> C(Γ, FA => FB), or equivalently C(Γ × A, B) -> C(Γ × FA, FB). Mappings like this can be shown (just applying Yoneda on B again?) to be in bijection with the usual definition of strength Γ × F A -> F(Γ × A).

This way of reasoning allows one to systematically transfer concepts from enriched categories/enriched functors to actegories/strong functors, which is really nice.

Dylan McDermott & Tarmo Uustalu's paper arxiv.org/abs/2207.00851 explains this really nicely.

I used to find tensorial strength very arbitrary but it is easy to remember if you think of it in terms of *actions of a monoidal category on a category* as a generalization of actions of a monoid on a set.

If you have a monoid M acting on X and Y then an equivariant function is a function f : X -> Y such that f(m * x) = m * f(x).

If instead you have a monoidal category M acting on categories C and D, then a strong functor is a kind of "lax equivariant functor", i.e., a functor F : C -> D with a natural transformation
m * F(c) -> F(m * c)
plus some typical equations that say it interacts well with the unit and associativity isos of the actions. I assume someone has similarly defined "colax equivariant" and "pseudo equivariant" functors analogously.

This is all a bit obscured by the fact that usually when we talk about strength we are talking about an *endofunctor* of a monoidal category acting on itself.
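
In Haskell terms, this self-action case is the familiar fact that every Functor carries a canonical strength for the action of (Hask, (,), ()) on itself; a minimal sketch:

-- m * F(c) -> F(m * c), with * taken to be (,)
strength :: Functor f => (e, f a) -> f (e, a)
strength (e, fa) = fmap (\a -> (e, a)) fa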

Computer scientists: "Mathematicians are so bad at naming, ``lemma 12`` is useless, it's important to have descriptive names."

Also computer scientists:

If we just replaced all operational semantics with "denotational semantics in step-indexed sets/guarded type theory" (with a weaker equational theory for recursive stuff) wouldn't this be completely sufficient for everything we use operational semantics for now and be much more compositional/reusable?

My math shit post accidentally fell into plausible bad take territory

France has banned short-haul flights to destinations where the same journey can be made by train in under 2.5 hours. bbc.com/news/world-europe-6568

I'm completely drunk with power after learning that Agda's variable generalization works with nested generalization. Now the code is significantly shorter than the on-paper version of the same language/logic.

And the best way to solve a metatheory problem is to avoid it of course :)

Explicit substitution calculi solve/avoid every problem I've had formalizing syntax in a proof assistant.
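
To give a flavor, a minimal sketch of such a syntax (roughly in the style of the lambda-sigma calculus; all names mine), where substitutions are first-class syntax rather than a recursively-defined meta-operation:

data Tm    = Var Int           -- de Bruijn index
           | App Tm Tm
           | Lam Tm
           | Sub Tm Subst      -- explicit substitution t[s] as a term former

data Subst = IdS               -- identity substitution
           | Shift             -- weakening
           | Cons Tm Subst     -- extend a substitution with a term
           | Comp Subst Subst  -- compose substitutions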

I have a fully funded 2023 home PhD studentship at University of Nottingham School of Computer Science. If you are interested in working with me in an area related to my interests, please get in touch to discuss potential research topics. The deadline of June 30th is quite tight. Further details can be found on my teaching page:

stringdiagram.com/teaching/

Boolean algebras are a standard way to formalize the concepts of 'and', 'or', 'not', 'true' and 'false' in classical logic. If you aren't careful, the list of axioms is pretty long! Each individual axiom makes perfect sense - but it's not obvious when you've got enough.

Things get simpler if you already know the concept of a 'ring' - roughly, a gadget with +, ×, 0 and 1 obeying a bunch of familiar axioms. Then you can define a 'boolean ring' to be a ring where every element x has x² = x. Boolean algebras turn out to be equivalent to boolean rings!

There's a trick, though. The × in your boolean ring is 'and', but the + in your boolean ring isn't 'or'. It's exclusive or! You can define this in any boolean algebra. In words:

x exclusive or y = (x or y) and not(x and y)

In symbols:

x + y = (x ∨ y) ∧ ¬(x ∧ y)

To go back from boolean rings to boolean algebras, we define

x ∨ y = x + y + xy
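
A tiny sanity check of the translation in Haskell, with + as exclusive or and × as 'and':

plus, times :: Bool -> Bool -> Bool
plus  = (/=)   -- x + y is exclusive or
times = (&&)   -- x * y is and

-- recover 'or' from the ring: x ∨ y = x + y + xy
orB :: Bool -> Bool -> Bool
orB x y = (x `plus` y) `plus` (x `times` y)

-- orB agrees with (||) on all four inputs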

An interesting thing about boolean rings is that they automatically obey x + x = 0 and xy = yx. If you like fiddling around with equations - and honestly, who doesn't? - you can have fun trying to prove these from the ring axioms and x² = x. If you give up, you can see the answer here:

ncatlab.org/nlab/show/Boolean+

But why am I talking about this today?

(1/2)

ugh the goal parsing in my agda tactics is so brittle

I'm looking forward to going around making things opaque. I defined quotients in terms of coequalisers for the 1lab, since I find the latter more intuitive. Normally this is fine, but sometimes I'll normalise a goal too far and get shit like "Coeq (\x → x .fst .fst) (\x → x .fst .snd)", which is objectively useless. Very high on the opaque to-do list.

Florida's anti-LGBTQ policies have gotten so bad that Disney is pulling billion-dollar developments from the state. Meanwhile, ACM is holding 17 co-located conferences and workshops there next month, because of course they are...
