This morning's fun paper: "Normalizing Resistor Networks" https://arxiv.org/abs/2303.11839
Wonderful example of 'if you have the right tools, and can make just the right definitions, then some classical stuff just falls right out.' I love such papers because all the beauty is in the definitions; after that, it's a matter of unwinding.
Unfortunately, I'm sure some idiot referee won't recognize that, and will complain that it's "too easy". "Looking hard" should be perceived as a negative, not a positive!!!
Methinks that @pigworker and I are rapidly hurtling towards X/~ bad, X//G good. Or maybe we're already there, we just don't quite know it consciously (yet)?
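(For onlookers: by X//G I mean the usual action groupoid aka homotopy quotient, nothing exotic. X/~ is the bare set of equivalence classes; X//G keeps the objects x \in X and adds a morphism g : x -> g·x for each g \in G. The latter remembers *why* two elements get identified; the former forgets.)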
One of the things I'm enjoying in tangling with containers that have wiggly positions with @pigworker is how differently we're approaching the same problem.
I'm currently really puzzled by why groups have anything to do with the matter at all! One approach is to start with containers with wiggly positions - groups are then in on the ground floor.
Another approach is to look at various free constructions (left adjoints to forgetful functors from theories back down to Set) that can be seen as giving rise to those containers. From that perspective, the fact that groups are involved at all is very mysterious.
I'm looking for academic textbooks way out of my field: maybe it's linguistics, maybe it's philosophy, I don't know.
It seems obvious that (natural) language, especially all vocabulary as well as all 'meaning' associated to vocabulary, is 100% a convention. While there might be some kinds of 'a priori' stuff about grammar, I'd guess that most of that is driven not by being innate but rather by being *learnable*.
This seems completely obvious to me (but, like Sapir-Whorf, could be quite wrong!!) So: are any of my readers able to help steer me towards the right things to read?
This morning's ponder: maybe 'finite' should not be something that is imposed (for example: when considering arities) but rather is something that we should let emerge naturally (such as when looking at free constructions).
Brought to you by formalizing lots of stuff in Agda, where the assumption 'finite' is seldom needed.
A further hint that 'finite' is not a good property: constructively, there are many different non-equivalent ways of defining something that means 'finite' classically.
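For the record, my summary of the well-known candidates (nothing new here): a set X is Bishop-finite when X \simeq Fin n for some n; Kuratowski-finite when there is some surjection Fin n -> X; subfinite when X embeds into a Bishop-finite set; Dedekind-finite when every injection X -> X is a bijection. Classically these all coincide; constructively they are genuinely different notions.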
Category Theory as Generalized Logic (Limits)
As discussed previously (https://types.pl/@maxsnew/109912101112579711), a presheaf can be viewed as a categorified notion of a predicate, and universal properties correspond to unique existence of a solution. This also means we can take any definition in category theory and get back a (possibly trivial) definition in ordinary logic.
For instance, take limits. A diagram in C is a functor D : I -> C for some "shape" category I. A cone from an object c \in C to D is a natural transformation from the constant functor (K(c) : I -> C) to D. This definition gives us a presheaf (cone D) : C^op -> Set for each D. More generally, we can think of it as a profunctor, covariant in (I -> C) and contravariant in C. A limit of a diagram D is an object equipped with data exhibiting it as a representation of cone D.
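Concretely, "it represents cone D" unfolds to a natural isomorphism (standard, just spelling it out): Hom_C(c, lim D) \cong (cone D)(c), naturally in c \in C.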
What does this mean when we "de-categorify" to sets? Well, a D-cone over c is a family of morphisms from c to D i, one for each i \in I, commuting with all of the D f : D i -> D j. When de-categorifying to sets, functors become functions and morphisms become equalities. So a diagram is just a function (D : I -> X), and a D-cone is an element x \in X such that x = D i for every i \in I. That is, a cone is an element equal to every element in the image of the diagram D. D has a limit if there is a *unique* element of X equal to every element in the image of D. So if I is non-empty, a limit exists if and only if D is a constant function, and the limiting element is that constant value. If I is empty, a limit exists if and only if X is a singleton (maybe that's the right definition of when a function 0 -> X is constant?).
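In one line, with a set X viewed as the category whose hom-sets are Hom(x, y) = { * | x = y } (my phrasing of the de-categorification): lim D exists iff there is a unique x \in X with x = D i for all i \in I.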
Exercise: from this de-categorification perspective, what is the analog of the theorem that Right adjoints preserve limits?
Another good one:
> Agda uses call-by-need technique for evaluation. This means that the expressions are not
> evaluated until they are needed. Agda follows strict evaluation, the arguments of the function is
> evaluated before evaluating the function itself.
You know something's fishy when you read:
> It [Agda] also helps programmers to automatically check their
> work using a sound type system and a rich library of proof tactics.
Anyone know how I'd be able to tell if something was written by GPT-3/4? I'm reviewing something that uses a lot of the right words, and is written in cogent English (by someone whose English is weak), but is ridiculously false every second sentence or so.
It reads like it's been back-patched to include references to real things - but the real things don't make the false claims attributed to them.
So I got into a conversation recently (on here, but I'll keep this thread separate) where I failed to express myself clearly. I won't reprise that here, but instead talk about something that was definitely part of my motivation in trying to engage in that conversation.
Wow, quite the prelude!
I was doing some work on operations on theories and someone pointed out "Oh, that's just a pushout." The problem?
1. they were 100% correct, and
2. that was completely useless!
Why?
Because it was even easier to show that there could be no canonical pushout. In other words, to write down the pushout, you needed to make some choices, and these choices were necessarily dependent on the input. So?
The issue: the result of the operation was supposed to be both understandable by humans (aka machine-generated names are awful) and compositional. In fact, one of the requirements of the framework was that you shouldn't be able to tell which 'path' was used to create a given theory -- because one can prove that the result is path-independent. And, in fact, the pushouts involved did have the right commutativity and associativity properties, so this isn't a silly thing to ask for.
In other words, the mathematics (correctly!) said the choices didn't matter, and very basic UI principles said (correctly!!) that the choices mattered a lot.
I've seen some people resolve this by creating complex name-generation algorithms (which I guess you're supposed to run in your head to predict how things compose?). Instead, I solved it by requiring the user to provide a means of disambiguation whenever their pushout had no canonical answer.
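A toy instance of the phenomenon (hypothetical theory names, not my actual examples): let Carrier = { U : type } and let Pointed extend it with e : U. The pushout of Pointed <- Carrier -> Pointed must contain two distinct constants, both of which 'want' to be called e. Any concrete presentation has to pick names - e1 and e2? e and e'? - and while all choices yield isomorphic theories, none of them is canonical.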
['Oh, I read the other thread, and this is completely different,' some onlooker might think. Of course it is - I said so at the start! This isn't an analogy story, it's a what-motivated-me story!]
This should be in the literature somewhere, but I'm having a really hard time finding it:
When are two morphisms of polynomials equivalent? Another phrasing would be, when are two morphisms of containers equivalent? I think I could ask again using (lawless) lenses a la @julesh and it would again be essentially the same question.
And, of course I don't want some kind of strict equality!! I really do mean equivalence.
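For reference, the notion of morphism I have in mind is the standard one (following Abbott, Altenkirch & Ghani): a morphism (f, g) : (S ◁ P) -> (T ◁ Q) consists of f : S -> T on shapes together with, for each s \in S, a backwards map g_s : Q (f s) -> P s on positions. The question is then: what is the right notion of equivalence between two such pairs (f, g) and (f', g')?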
A category with object set K is a square matrix C \in Set^{K x K} equipped with elementwise functions
I -> C
C^2 -> C
satisfying the monoid rules.
A presheaf is a vector P \in Set^K equipped with elementwise functions
CP -> P
satisfying monoid action rules.
A copresheaf Q is a vector Q \in Set^K with elementwise functions
Q*C -> Q* (with * the transpose)
satisfying monoid action rules.
A profunctor is similarly a matrix in Set^{K x L}.
A dagger category is a matrix C that is isomorphic to C*.
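To be explicit about the algebra in play (just spelling out the convention): matrix product and identity are taken with Σ as sum and × as product, so (C C')(i, k) = Σ_{j \in K} C(i, j) × C'(j, k), and I(i, j) = 1 when i = j and 0 (the empty set) otherwise. "Satisfying the monoid rules" then means this product is associative and unital up to isomorphism.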
Aha, @samth I think I know why I wasn't seeing your replies. I thought I was following you, but apparently not? It looks like there are two copies of you, as far as my Mastodon is concerned!
What should programs actually denote? Old work in denotational semantics starts from a PL whose operational semantics may not be sane, works really hard, and finds a post-facto "explanation" for it. Very little sense of elegance seems to pervade some of that work.
It's been long enough that we can look at it the other way around: there are things that we actually want to say -- can we reverse-engineer some decent PLs, created from scratch to say elegant things instead of merely being designed to order computers around?
[Note that I'm not putting down early work on denotational semantics - it is often a mathematical tour de force, with rich insights! I'm merely questioning if maybe it's time to turn things around.]
Computing Scientist, ex-mathematician. Currently in academia; spent considerable time in industry as well. Into weird programming languages and the outer reaches of programming and software engineering. Currently exploring metaprogramming, quantum programming, DSLs and "generate everything".