Software Foundations (softwarefoundations.cis.upenn.) instead recommends:

Hint Extern 1 (?x ≤ ?y) =>
  match goal with
  | H: x ≤ ?z |- _ => apply (le_trans x z y)
  | H: ?z ≤ y |- _ => apply (le_trans x z y)
  end : lat.

But in my experience this often fails, because it cannot use quantified hypotheses or hypotheses with a simple precondition.

So this is a tiny tip if you're trying to automate transitivity and the Software Foundations approach doesn't work: try the two Hint Externs above.


If you want to make an eauto hint database with transitivity reasoning on a relation (≤), just adding the transitivity lemma to the hint db doesn't work well: (1) performance will be extremely slow due to exponential blowup, and (2) you actually need to add two copies of the transitivity lemma to get good automation, one with the preconditions reversed, because of the order in which the preconditions get solved.

A simple solution that works well is to add two Hint Externs where in each case one of the two subgoals has to be solved trivially:

Local Hint Extern 1 (?x ≤ ?y) => eapply le_trans; [solve [eauto 2]|] : lat.
Local Hint Extern 1 (?x ≤ ?y) => eapply le_trans; [|solve [eauto 2]] : lat.

Now transitivity will only be invoked for goal ?x ≤ ?y if you already have ?x ≤ ?a or ?a ≤ ?y as hypotheses, or if one of those can be proved with eauto 2 (which allows it to work even if ?x ≤ ?a or ?a ≤ ?y is behind a forall quantifier).

Hey mastodon!! I'm on the CS faculty job market this year---check out my stuff at languagesforsyste.ms, and maybe I'll see some of y'all on campus???

Fingers crossed!!

I wish WASM would support irreducible control flow and multiple entry points for functions.

The performance of the generated code is an order of magnitude better than an AST interpreter, while having almost zero compilation cost compared to generating the AST in the first place.


The idea is to pre-compile small code-snippet functions with LLVM and then stitch them together. They use CPS with the GHC calling convention, so that function calls and returns are jumps and arguments are passed in registers. Using linker information, they can replace the targets of those return jumps to stitch control flow together. Code snippets can be parameterized over constants that get patched in.


Induction and coinduction are nicely symmetric for predicates/relations, but what's the coinductive analogue of dependent elimination for inductive types?

A foundation of mathematics in which induction is an axiom. Or even better, an infinite number of axioms but then phrased as a separate syntactic check.

Defining a different type class instance for Vec n T and Vec (n+0) T

I have witnessed the most American thing imaginable today: -23C weather, thick layer of snow, but an entire outdoor street snow free and steaming hot due to underfloor heating.

Generating functions are a magical mathematical method to count the number of inhabitants of recursive types.
Consider the type \(T(X) = 1 + X \times T(X) \times T(X)\) of binary trees with \(X\)'s at the nodes. Generating functions answer the question "how many different trees of size \(n\) are there?". In other words, we want to find the coefficients of an isomorphism

$$
T(X) \cong 1 + X + 2 X^2 + 5 X^3 + 14 X^4 + 42 X^5 + \cdots
$$

Here we have written \(T(X)\) as a disjoint union over how many nodes the tree has.
The term \(5X^3 = \{0,1,2,3,4\} \times X \times X \times X\) represents the 5 different shapes of trees with 3 nodes.
To specify such a tree, we choose which of the 5 shapes it has by giving an element of \(\{0,1,2,3,4\}\),
and we give a 3-tuple of \(X\) values.
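As a sanity check, those 5 shapes can be enumerated by brute force. Here is a minimal sketch in Python (the representation and names are my own): a tree is either `None` or a pair of subtrees, and `shapes(n)` lists every shape with exactly `n` nodes.

```python
def shapes(n):
    """All binary tree shapes with exactly n nodes, as nested tuples.

    A tree is either None (the empty tree, 0 nodes) or a pair
    (left, right) of subtrees, which accounts for one node at the root.
    """
    if n == 0:
        return [None]
    return [(left, right)
            for i in range(n)                # i nodes go to the left subtree
            for left in shapes(i)
            for right in shapes(n - 1 - i)]  # the remaining n-1-i go right

print(len(shapes(3)))                        # 5
print([len(shapes(n)) for n in range(6)])    # [1, 1, 2, 5, 14, 42]
```

The counts reproduce the coefficients of the series above, as expected.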

To compute the numbers, we could repeatedly unfold the defining equation \(T(X) = 1 + X\times T(X)^2\), but there is a more interesting way to compute it. We rewrite the defining equation by treating it as high school algebra:

$$X\cdot T(X)^2 - T(X) + 1 = 0$$

Solve the equation using the quadratic formula, choosing the minus sign so that \(T(0) = 1\) (the plus sign diverges at \(X = 0\)):

$$T(X) = \frac{1 - \sqrt{1 - 4X}}{2X}$$

We can now Taylor expand this function by computing derivatives:

$$T(X) = T(0) + T'(0)X + \frac{T''(0)}{2}X^2 + \frac{T'''(0)}{3!}X^3 + \cdots$$

So the \(n\)-th coefficient in the series is the \(n\)-th derivative \(T^{(n)}(0)\) divided by \(n\) factorial.
It is not too hard to compute this exactly to get a formula for how many binary trees of size \(n\) there are:

$$
\frac{T^{(n)}(0)}{n!} = \frac{(2n)!}{n!\,(n+1)!}
$$
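Both routes can be checked numerically: iterating the defining equation \(T = 1 + X \cdot T(X)^2\) on truncated power series yields the same coefficients as the closed form \((2n)!/(n!\,(n+1)!)\). A quick Python sketch (helper names are mine):

```python
from math import factorial

N = 10  # number of coefficients to compute

def mul(a, b):
    """Multiply two power series (coefficient lists), truncated to N terms."""
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

# Iterate T <- 1 + X * T^2 starting from T = 0; each pass fixes one more
# coefficient, so N passes suffice.
t = [0] * N
for _ in range(N):
    sq = mul(t, t)
    t = [1] + sq[:N - 1]   # multiplying by X shifts coefficients right by one

closed_form = [factorial(2 * n) // (factorial(n) * factorial(n + 1))
               for n in range(N)]
print(t)                 # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862]
print(t == closed_form)  # True
```

These are the Catalan numbers, which indeed count binary tree shapes.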

The method of generating functions extends in a very beautiful way to other data types.
For instance, we could consider trees with a variable branching factor, where each node stores a list of children rather than a 2-tuple of children.
It even works for types not usually seen in programming languages, such as binary trees where each node stores an *unordered* pair of children, or a multiset of children.

en.wikipedia.org/wiki/Generati

And also: when? (how many months before the end of your PhD)


How do you decide whether and where to do a postdoc? For my PhD I decided mostly randomly, and I got lucky, but in retrospect it could have been bad.

There are a lot of gems in the old Scheme/Lisp literature. Here's a nice presentation about one of them: youtube.com/watch?v=K-l5CdeHn7

The Javascript world is slowly approaching the FranTK era of FRP.

ChatGPT is incredibly good at programming. Something like it will for sure be able to do most formal proofs in a proof assistant, as long as lemma statements for intermediate results are provided.

For those who want a good PL read, I can recommend Michael Arntzenius' PhD Thesis (available at rntz.net/files/thesis.pdf ). The level of detail included in the wonderful narrative made it a really easy read (for me).

I don't even care very much about Datalog, but all the techniques deployed, in the manner they were deployed, captivated me.

(Not sure if rntz is on here?)

\(sort : \mathrm{list}\ \mathbb{R} \to \mathrm{list}\ \mathbb{R}\) is a computable function

types.pl

A Mastodon instance for programming language theorists and mathematicians. Or just anyone who wants to hang out.