
@JacquesC2 @Andrev And it should begin with first-class data descriptions, with structure included. Far too much of the issue is that we start by declaring generatively, as if it comes from nowhere, a raw data representation. Only then do we get to do the hard work of noticing that, accidentally on purpose, it has some structure we care about.

Generative data declarations have survived half a century with little by way of challenge, perhaps because "science" says to leave them alone while you fiddle with other stuff.

What do universities think female scientists want:
- Paid maternity leave
- Deadline extensions for having kids
- Flexible working hours

What do we actually want:
- A safe work environment with no everyday sexism
- Fair review processes, double-blind where possible
- Being included in the important conversations
- Equal treatment in being considered for promotions and pay raises
- Not being forced into service work that no one wants to do

#InternationalWomensDay #IWD2023

Lesson that all scholars should be taught (and that now-me wishes that old-me had properly understood): you learning something is close to useless unless you write it up and make it available to others to learn from.

Today's reminder comes in the guise of "Design patterns of hierarchies of order structures". It's a great read. This is stuff that I knew a long time ago. But I made the error of not writing it up, so others had to go through the pain of rediscovery, and they deserve the rewards of the citations.

In a number of disciplines, there's a feeling that papers must be "large" (not in size, but in quantity of new results). While salami-slicing is despicable, there's really something to be said for publishing 'clean' ideas as soon as they are mature enough to be truly useful to others.

Basically: publishing should be about knowledge dissemination. Useful knowledge should be rewarded by being reused (and thus credited).

Prompted by recent discussions, I got the boundary scavenging prototype back to roughly where it used to be. Demo attached!

If you want the details:

  • Boundary information is not associated with clauses, but with interaction points

  • This information is collected during unification, by looking at equations which are headed by a metavariable and "in the pattern fragment, modulo endpoints" (i.e.: all arguments are variables, which must be distinct, or i0/i1)

  • A problem like this doesn't give a unique solution, but it does give a partial solution: it's a face of a partial element which the meta must extend (internally I don't represent these as Partials... yet). A spine like this can still be inverted [1], by skipping over endpoints, so the RHS can be put in the meta's context

  • When printing, we can show the user this information

  • Bonus: Unsolved boundary constraints aren't shown to the user (strictly speaking they shouldn't be highlighted, either, but it's a WIP). Might be too much, but I decided to suppress any unsolved constraint that's headed by an interaction meta

  • Bonus: Since the equations we recover are in the meta's context, we can print the value of the term at each face, too! This closes a very old feature request:

[1]: One of the many reasons trying to extend the pattern fragment sucks is that now there's ambiguity over how to abstract over the RHS:

?0 A x y p i0 = p i0

should this invert to

?0 := λ A x y p i (i = i0) → p i0

or to

?0 := λ A x y p i (i = i0) → p i

but happily in this case it doesn't matter (I use the former). Now to test the hell out of the fancy substitutions I'm inventing...
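To make the "pattern fragment, modulo endpoints" condition concrete, here is a toy sketch in Haskell (my own simplification for illustration, not the actual implementation): a spine is invertible when its arguments are distinct variables or the endpoints i0/i1, and inversion simply skips over the endpoints.

```haskell
-- Arguments in a meta's spine: variables, or the interval endpoints i0/i1.
data Arg = Var String | I0 | I1 deriving (Show, Eq)

-- Invert a spine: keep the variables (checking distinctness), skip endpoints.
-- Returns Nothing when a variable repeats, i.e. we leave the pattern fragment.
invertSpine :: [Arg] -> Maybe [String]
invertSpine = go []
  where
    go seen []              = Just (reverse seen)
    go seen (Var v : rest)
      | v `elem` seen       = Nothing            -- repeated variable: not a pattern
      | otherwise           = go (v : seen) rest
    go seen (_ : rest)      = go seen rest       -- i0/i1: skip over the endpoint
```

The recovered variable list is what lets the RHS be transported into the meta's context, as in the footnote above.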

The 21st annual Oregon Programming Languages Summer School (OPLSS) will be held from June 26th to July 8th, 2023 at the University of Oregon. The theme will be "types, semantics, and logic", and the school is being organized by Zena Ariola, Stephanie Balzer, and myself. More details soon!

Can't wait for you all to read @joey's dissertation. It's pretty awesome.

Quite a few people have registered for MGS'23, as usual. This is a reminder that the early registration period ends in 6 days. After that, the registration fee increases considerably:
Looking forward to seeing you there.

Me, investigating an issue @agdakx ran into: "Wow, I'm impressed this bug lasted this long without anyone noticing. I'm fairly sure it's impossible to reproduce without reflection"

@maxsnew, 3 whole-ass weeks before: reproduces the bug

Is your water glass empty? Go fill it and have a couple sips.

Well the deadline was postponed by a week, what a surprise 🙃​


telescopes get longer but who cares 

The thing I like about dependent type theory is that stuff is routinely indexed over and thus relativised to other stuff. The same constructions work uniformly, whether the indexing is trivial or gnarly, and there is no problem about earlier stuff showing up in later indices.

That's a key test. What do you have to do to build a telescope? You start with a type X0, but then you add detail in the form of a type X1(x0:X0). What do you have to do to express that relationship? And what happens when you extend to X2(x0:X0, x1:X1(x0)), or further? Do you need new machinery, or does your old machinery flex?
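The step-by-step extension above can be written down directly; a minimal sketch in Lean 4 (the names Tele1/Tele2 are mine, purely for illustration):

```lean
-- One step: a type X0 together with a type X1 depending on it.
def Tele1 (X0 : Type) (X1 : X0 → Type) : Type :=
  (x0 : X0) × X1 x0

-- Extending again needs no new machinery: X2 may mention both earlier entries.
def Tele2 (X0 : Type) (X1 : X0 → Type)
    (X2 : (x0 : X0) → X1 x0 → Type) : Type :=
  (x0 : X0) × (x1 : X1 x0) × X2 x0 x1
```

The point is that the same Σ-former handles every step uniformly, whether the dependency is trivial or gnarly.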

The application for attending the Programming Language Mentoring Workshop (PLMW) @pldi 2023 is open. The deadline is March 20th. Undergraduates, MS students, and junior PhD students are encouraged to apply:

What I think I want in a PL:

predata Foo = A
predata Bar = B | C

data Boing = Foo , Bar

(and of course the same for records, and I'm not set on the syntax)

Foo and Bar would be open sums, and Boing would be closed.

Being me, I'd want to be able to do proper gluings of these, where I allow renamings and overlap specification when I do a join.

Bottom line: our current method of declaring closed types feels anti-modular. Note that this extends to HITs. Doing HIT-like things in a system without HITs is where I discovered the pleasure of doing mix-and-match!!
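Lacking predata, the nearest approximation in an existing language keeps the pieces as separate small sums and makes the join explicit; a hedged Haskell sketch (the injection names are mine, and the renaming/overlap machinery is not modelled):

```haskell
-- The open pieces, declared separately and reusable across joins.
data Foo = A deriving (Show, Eq)
data Bar = B | C deriving (Show, Eq)

-- The closed join "data Boing = Foo , Bar", here as an explicit tagged union.
data Boing = FromFoo Foo | FromBar Bar deriving (Show, Eq)

-- Injections witnessing that Boing glues the two open pieces together.
injFoo :: Foo -> Boing
injFoo = FromFoo

injBar :: Bar -> Boing
injBar = FromBar
```

The anti-modularity complaint is visible even here: the tags FromFoo/FromBar and the injections are boilerplate that a first-class join would generate.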

I am trying to make an example but the scopes do not match up and I'm tired, so I'm not sure if I'll get anything done.


I just posted my review of Tim Urban's book "What's Our Problem?" on GoodReads:

tl;dr: Not what I expected, but an important book nevertheless.

New: an interview with @SURF, the Dutch IT cooperative for education and research, which has launched a Mastodon pilot. This means that tens of thousands of Dutch students and researchers can use Mastodon with just a one-click login via their institute, without having to register and pick out a server.


@mudri @pigworker I'm echoing Conor's comment — Conor discovered long ago that although it is semantically sensible to identify any two absurdity-elimination terms, this has some unexpected syntactic consequences. It makes definitional equality undecidable.

Ok let's write an abstract for . Please don't disturb me.

My students are stuck on a theorem in Cubical Agda and I'm not sure how to help them. It's here: proofassistants.stackexchange.

And I tried to prove it, too, but in the a = [ pos zero ], b = [ neg zero ] case I could not find a proof that also satisfied the necessary definitional equalities (I found a congP application that in theory had the right type, but Cubical Agda yelled at me about some definitional equalities that were not satisfied). Since it's Cubical I have no idea how to find help with this, and so I am curious if type theory mastodon has ideas.

