
In the spirit of the season, I want to share some of my recommendations for the things I really enjoyed in 2022. So without further ado, here is my

Clarinet teacher: With your fingers, if you were a 3rd grader, I'd make you switch to an oboe or a saxophone.
Me: It's great that I'm middle aged and insist on playing the clarinet instead.
Him: As long as you're ready to persist through frustrations.
Me: As a woman in academia? My whole career has prepared me for this!

Cambridge University Press has really done a (tremendous) number on one of my manuscripts LOL

I wish we could convince copy-editors not to randomly change a bunch of technical stuff they don't understand.

I've just noticed that there is now a mechanised proof of strong normalisation for System T (arxiv.org/abs/2303.13258), done by Sebastián Urciuoli in Agda, which is apparently the first formalisation of this result. The formal proof is based on the familiar Tait-style induction with computability predicates. SN of System T via Tait's method is both a fundamental result and a fundamental proof technique, so it's cool to see it done in Agda. I imagine the formal framework could now be extended to establish further strong normalisation results.
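For context, System T is the simply typed lambda calculus extended with natural numbers and a primitive recursor; strong normalisation says every well-typed term terminates. A rough Haskell sketch of the recursor (names and representation are mine, just mirroring System T's Nat-recursion combinator, not anything from the paper):

```haskell
-- Unary naturals, as in System T.
data Nat = Zero | Succ Nat

-- The primitive recursor: rec z s n folds s over n, starting from z,
-- with the step function also seeing the current predecessor.
rec :: a -> (Nat -> a -> a) -> Nat -> a
rec z _ Zero     = z
rec z s (Succ n) = s n (rec z s n)

-- Addition defined via the recursor, System T style.
add :: Nat -> Nat -> Nat
add m = rec m (\_ acc -> Succ acc)

-- Helpers for reading off results.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

fromInt :: Int -> Nat
fromInt 0 = Zero
fromInt k = Succ (fromInt (k - 1))
```

Every function definable this way terminates, which is exactly what the Tait-style computability argument establishes for the calculus itself.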

if you haven't yet seen this graph, take a look. This is hard, incontrovertible evidence that humanity is completely fucked. We aren't going to make it. Solving the crisis isn't a slow off-ramp. This is full-on slamming the brakes and skidding off the road into a ditch and hoping that you can emerge from the wreckage with minimal damage. Oh, and the brakes don't work because we can't engage them directly - only those profiting off the crisis can do that and they don't take our calls

An excerpt from the upcoming book "Certainty by Construction" by Sandy Maguire: reasonablypolymorphic.com/blog

Dependently Typed Programming

I had a typo in my code that was okay (unification for the win), along the lines of:


data Foobar : List String -> Type where
  FB : Elem x xs -> Foobar ys


but with a more noisy data structure.

Idris2 was happy about it until this morning: unification not for the win. I spent an hour working out what was going on, and only by chance did I notice the spelling mistake. It wasn't in a place I would think to look: I assumed my code was correct, and the noisy data structure obfuscated the issue.

Frankly, the error message 'rightfully' said:


cannot unify xs with xs


These are two different xs, mind. The error arises because, at the point in the code where unification fails, the implicit variables for Elem and Foobar are the two different xs.

It would be fantastic to have more research into making linting and error messages for dependently typed programs better.

Perhaps something involving dataflow analysis to see where the information is flowing to, and coming from, within a type signature...

I dunno, pioneering away on the frontier is not easy but sure can be frustratingly fun...

Hello, tactics!

Agda: is a pure language with no defined evaluation order
Also Agda: lists.chalmers.se/pipermail/ag

Github Copilot + Static Typing = <3 (PSA: Copilot is great at Haskell now)

blog.textql.com/copilot--stati

Just your occasional reminder that I keep a curated list of resources for scientific writing - books, web sites, apps, webinars, etc. There's probably something here to help with almost any writing issue! scientistseessquirrel.wordpres

#AskingAutistics I think a lot of us feel like we have been gaslit much of our lives. It happens when people we trust tell us not to believe our own experiences.

"That tag couldn't possibly be so distracting that you can't wear that shirt!"

"I can't hear anything. It must be your imagination."

"I clearly told you to do the thing. You're so irresponsible."

"If you won't look me in the eye you must be hiding something."

"I told you not to mention that. You're so rude."

"Anyone can do that. You're just not trying."

It really does feel like people are gaslighting us when they insist we are mistaken about our own experience of our own lives. But I think most of the time they believe what they are saying, so it's not really classic gaslighting, which I think is intentional, manipulative, and malicious. The thing people do to us is not those things - it's more like they are passing along their own social indoctrination without realizing it. (When I say it like that it kind of sounds like being in a cult, doesn't it?)

So is there a better word than "gaslight" for that? For when they are kind of gaslighting you but really they are passing along their own ignorance?

@ionchy Not true. My current student Eric started his current project in Coq and we switched to Agda because dependent pattern matching in Coq was miserable. We started with no tactics but I'm looking into implementing some domain-specific solvers now.

And no we don't have any numbers in our code lol

In one day - one day! - of going open source, the Typst typesetting system passed 5,000 stars on GitHub.

If you ever needed evidence that there is a real hunger for a TeX replacement, this is it.

github.com/typst/typst

I'm overjoyed to announce that I will be starting as an Associate Professor in Logical Foundations and Formal Methods at the University of Cambridge this fall!

@jonmsterling i think the human brain can only take so much type theory before it needs a day off to nap or something

*Red Team Blues* is my next novel, a #PostCyberpunk anti-finance finance #thriller; it's a major title for my publishers Tor Books and Head of Zeus, and it's swept the trade press with starred reviews all 'round. Despite all that, #Audible will not sell the #audiobook. In fact, Audible won't sell *any* of my audiobooks. Instead, I have to independently produce them and sell them through Kickstarter:

kickstarter.com/projects/docto

1/

I'm too tired just now to tell you what little I know about the beginnings of an answer to this question but...

Orbits. What are orbits? Never mind what groups are or how they act on whatever they act on: what are the orbits? What is it to be a system of orbits and stabilisers that comes from a group acting on a set?

I feel like Ptolemy having a go at Copernicus in 2023.
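Not an answer, but a concrete rendering of the question: given a group acting on a set, the orbit of a point is everything reachable under the action. A small Haskell sketch (illustrative names are my own), closing a point under a list of generator functions:

```haskell
import Data.List (nub, sort)

-- Orbit of x under a list of generators: repeatedly apply the
-- generators to the reachable set until it stops growing.
-- (Generators here are just endofunctions; the group structure is
-- implicit in the assumption that they are invertible.)
orbit :: Ord a => [a -> a] -> a -> [a]
orbit gens x = go [x]
  where
    go seen =
      let next = nub (sort (seen ++ [g y | g <- gens, y <- seen]))
      in if next == seen then seen else go next
```

For example, the rotation n ↦ n+1 (mod 4) acting on {0,1,2,3} puts everything in a single orbit. The question in the toot is, roughly, which partitions-plus-stabiliser-data arise this way without ever mentioning the group.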

Certified logic-based explainable AI (The case of monotonic classifiers). ~ Aurélie Hurault & Joao Marques-Silva. hal.science/hal-04031193/docum #ITP #Coq #XAI

Daniel J. Velleman has translated parts of his classic text "How to prove it" into Lean 4!

djvelleman.github.io/HTPIwL/

"How to prove it in Lean" goes through examples in the original book, and discusses how they can be proved in Lean. He also talks about how Lean's use of type theory rather than set theory affects this kind of basic mathematics, and furthermore gives clear installation instructions for anyone who wants to play along at home!
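To give a flavour (a toy example of my own, not taken from the book): the kind of basic statement "How to Prove It" works through looks like this as a term-mode proof in Lean 4:

```lean
-- If P and Q both hold, then so does Q ∧ P.
example (P Q : Prop) (hP : P) (hQ : Q) : Q ∧ P :=
  ⟨hQ, hP⟩
```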

Proudly helping to avert the AI apocalypse by proposing student projects, thus enabling smart students to spend their brain cycles on building type systems instead of AI models.

(This toot brought to you by several students telling me they found one of my proposals by searching for anything that did not have "AI" or "machine learning" in the description.)


A Mastodon instance for programming language theorists and mathematicians. Or just anyone who wants to hang out.