Data types with Negation -- some slides from a talk about how to understand data type definitions that include negation.

The talks were a while ago, but I never put the slides online anywhere.

A few "recent developments in data structures that were already quite good" papers for your evening:

TEBs -- tree-encoded bitmaps (beating Roaring Bitmaps):

HOTs -- height-optimized tries (beating ARTs):

BloomRFs -- bloom range-filters (beating SuRFs):

One characterization of Answer Set Programming is that it is Datalog with prescient negation. The consequences of this are surprising.

The EGRAPHS workshop will be taking place again at PLDI 2023 in Orlando! See you there!

Start thinking about a talk that will generate great discussion about anything related to e-graphs. Deadline: Apr 5

Here are my group's talks from the recent LLVM Developers Meeting!!

Zhengyang Liu presenting his superoptimizer for x86 vector code

Nader Bushehri presenting our work on discovering miscompiles in the AArch64 backend by combining fuzzing and formal methods

Yuyou Fan presenting his mutation engine for LLVM IR

Vsevolod Livinskii presenting his fuzzer for loop optimizers

Besides the fact that Andy is a delight to read, I think it's important in this day of Rust-infused reflexive dislike of GC to push back a little: a semispace GC is an incredibly elegant and simple device that neatly answers your language's entire memory management question like 75% of the time. Sure there are some limitations and taxes to pay, but they're often within reason for a given problem space, and it works great. Embrace it!

Why cubical type theory, and why cubical Agda?

The discussion in this thread is going to be a bit technical on several fronts. Sorry.

I had been meaning to write this for a while, but this discussion motivated me to write it now:


I like potatoes. That's a shorthand for saying I like data made from least fixpoints of sums of products. Nothing higher-order in sight. Types of things there are countably many of: the things are arbitrarily large, but you can explore any one of them entirely in finite time.
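To make "least fixpoints of sums of products" concrete, here is a small illustrative Haskell sketch of my own (not from the original post): binary trees are the least fixpoint of the sum-of-products 1 + X × X. There are countably many of them, each is arbitrarily large but finite, and any single one can be explored entirely in finite time.

```haskell
-- Tree ≅ μX. 1 + X × X: a least fixpoint of a sum of products.
data Tree = Leaf | Node Tree Tree deriving (Eq, Show)

-- Every tree can be fully explored in finite time.
size :: Tree -> Int
size Leaf       = 1
size (Node l r) = 1 + size l + size r

-- And the whole type can be enumerated, size by size,
-- witnessing that there are only countably many trees.
treesOfSize :: Int -> [Tree]
treesOfSize 1 = [Leaf]
treesOfSize n = [ Node l r | i <- [1 .. n - 2]
                           , l <- treesOfSize i
                           , r <- treesOfSize (n - 1 - i) ]
```

Nothing higher-order in sight: the enumeration works precisely because the type is first-order and inductively generated.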

@burakemir @shriramk @joomy

This is not a shitpost, thank you.

A regexp is a logical formula with denotation in a set of "non-standard truth values": the poset of predicates on strings, rather than the classical truth values {top, bot}. Empty and alternation denote the bottom element and the join of this poset. Regular languages are closed under intersection, so it is fine to add conjunction as well. Kleene star is a particular LFP. The characters are atomic formulae, and the empty string/sequencing are interpreted via a nice monoidal structure on the poset of languages (Day convolution).

A different way to say this is that regular expressions can all be interpreted as formulae in a non-symmetric variant of the logic of bunched implications where the sequencing is the *.
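The first paragraph can be rendered directly as a (hedged, illustrative) Haskell sketch; the names `Re` and `denote` are mine. Each regex denotes an element of the poset `String -> Bool`: `Empty` is bottom, `Alt` is join, `And` is meet, `Seq` splits the string (the Day-convolution monoidal structure), and `Star` is the least fixpoint, unfolded recursively.

```haskell
data Re = Empty | Eps | Chr Char | Alt Re Re | And Re Re | Seq Re Re | Star Re

-- splits "ab" == [("","ab"),("a","b"),("ab","")]
splits :: String -> [(String, String)]
splits s = [ splitAt i s | i <- [0 .. length s] ]

denote :: Re -> (String -> Bool)
denote Empty     _ = False                       -- bottom of the poset
denote Eps       s = null s                      -- the monoidal unit
denote (Chr c)   s = s == [c]                    -- atomic formula
denote (Alt r t) s = denote r s || denote t s    -- join
denote (And r t) s = denote r s && denote t s    -- meet: closure under intersection
denote (Seq r t) s =                             -- "Day convolution": split the string
  any (\(u, v) -> denote r u && denote t v) (splits s)
denote (Star r)  s =                             -- least fixpoint of  X = Eps ∨ r·X
  null s || any (\(u, v) -> not (null u) && denote r u && denote (Star r) v) (splits s)
```

The `not (null u)` guard in the `Star` case is what makes the fixpoint unfolding terminate: the remainder `v` strictly shrinks.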

Something that is kinda cool is that CoDeBruijn actually becomes *simpler* when we have fewer structural rules! For linear types we no longer need multiple thinnings per node; a single one suffices. For *ordered* linear types we can go even further: the thinnings can be encoded by splitting indices instead of masks.
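I don't have the original development at hand, but here is one way to render the ordered-linear point as a Haskell sketch (the `Tm`/`check` names and the exact representation are my own assumptions): since every variable is used exactly once and in order, an application node can carry a single splitting index in place of a pair of thinning masks.

```haskell
-- Ordered linear λ-terms, CoDeBruijn-style: no variable names,
-- and the context bookkeeping is just an index where we split.
data Tm = Var             -- usable only in a one-variable context
        | Lam Tm          -- binds the rightmost variable of the context
        | App Int Tm Tm   -- App i f a: first i context vars go to f, rest to a

-- check n t: does t use an ordered context of exactly n variables?
check :: Int -> Tm -> Bool
check n Var         = n == 1
check n (Lam t)     = check (n + 1) t
check n (App i f a) = 0 <= i && i <= n && check i f && check (n - i) a
```

Compare with the general (non-substructural) case, where each node needs a thinning per subterm to say *which* context variables it touches; here the split index carries all of that information.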

A student of mine is going to speak about the history of the Axiom of Choice at a seminar. She's got all the facts, but she asked me if I knew any juicy gossip about the Axiom of Choice. I have to admit I don't, but perhaps Mathstodon can fulfill its purpose. #axiom #choice

A Mastodon instance for programming language theorists and mathematicians. Or just anyone who wants to hang out.