What should programs actually denote? Old work in denotational semantics starts from a PL whose operational semantics may not be sane, works really hard, and finds a post-facto "explanation" for it. Very little sense of elegance seems to pervade some of that work.
It's been long enough that we can look at it the other way around: there are things that we actually want to say -- can we reverse-engineer some decent PLs that are actually created from scratch to say elegant things, instead of being merely designed to order computers around?
[Note that I'm not putting down early work on denotational semantics - it is often a mathematical tour de force, with rich insights! I'm merely questioning if maybe it's time to turn things around.]
@samth @JacquesC2 True, but I think it does describe (in a perhaps provocative way) the project of full abstraction.
@jonmsterling @samth Sigh - Mastodon is not showing the thread that this is a reply to, so I'm missing the full context!
@JacquesC2 @jonmsterling @samth Open Jon's post in the browser on the mathstodon instance where Sam's instance isn't limited.
@samth Ok, found the thread - apparently my server (types.pl) doesn't like your server!!
Yes, Conal has definitely been thinking this way for a while. I wish that more people did.
You're right that the original work started from asking the question regarding domain equations. Those domain equations are an abstraction over the underlying operational semantics for the untyped lambda calculus. I don't think you'd ever ask yourself that question if your starting point was "internal language of CCCs"!
@JacquesC2 @samth I don’t think I get it… domain equations are very important, even indispensable, regardless of whether you know about CCCs.
@jonmsterling @samth My current opinion is that many of the domain equations that we're writing down are overly simplistic, leading to extremely complex answers.
In other words: we're not asking the right questions.
The advent of extremely powerful total languages, languages that track all sorts of effects, and so on, should make it clear that domain equations that force undefinedness and/or non-termination on you are flawed (overly simplistic).
@JacquesC2 @samth Note that contrary to popular belief, even having recursive types does not force non-termination. Indeed, the category of dcpos (and partial maps) is algebraically compact, but you can nonetheless define the empty dcpo --- and it really is empty.
EDITED to clarify that it is pDcpo that is algebraically compact
@jonmsterling @samth So my function types don't force the domains to be extended with a `⊥` value? I don't mean "non-termination", sorry, that was sloppy of me. I mean being forced to add 'undefined' or `⊥`. I don't want to see those anywhere, and I'm willing to pay the price for not having them forced on me.
@JacquesC2 @samth Indeed, the function spaces do not force you to add 'undefined' or a bottom element; there is of course a partial function space, but you are not required to use this. (But see the clarification from @zyang.)
@JacquesC2 @samth @zyang This is one of the reasons I like to work with call-by-push-value, where my adjunction is the (monadic!) adjunction between pointed dcpos and plain dcpos. This lets me talk about total (but continuous) functions, as well as partial functions in the same language.
@JacquesC2 @samth @zyang Coming back to some of the original questions, I do think it is fair to say that some of the domain theoretic approaches in denotational semantics started to "lose the thread" after a while. Attempts to rectify this came a bit too late, after the field had already exhausted its complexity budget within computer science.
But it is important to remember that domain theory is really interdisciplinary, and arose from a shocking convergence between order theory, lattice theory, non-Hausdorff topology, and computer science. So what might look to us like asking the wrong questions could nonetheless be very important.
@jonmsterling @samth @zyang I agree! Domain theory as a discipline has a lot of wonderful results, bringing together some nice threads.
My principal complaint is that its application to PL has at times been done in overly simplistic ways. It's time for a bit of a reboot - which I think you (@jonmsterling) have a current hand in.
Some context: I'm going to teach our 3rd year "Principles of PL" for the first time this fall. So tons of different things are swirling in my head about how to approach some material. And even though I might end up teaching the classic stuff, that doesn't prevent me from whining about it on here first!!
@JacquesC2 @jonmsterling @samth @zyang from “Domains for Denotational Semantics”: https://link.springer.com/chapter/10.1007/BFb0012801
@ayberkt @JacquesC2 @samth @zyang It is funny... I have a lot of respect for Dana, but I think the problem with domain theory was exactly the opposite of this. The concrete aspects were too arcane, and the helpful abstractions were introduced far too late (the 1990s).
@ayberkt @JacquesC2 @samth @zyang I also think a lot of effort was spent on somewhat arcane variations on domains that enable simpler explicit constructions of things (like solutions to domain equations, but also powerdomains!) — but I don't think the problem was ever that we needed simpler explicit constructions. We needed to bring order to the nightmare of a million different kinds of domain, each of which is compatible with one thing you need but not another thing you need.
Dana was conscious of this arcaneness (though maybe from a different angle): it led him to propose synthetic domain theory, and then, more than a decade later, to famously propose the theory of equilogical spaces as a "new category" for domain theory.
Synthetic domain theory succeeded in many respects — it is a quick way to get a category of domains that is more well-behaved than *any* classical category of domains. But I think this came too late for computer science and PL, as programming language theorists had largely moved on. Synthetic domain theory again came to the fore in the guarded context, which has fruitfully made contact with PL by both obviating **and** anticipating the concept of step-indexing. Today, I hope that all kinds of domain theory will again begin to flourish as the generation of American scientists who so viciously vilified domains are retiring or becoming increasingly marginal in their connection to the ongoing project of PL as a field.
@ayberkt @JacquesC2 @samth @zyang It is with great pleasure that I see more and more young people learning to pick up these 'ancient weapons of a more civilized time', disobeying their teachers, and doing beautiful and interesting work that senior researchers promised would guarantee them unemployment...
Coming back to @JacquesC2, I think it is indeed an amazing time to start re-thinking what the lessons of PL are, and how to teach them to students. I look forward to hearing more about what you come up with!
@jonmsterling @ayberkt @JacquesC2 @samth @zyang
How about this? Domain theory can be used to conceive and write down programs that you never thought of before. Even seemingly impossible programs.
Examples are here:
We don't need domain theory to understand what we already know. We need it to advance the frontiers of what is possible. For this, we need to see more than what is in front of our eyes. We need to explore without knowing in advance what we are going to find.
This is what domain theory offers. And what many other mathematical theories offer. Open your eyes and look further, far beyond what you are familiar with.
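One concrete instance, in the spirit of the examples above, is exhaustive search over the Cantor space of infinite binary sequences. This is my own minimal Python sketch (not code from the linked examples): the quantifiers below terminate whenever the predicate is total and continuous, i.e. only ever inspects finitely many bits of its argument -- even though they range over uncountably many sequences.

```python
# Escardó-style "seemingly impossible" program: decide quantification
# over ALL infinite binary sequences in finite time, assuming the
# predicate is total and continuous (inspects only finitely many bits).
# A sequence is represented as a function from index to bit.

def find(p):
    """Return a sequence satisfying p, if any does; else an arbitrary one."""
    def branch(b):
        rest = None
        def seq(n):
            nonlocal rest
            if n == 0:
                return b
            if rest is None:
                # Lazily search for a tail that makes the extended sequence work;
                # the search only happens if p actually inspects the tail.
                rest = find(lambda s: p(lambda k: b if k == 0 else s(k - 1)))
            return rest(n - 1)
        return seq
    s0 = branch(0)
    return s0 if p(s0) else branch(1)

def forsome(p):
    return p(find(p))

def forall(p):
    return not forsome(lambda s: not p(s))

print(forsome(lambda s: s(0) == 1 and s(2) == 1))  # True
print(forall(lambda s: s(0) == 0))                 # False
print(forall(lambda s: s(3) * 0 == 0))             # True
```

The trick is exactly the domain-theoretic one: continuity of the predicate guarantees the lazy search bottoms out after finitely many bits.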
@MartinEscardo @jonmsterling @ayberkt @samth @zyang Which is entirely compatible with my original post!!! Start from the *elegant* version of the mathematics, learn from it, and map *back* to programming. You're one of the grand wizards of that technique.
@jonmsterling @ayberkt @JacquesC2 @samth @zyang
I would like to link my MGS'08 slides on domain-theoretic denotational semantics here, where I make similar points, and also give a gentle introduction to semantics and its uses:
@jonmsterling @JacquesC2 @samth I might be missing some obvious thing, but how is the category of dcpos algebraically compact? The empty dcpo is the initial algebra of the identity functor, but it isn't the final coalgebra of the identity functor, is it?
@zyang @JacquesC2 @samth Oh sorry, my bad, I misspoke. I meant "pDcpo", the category of dcpos and (Scott-open) partial maps. So my point here is that even though you have a notion of partial map, you are not forced to work inside this category. You can compute the recursive type, but you can nonetheless have a notion of a value.
@jonmsterling @JacquesC2 @samth doesn't the edit contradict the point? In pDcpo the initial object has a point.
@ohad @JacquesC2 @samth The point was badly made on my part... My point is that in practice we can work in the adjunction between dcpo and pDcpo (or the bigger category dcppo). So you still have the empty dcpo in your language, and you can even have the total function space in your language. The problem Jacques was talking about comes from fixing the evaluation strategy too early, e.g. by choosing either CBV or CBN semantics.
@jonmsterling @JacquesC2 @samth what are some examples of things domain equations are important/indispensable for?
@rntz @JacquesC2 @samth Semantics of recursive types when the type operator is neither covariant nor contravariant; semantics of state (these two problems are related, as state gives new examples of types that are neither inductive nor coinductive). An example is the type of actual imperative linked lists:
type LinkedList A =
  | cons : ref (A * LinkedList A)
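To make the point concrete, here is a hypothetical Python rendering of such a type (the names `Cell`, `head`, and `tail` are my own illustration): because the `tail` field is a mutable reference, we can tie a knot and get a cyclic list, behaviour that neither a purely inductive nor a purely coinductive reading of the declaration accounts for.

```python
# An imperative linked-list cell: a value plus a *mutable* reference
# to the rest of the list. Mutation lets us create cycles.

class Cell:
    def __init__(self, head, tail=None):
        self.head = head   # the A component
        self.tail = tail   # mutable reference to the next Cell (or None)

# Build 1 -> 2 -> 3, then redirect the last reference back to the start.
a, b, c = Cell(1), Cell(2), Cell(3)
a.tail, b.tail, c.tail = b, c, a   # c.tail = a closes the cycle

# Walking the "list" now revisits the same cells forever.
node, seen = a, []
for _ in range(6):
    seen.append(node.head)
    node = node.tail
print(seen)  # [1, 2, 3, 1, 2, 3]
```

No finite unfolding of the type describes this heap shape, which is why state pushes you toward types that are neither inductive nor coinductive.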
@jonmsterling @JacquesC2 @samth these are good examples, thanks! but, taking a hard-core denotational design/pure FP/declarative programming perspective, unrestricted state/references and unrestricted recursive types do seem like they fall more toward "designed to order computers around" than "created to say things elegantly"...
@rntz @jonmsterling @JacquesC2 @samth We can get the (programming) elegance back with separation logic or “separation languages” (currently, just Rust); there’s a blog post about this from NeelK.
In turn, that separation logic can benefit from exactly this kind of domain theory (see Iris); and it’s even needed to model Haskell’s ST monad.
Here's one attempt at a counterfactual for why we would still discover domains even if we had started from CCCs.
An obvious question is whether the set-theoretic semantics of the free CCC generated from some base types/function symbols/equational axioms is complete: given Γ ⊢ M, N : A, if [[M]] = [[N]] in all set-theoretic models, is M = N already provable in the free CCC/typed lambda calculus? The answer is no! And the usual counterexamples are ones where you axiomatize a domain equation that has only trivial solutions in Set. Details left as a fun exercise; I can put more in a response. This is all standard, e.g. Lambek & Scott.
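For the curious, here is a hedged sketch of one standard counterexample along Lambek-Scott lines (the particular terms are my own illustration):

```latex
% Axiomatize a base type $D$ with function symbols
%   $r : (D \to D) \to D$, \quad $s : D \to (D \to D)$
% and the equation $s \circ r = \mathrm{id}_{D \to D}$,
% so that $D \to D$ is a retract of $D$.
% In $\mathbf{Set}$, the split surjection $s$ gives $|D \to D| \le |D|$,
% which by Cantor forces $|D| \le 1$; and since $r$'s domain is inhabited,
% $D$ cannot be empty, so $|D| = 1$.  Hence every set-theoretic model is
% trivial and identifies all closed terms of type $D$, e.g.
%   $r(\lambda x.\, x)$ and $r(\lambda x.\, r(s\,x))$,
% although the free CCC on these axioms does not prove them equal.
```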
@samth @JacquesC2 It is indeed the simplest thing that works, but this doesn't endow utlc with some fundamental quality. Sometimes the simplest thing that works just happens to work: Does the smallest universal Turing machine have any fundamental quality? To my knowledge no, it's just about the same as all the other universal TMs.
In this regard it looks like utlc is just some arbitrary structure, because requiring function spaces but not finite products just looks weird and hard to characterize from a categorical perspective. There are a lot of ingredients you can throw in to make something equivalent to utlc that can serve as an equally good foundation for the field.
@trebor @samth @JacquesC2 I do not agree — and note that from the UTLC perspective, as soon as you take retracts (or closure operations?) of a universal domain of this sort, you do have finite products (IIRC).
@jonmsterling @samth @JacquesC2 Yeah, but the point I'm making is something like D = D * D -> D or D * D + D -> D would probably work too. As a comparison, something that I would consider as actually special would be infinity-groupoids, which is indeed the most fundamental thing, subsuming (as opposed to "being able to encode") other definitions of the homotopic concept of spaces.
@trebor @samth @JacquesC2 I still don't agree — a profound aspect of Scott's work here is that you get all that structure for free from the reflexive object, so it is redundant to solve this more complex domain equation.
Regarding infinity groupoids, I agree that this is fundamental, but it is fundamental in the sense that sets are fundamental — so this is like arguing with a geometer whether sets or rings are more important. It doesn't really make sense…
@jonmsterling @samth @JacquesC2 This to me has the same nature as proving every ZF-structure has ordered pairs, by doing some Kuratowski construction. But I would be delighted if it is more interesting than that!
@trebor @samth @JacquesC2 Interesting analogy... But I think it is indeed more interesting than this. This is part of an amazing representation theorem that embeds a model of the untyped lambda calculus into a cartesian closed category (== a model of the typed lambda calculus). It is a very profound statement about the relationship between typed and untyped programming.
@samth I agree that it is "most fundamental". What I'm arguing is that we should also move on.
Best analogy that I can come up with: the natural numbers are "most fundamental" if what you're trying to do is to count. When you're tired of merely counting and want to do things with more general notions of numbers, other things come up.
So where is CS's algebraic closure of the rational numbers? That's a fantastic place to do all sorts of stuff. [Let's assume we're only interested in finitary operations, which is why I didn't name some more topologically-inspired notions of 'number'.]
@samth Mathematics has had many "move on" moments:
- from Roman numerals to using a 'base' in which to represent numbers, (a HUGE move forward!)
- from positive only to allowing zero and negative numbers,
- ... to complex and quaternions and ...
- from Euclidean to non-Euclidean geometry
- from commutative to non-commutative geometry
- from field-based algebraic geometry to ring-based
and on and on.
PL also studies way way more than just the untyped lambda calculus, of course. What I'm bemoaning is the dearth of people who are trying to have a go at it from a semantics-driven perspective instead of taking PLs themselves as the starting point.
At least we've "moved on" from COBOL. Too bad we lost some of the nice features of COBOL when doing that.
@samth Some of the things on my list (granted: not all) turn out to be more elegant than what it 'replaces'.
It's never that you shouldn't study the previous thing, but rather that you might find that studying it as a special case is way more illuminating.
@JacquesC2 That was the reason I found it difficult to understand what denotational semantics is. The constructions are not meaningful.