Pinned toot

Hi, I'm Joel! I like programming, woodworking, math, bushcraft, literature, history, all sorts of things!

Mastodon should support a feature where, if enough people on your server downvote the toots on the "federated" timeline that your account is responsible for, you get kicked off the server

I highly recommend doing it like this, especially in a job interview etc.: it's always unambiguous which one I meant, and people like it (IME)


When people see how I hand write my curly braces, they always laugh and tell me they like them.

I’m just lazy

Sometimes, I really wish I could go back to school. Perhaps pursue a PhD in programming languages. But it’s just not feasible.

The problem with Shen for me is that I can't tell if its ideas are good or bad, really. They're just so, so different.

It's really amazing. I often think that we are so lucky to have stumbled into such a great neighbourhood in the Netherlands.

Then I cycle through other neighbourhoods and realise it wasn't luck at all... everywhere has amazing architecture, beautiful designs (albeit often still with many cars).

The Book of Shen, re: why other languages haven't implemented a type system which is basically just the…

I'm curious, how do you prefer to consume information online?

Would be amazing if you could boost for improved statistical rigour and all that jazz. #poll #content #information

Shen is *such* a mind-blowing language. Its type system is so different that it's almost hard to consider it a type system, or at least it is *very* different from every other one I've ever seen. TBOS covers this briefly (here: shenlanguage.org/TBOS/tbos_228), giving rationale as to why other languages don't do this; I can't comment on that as I don't know the history, but it is very interesting.

Has anyone else used it? I'd love to have some people to geek out about it with.

Can we make the local timeline public on types.pl? I'm not sure if it's intentionally off, but I wonder what others think about it.

My thoughts on effects in the type system, as a software engineer trying to make bug-free software in practice. I'd really appreciate your thoughts!

The TL;DR is that, IME, the only real way to avoid having to read an absurd amount of code while refactoring is to have effects clearly spelled out in the types of the functions I am working with.

The only way I have found to write relatively bug-free code is that, when editing a function, I check all of the functions it calls to ensure I understand what is happening, and I check all of the callers of the function to ensure their logic still holds. This is extremely time-consuming and still error-prone. But, by having side effects explicit in the types of what I am working with, I am much better able to do this analysis.

First, some personal background. I fell in love with programming before college. At first I was just fascinated with it in general. I remember when, before my night job unloading trucks for UPS (when I was in community college), I was sitting in my car because I got there early, and I read about how `fork` worked from "Unix Network Programming"; it made me laugh until my eyes teared up, it was somehow so elegant and beautiful at the same time.

In college I discovered functional programming; it became so, so clear to me that it was the *right* way to solve so many things, and especially at that time it was clear that "algorithmic" challenges were best described in FP. Look at a non-trivial algorithm written in FP style vs imperative style and it becomes apparent which is better. However, at that time, I was not convinced that effects in the type system/pure functional programming were worth it, good, etc., really in any way. I mostly went with what is now called "functional core, imperative shell".
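
A small, hand-picked illustration of what I mean (my example, not one from back then): run-length encoding in Haskell reads almost like its specification, whereas the imperative version is mostly loop and index bookkeeping.

```haskell
module RunLength where

import Data.List (group)

-- "Group runs of equal characters, then record each run's head and length."
runLength :: String -> [(Char, Int)]
runLength = map (\g -> (head g, length g)) . group

-- runLength "aaabcc" == [('a',3),('b',1),('c',2)]
```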

As I gained experience, however, I came up against several problems with mainstream programming again and again. The main problem: I had to read all, all, all of the code, almost all of the time, in order to prevent bugs.

Personally I've always taken it that any time I write a bug which makes it past my own purview, I've made a (non-insignificant) mistake, and try my best to figure out how to avoid making the mistake next time. In general my bosses have told me that I write code with very few bugs, but I am also a lot slower than other people at programming. I think this largely comes from being meticulous, as careful as I can be.

I have found that it is essential to read the (application) code of the functions I'm calling, transitively, all the way down until I am satisfied that I know what is happening when I call some code. Why? Because I've found that basically nobody actually knows how their systems work. They have ideas about how they work, and they may explain it to you, but very frequently there is some missing component which is essential to the story. So, when someone tells me something about the behavior of the system, I always verify that that is the case.

Unsurprisingly, this takes a long time. I think most people don't bother with doing this kind of thing, but I have found that it pays off time and again.

This is where effects in the type system pay off: I can tell which of the functions being called might have surprising side effects that I need to comprehend. This makes it *much* easier to understand the overall behavior of the system, and makes it much faster to write code that works.
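
To make that concrete, here's a minimal Haskell sketch using mtl-style constraints (my own illustration; these posts don't name a language or library): the effects a function may perform are visible in its signature, so I know which calls can only compute, which can touch state, and which can reach the outside world.

```haskell
{-# LANGUAGE FlexibleContexts #-}
module EffectsSketch where

import Control.Monad.State (MonadState, get, put)
import Control.Monad.IO.Class (MonadIO, liftIO)

-- Pure: the type guarantees this cannot perform any effect at all.
discount :: Double -> Double -> Double
discount rate price = price * (1 - rate)

-- State only: callers know this may touch the counter, and nothing else.
nextInvoiceId :: MonadState Int m => m Int
nextInvoiceId = do
  n <- get
  put (n + 1)
  pure n

-- State *and* IO: the signature itself warns that this one also logs.
recordSale :: (MonadState Int m, MonadIO m) => Double -> m Int
recordSale price = do
  invoice <- nextInvoiceId
  liftIO (putStrLn ("sold for " ++ show price))
  pure invoice
```

When I'm reading a call to `recordSale`, its constraints already tell me what kinds of surprises to go looking for.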

The second practice I have that is somewhat uncommon is that when I edit a function, *especially* when I am making a subtle semantic change, I know that I need to check all of the *callers* of that function elsewhere. In mainstream OO, a function is frequently called for its "effects", and very frequently the specific effects of a function are depended upon by its callers. This is all fine, but the mistake I see people make is changing a function *without* checking the logic of the calling functions to make sure that those functions still make sense.

Of course, this, too, is much easier with effects in the types. I can much more easily tell which effects of the function I am working on might be depended upon, and having effects in the types also makes it easier to scan the *callers* for the important parts.
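
A hedged sketch of that caller-checking step, again in Haskell and with names I've made up for illustration: when a semantic change also adds an effect, the changed type means the compiler points me at every caller that still needs to be re-checked.

```haskell
{-# LANGUAGE FlexibleContexts #-}
module CallerCheck where

import Control.Monad.Writer (MonadWriter, tell, runWriter)

newtype Order = Order { total :: Double }

-- Before, this was pure:  validate :: Order -> Either String Order
-- After adding an audit-log effect, the type changes, so every existing
-- caller stops compiling until it decides how to handle the new effect.
validate :: MonadWriter [String] m => Order -> m (Either String Order)
validate o
  | total o < 0 = do
      tell ["rejected: negative total"]
      pure (Left "negative total")
  | otherwise = do
      tell ["accepted"]
      pure (Right o)

-- One caller, updated to surface the log instead of silently ignoring it.
checkOrder :: Order -> (Either String Order, [String])
checkOrder o = runWriter (validate o)
```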

Thoughts? I haven't ever heard someone describe this process of "check all of the functions my function calls to make sure I understand it, and check the callers of my function to make sure I understand how it is being used", but I find it critical to writing reliable code.

Finally came up with what I think is a good explanation: in a system without typed effects, a type signature tells us how a function *interfaces*. But, if the type contains the effects, it also tells us a significant amount about *what it does*.

This is why a language without effects in the type system may be "type safe", but ultimately you really have no idea what it's going to do.
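
A tiny contrast, in Haskell (my example): both functions have the same data-in/data-out "interface" from the caller's point of view, but only the second type admits that something beyond computation may happen when you call it.

```haskell
module InterfaceVsBehaviour where

-- The type says everything there is to say: this can only compute a String.
greetPure :: String -> String
greetPure name = "hello, " ++ name

-- Same "interface" for the caller, but the IO in the type is the part that
-- tells you *what it might do*: log, read files, talk to the network, ...
greetIO :: String -> IO String
greetIO name = do
  putStrLn ("greeting " ++ name)
  pure ("hello, " ++ name)
```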

@tao every person who historically confused correlation with causation has since ended up dead. I think we all know what that means.

I'm on the lookout for a complete, simple implementation of SML; basically I want one that is as simple and clear as can be, while still implementing the entire language. If anyone is aware of such a thing, I'd love to know.

Company blog post announcing #ghc #wasm backend: tweag.io/blog/2022-11-22-wasm-
tl;dr: You can already compile #haskell programs into self-contained #wasm modules supported by any engine that supports WASI. With even more features on the horizon!
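
For anyone who wants to kick the tyres, this is the kind of minimal program I'd try first; the compiler and runner invocations in the comments are my assumptions based on the linked post, not commands copied from it.

```haskell
-- Hello.hs
--
-- Rough invocation I'd expect (toolchain names assumed from the linked post):
--   wasm32-wasi-ghc Hello.hs -o hello.wasm   -- the cross GHC described there
--   wasmtime ./hello.wasm                    -- or any other WASI-capable engine
module Main where

main :: IO ()
main = putStrLn "hello from Haskell compiled to wasm"
```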
