today sam actually used the library that i've been working on for... around two and a half months now? and he just repeatedly said "this is awesome"

like. it's weird to me that i am actively getting complimented on this. i have felt for the entire semester that i haven't been putting enough work into this/have been putting this off/etc
it's. ??

it's like
92 lines of C tho so like, the error surface is low compared to the rest of the runtime (also written in C)

so based on that i can estimate the rate of further segfaults is probably low. though it's also a small sample size, so the error bars are probably big

thing is
there was 1 segfault and that wasn't even due to the actual GC, it was due to not zeroing stack frames before using them. because of this i added some extra checks to the GC to make sure it's not dereferencing obviously bogus addresses, but having the compiled code actually zero the stack frames also reduced the false positive rate of the GC runs lol

(by any% it means i've eliminated the segged faults so far but there could be more)

now that i have gotten familiar with writing basic noob ocaml it's really time to go see how jane street does it and be like, a

i finally belong on this fucking masto instance :dragnuwu:
i wrote a shitty H-M type inference. i did it,,,

why do we need to prove things? just look and see if they're right or not

literally nothing 

:dragnsleep: initialism
:dragnuwu: mu-acronym

wait your types are inferred?
th a t is so cool.....

recursive types work. lambda works. i can literally run type inference on the church-encoded fibonacci test case i wrote
list implementations using tuples infer with 1 mandatory type annotation whenever you create a recursive tuple
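
(for reference, church encoding in plain OCaml looks something like this — illustrative of what that kind of test exercises, not the actual test case:)

```ocaml
(* Church numerals: a number n is "apply f n times to x" *)
let zero = fun _f x -> x
let succ n = fun f x -> f (n f x)
let add a b = fun f x -> a f (b f x)

(* collapse a church numeral back to an int by counting applications *)
let to_int n = n (fun k -> k + 1) 0
```

inferring types for terms like these is a decent workout because everything is a higher-order function.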

literally any problem to solve in ocaml:

me: [pawb slam] List.fold_left_map
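
(`List.fold_left_map` — in the stdlib since OCaml 4.11 — maps a list while threading an accumulator left to right, e.g.:)

```ocaml
(* map while carrying state: here, running prefix sums *)
let total, prefix_sums =
  List.fold_left_map (fun acc x -> let s = acc + x in (s, s)) 0 [1; 2; 3; 4]
(* total = 10, prefix_sums = [1; 3; 6; 10] *)
```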


isorecursive is when it's a disk image
equirecursive is when it's a horse

the whole classical H-M implementation still feels like undecipherable magic tho
like you have substitution (of a set of type variables to replacement types, within a type), and you build that for types, schemas, and (binding) environments. you also make a compose function for substitutions
then you also have unify, which comes up with a subst that makes 2 types match, or errors if there's a problem
then there's instantiate, which turns a schema into a type (by replacing the schema vars in the type with new ones), and generalize, which turns a type (with free variables in it) into a schema
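
(sketched in OCaml with made-up names, those pieces come out something like this — a minimal textbook-style shape, not anyone's actual implementation; generalize here ignores env-bound vars for brevity:)

```ocaml
type ty =
  | TVar of string
  | TInt
  | TArrow of ty * ty

(* a schema quantifies some type variables over a type *)
type schema = Forall of string list * ty

module Subst = Map.Make (String)
type subst = ty Subst.t

(* apply a substitution within a type *)
let rec apply (s : subst) : ty -> ty = function
  | TVar v -> (match Subst.find_opt v s with Some t -> t | None -> TVar v)
  | TInt -> TInt
  | TArrow (a, b) -> TArrow (apply s a, apply s b)

(* compose s2 s1 = "do s1, then s2":
   apply (compose s2 s1) t = apply s2 (apply s1 t) *)
let compose (s2 : subst) (s1 : subst) : subst =
  Subst.union (fun _ t _ -> Some t) (Subst.map (apply s2) s1) s2

let counter = ref 0
let fresh () = incr counter; TVar (Printf.sprintf "t%d" !counter)

(* instantiate: replace a schema's bound vars with fresh ones *)
let instantiate (Forall (vars, t)) =
  let s = List.fold_left (fun s v -> Subst.add v (fresh ()) s) Subst.empty vars in
  apply s t

let rec free_vars = function
  | TVar v -> [ v ]
  | TInt -> []
  | TArrow (a, b) -> free_vars a @ free_vars b

(* generalize: quantify a type's free vars into a schema
   (a real version subtracts the env's free vars first) *)
let generalize (t : ty) : schema =
  Forall (List.sort_uniq compare (free_vars t), t)
```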

now with all these pieces you do the inference
your inference function takes a binding environment (name to schema) and an AST node and returns an inferred type, along with a list of substitutions necessary for that inferred type to be true
but the necessary order of infer, unify, instantiate, generalize, compose for each AST expression is fucky
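
(the ordering is easiest to see written out. a self-contained textbook-style sketch of the application case — monomorphic env, no let-generalization, names made up: infer the left, thread that subst through the env for the right, then unify, then compose everything:)

```ocaml
type ty = TVar of int | TInt | TArrow of ty * ty

module S = Map.Make (Int)
type subst = ty S.t

let rec apply s = function
  | TVar v -> (match S.find_opt v s with Some t -> t | None -> TVar v)
  | TInt -> TInt
  | TArrow (a, b) -> TArrow (apply s a, apply s b)

let compose s2 s1 = S.union (fun _ t _ -> Some t) (S.map (apply s2) s1) s2

let rec occurs v = function
  | TVar w -> v = w
  | TInt -> false
  | TArrow (a, b) -> occurs v a || occurs v b

(* unify: come up with a subst that makes 2 types match, or error *)
let rec unify t1 t2 =
  match (t1, t2) with
  | TInt, TInt -> S.empty
  | TVar v, t | t, TVar v ->
      if t = TVar v then S.empty
      else if occurs v t then failwith "occurs check"
      else S.singleton v t
  | TArrow (a1, b1), TArrow (a2, b2) ->
      let s1 = unify a1 a2 in
      let s2 = unify (apply s1 b1) (apply s1 b2) in
      compose s2 s1
  | _ -> failwith "type mismatch"

type expr = Var of string | IntLit of int | Lam of string * expr | App of expr * expr

let counter = ref 0
let fresh () = incr counter; TVar !counter

(* infer returns (substitutions needed, inferred type).
   Var raises Not_found if unbound. *)
let rec infer env = function
  | IntLit _ -> (S.empty, TInt)
  | Var x -> (S.empty, List.assoc x env)
  | Lam (x, body) ->
      let a = fresh () in
      let s, t = infer ((x, a) :: env) body in
      (s, TArrow (apply s a, t))
  | App (f, arg) ->
      (* 1. infer the function *)
      let s1, tf = infer env f in
      (* 2. push s1 into the env before inferring the argument *)
      let env1 = List.map (fun (x, t) -> (x, apply s1 t)) env in
      let s2, targ = infer env1 arg in
      (* 3. unify the (substituted) function type against targ -> fresh *)
      let r = fresh () in
      let s3 = unify (apply s2 tf) (TArrow (targ, r)) in
      (* 4. compose in the right order, newest substitution outermost *)
      (compose s3 (compose s2 s1), apply s3 r)
```

getting steps 2–4 in the wrong order is exactly the kind of bug that type-checks and then infers garbage.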

