I love my job, but I do find it frustrating how much of my job consists of repeatedly arguing obvious stuff like "phrenology is bad even if a computer does it" and "using probabilistic lossily-compressed text databases to guess next words is not sentience" and "understanding Merkle trees does not guarantee you'll make tons of money in speculative trading of unregulated currencies" to students and peers and occasionally university administrators.
Intro (borderline long?)
Realizing I never did an #introduction here, even though I joined a few months ago.
I'm Colin, a #pl professor at #Drexel University. Historically I've mostly worked on (capability-based) type systems, effect systems, and program verification. Lately I'm also involved in the "linguistics of formal specifications" (particularly writing formal specs in English), though I'm moving my grumbling about linguistics papers that misuse definitions from computability and logic over to my linguistics-focused alt @csgordon@lingo.lol.
I will however continue to talk here about #coffee, running #freebsd on laptops (bhyve & encrypted #ZFS home directories ftw), random OS (operating system) kernel stuff, and resisting the ever-present temptation to rewrite everything in #rust (mastodon, FreeBSD wifi drivers, research code that already works, code for classes I teach in Java, i3, etc).
Turnitin is back on their bullshit with a new claim: their AI text detector supposedly has only a 1% false-positive rate. Any amount of false positives will cause harm, and the 1% figure is a falsehood anyway. In the video they admit that English language learners see a higher false-positive rate, but give no details.
I've heard Turnitin will unilaterally roll this feature out to current customers in April. Don't let your school go along with this harm. Opt out, if they let you.
PLDI SRC submission deadline is in 2 days! https://pldi23.sigplan.org/track/pldi-2023-src
PLDI SRC has a two-track model that supports both in-person and remote presentations.
This year's PLDI is also part of FCRC (https://fcrc.acm.org/) which contains many different and interesting CS conferences.
The Proceedings of EVCS – the Eelco Visser Commemorative Symposium – have now been published online, open access:
Found this on another account. Please boost for reach: Fedora Linux is focusing on accessibility for the next five years. I'd love to see people with disabilities, very much including blind people, comment on this. If you're interested in Linux, or in free and open source software, your voices are valuable. I hope Fedora finds our voices valuable too.
https://discussion.fedoraproject.org/t/fedora-strategy-2028-focus-area-review-accessibility/46898/12
#accessibility #linux #fedora #foss
A decade ago Matija Pretnar and I introduced the first programming language with algebraic effects and handlers (https://www.eff-lang.org). One of the examples therein were cooperative threads.
Today I used #ocaml 5 effects and handlers for the first time to implement cooperative threads that actually do something useful: an interpreter for a programming language that computes with exact real numbers and has non-deterministic guarded case. (I'll post more about it when it's ready for public consumption, but the repo on GitHub is public if you feel like stalking me. It's joint work with Sewon Park and Alex Simpson.)
I can't wait to have a bit more time to throw in the concurrent features and finally put to work all these M1 cores that are just hanging around inside a silver box on my desk.
Many thanks go to the wonderful OCaml team for their heroic effort in making multi-core OCaml.
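For anyone curious what cooperative threads look like with OCaml 5 effects and handlers, here's a minimal round-robin scheduler sketch. This is not the project's actual code: the `Yield`/`Fork` effect names and the FIFO-queue design are my own illustrative assumptions, in the spirit of the classic eff example.

```ocaml
open Effect
open Effect.Deep

(* Two illustrative effects (my own names, not from the repo):
   Yield gives up the CPU; Fork starts a new fiber. *)
type _ Effect.t += Yield : unit Effect.t
type _ Effect.t += Fork : (unit -> unit) -> unit Effect.t

(* Round-robin scheduler: runnable continuations wait in a FIFO queue. *)
let run (main : unit -> unit) : unit =
  let q : (unit -> unit) Queue.t = Queue.create () in
  let enqueue f = Queue.push f q in
  let dequeue () = if not (Queue.is_empty q) then (Queue.pop q) () in
  let rec step f =
    match_with f ()
      { retc = (fun () -> dequeue ());   (* fiber finished: run the next one *)
        exnc = raise;
        effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Yield ->
              Some (fun (k : (a, unit) continuation) ->
                enqueue (fun () -> continue k ());  (* park current fiber *)
                dequeue ())
          | Fork g ->
              Some (fun (k : (a, unit) continuation) ->
                enqueue (fun () -> continue k ());  (* parent waits its turn *)
                step g)                             (* child runs first *)
          | _ -> None) }
  in
  step main

(* Usage: two fibers interleave their steps. *)
let trace = ref []

let task name n =
  for i = 1 to n do
    trace := (name ^ string_of_int i) :: !trace;
    perform Yield
  done

let () =
  run (fun () ->
    perform (Fork (fun () -> task "a" 2));
    task "b" 2)
  (* trace, in order: a1, b1, a2, b2 *)
```

The deep handler (`match_with`) means each resumed continuation keeps running under the handler that captured it, so the scheduler needs no extra plumbing per yield.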
More ChatGPT hype crap
So apparently Stanford HAI has put up a PR blurb about a preprint where they concluded that "AI" (i.e., ChatGPT) "may have already caught up to the persuasive capacity of everyday people, a critical benchmark of human-like performance."
No, AI has done no such thing. If you read their methodology section, they compared ChatGPT's output (a statistical interpolation of every extended argument on these issues that hundreds or more dedicated humans, many probably experts spending hours each, have written over the many years of training data ChatGPT was trained on) to an argument put together in a short time by some rando they recruited from a Mechanical Turk knockoff.
They've shown that the statistical average of every written argument for these issues is on par with an internet rando.
https://saturation.social/@Noupside/110052790316882007
https://doi.org/10.31219/osf.io/stakv
@zkat I'm reminded that I've seen AI bros taking offence at calling them Markov chains because it's dehumanising.
Sadly I can't find this foreword in English, as I can't find any English collected edition of the initial trilogy at all, only the German version I'm reading https://www.kobo.com/us/en/ebook/die-neuromancer-trilogie-1
All the English collections I can find include the fourth book
If you want a cool example of language change over time, or just want to feel old (it's a 2-for-1 deal): I'm reading a German translation of the first 3 books of the Sprawl series (apparently published before the 4th book came out), and in the foreword Neil Gaiman points out how the meaning of the first sentence of Neuromancer has changed drastically. It refers to the color of a TV screen tuned to a dead channel. To most people around my age or older, that's obviously fluctuating gray-and-white static; to a generation younger it's blue; and if my son reads it when he's older, it'll be black.
Added my first animations to Creative Scala:
https://www.creativescala.org/creative-scala/polygons/02-polar.html
Not sure I'm 100% happy with them, but it's a start.
Take that back: almost perfect. There are small groups of LLM researchers who are realistic about the limitations and are also interested in using LLMs for purposes they're fit for (or specifically trying to make them fit for specific well-scoped purposes), where one doesn't need to assume they "understand" meaning in order to get benefits. Those people are usually pretty willing to talk about ethics and have coherent, well-reasoned thoughts on the matter.
The other 95% though, are well-captured by the linked post.
Truly the perfect analogy
https://fediscience.org/@MarcSchulder/110023486566798654
MSFT lays off its responsible AI team
The thing that strikes me most about this story from @zoeschiffer and @caseynewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers"
https://www.platformer.news/p/microsoft-just-laid-off-one-of-its
From the beamer manual:
"Till created the first version of beamer for his PhD defense presentation in February 2003. A month later, he put the package on ctan at the request of some colleagues. After that, things somehow got out of hand."
I don't know why, but I suspect things got out of hand somewhat earlier than the quote claims 🤷
This will be the most *controversial* and *divisive* poll on Mastodon and the fediverse >:3
Do you like your books hardback or paperback?
♻️boost for more votes
#novels #lightnovels @bookstodon #bookstodon #books #reading #controversial #poll #polls
Stolen from @raccoonformality
PL professor, kernel hacker, aspiring linguist (syntax & compositional semantics)