I'm as worried as the next cranky old nerd about the oncoming tidal wave of neural-net spamspew, and I am deeply skeptical about the "LLMs change the world overnight" bandwagon.

But a part of me is willing to bite at "a compressed-summary access mode for the same corpus of information as exists on the web, driven by linguistic prompts of the asker". Summary and synthesis are extremely valuable services -- even if contaminated by errors and lies -- as is the ability to request "more detail on this bit I don't understand yet".

(This latter one is arguably what a hyperlink is supposed to be, but I wonder what the interaction ratio is these days between a user following a manually-crafted hyperlink vs. highlighting a term and asking a search engine for "more detail on this".)

((There is even an argument to be made that at a fairly deep information-theoretic level, the very meaning of knowledge-work is at least in part performing such summarization. Cf. Chaitin's argument that a scientific theory's value is in its ability to faithfully compress data about the real world into a much smaller model. I'm curious what the bit-size ratio is between one of these headline-grabbing LLMs and its training corpus.))

@graydon For things like Stable Diffusion, the model-size-to-corpus ratio is on the order of a few bytes per image... and it's faithful enough to upset lots of people
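
(For concreteness, a rough back-of-envelope in the same spirit, treating the publicly reported figures for Stable Diffusion v1 (roughly a billion parameters trained on roughly two billion LAION images) and GPT-3 (roughly 175B parameters trained on roughly 300B tokens) as loose assumptions rather than exact measurements:

```python
# Rough back-of-envelope on the model-size / corpus-size ratio.
# All figures are approximate publicly reported numbers, not exact measurements.

# Stable Diffusion v1: roughly 1e9 parameters, trained on roughly 2e9 LAION images.
sd_params = 1e9
sd_bytes = sd_params * 2          # ~2 GB at 16-bit precision
sd_images = 2e9
print(f"Stable Diffusion: ~{sd_bytes / sd_images:.1f} bytes of model per training image")

# GPT-3: roughly 175e9 parameters, trained on roughly 300e9 tokens (~4 bytes of text per token).
llm_params = 175e9
llm_bytes = llm_params * 2        # ~350 GB at 16-bit precision
corpus_bytes = 300e9 * 4          # ~1.2 TB of raw text
print(f"GPT-3: model is ~{llm_bytes / corpus_bytes:.2f}x the size of its training text")
```

By this crude measure the image model squeezes each training image into about a byte, while the big text model comes out only a few times smaller than its training text; the ratios shift quite a bit depending on which corpus and precision figures you plug in.)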

@graydon Not information theory as such, but an interesting question is where "producing a *useful* summary" falls on Bloom's taxonomy (maybe with more suitable category names than "understand"), and whether our answer depends on what we consider useful in a given context, how much scrutiny we are willing to apply, and how much slack we are willing to give.

@theincredibleholk Fun fact: while I was bringing up rustboot I briefly shared an office with a young Patrick Collison, who was at the time busy trying to cram an offline dump of Wikipedia into this newly released "iPhone" thing: radar.oreilly.com/2009/07/osco

@graydon LLMs differ from your valuable compressing summarizer in two important ways.

1. Summary and compression stop at the edges of the thing being summarized and compressed. LLMs aren't summarizers, they're generators. If LLMs don't have the information to fill a request, they'll probably make shit up.

2. Summary and compression may not care about the truth of the input, but they do care a lot about the precision with which their output reproduces the input. The LLM's reward function is neither truth nor accuracy: it is verisimilitude, it is truthiness. LLMs output any bullshit that a dumb computer can't distinguish from the real thing.

@gparker Fair enough! I was certainly surprised by all the truthy-but-wrong information ChatGPT produced when I asked it for a bio of myself. Maybe it's only valuable for tasks where mere truthiness is sufficient.

@gparker (I should also note that what I described, and what we are all experiencing when interacting with this tool, isn't just a one-shot "compressing summarizer" like gzip or "a blurry JPEG of the web", but one that you can _interrogate interactively_, expanding portions of interest: a quasi-conversational semantic zoom on an "image" you could never actually apprehend the entirety of at once.

IOW I'm discussing it in terms of a dynamic interaction modality as much as a static performance of a single compression task. It doesn't show you a compressed version of the whole web; it shows you a decompressed fragment at a time, in the context of your interaction prompts (in some bewilderingly high-dimensional sparse space). That is somewhat unique, sorta halfway between browsing static summaries returned by search engines and having a conversation with an expert willing to tailor its answers to your requests.

And yes, obviously one that readily hallucinates when at a loss for facts. It will be interesting to know if they can fix that — if the function can ever be made to better differentiate strong signals from weak, and answer “I don’t know enough to say” when it should.)
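
(A toy sketch of what "differentiating strong signals from weak" might look like, assuming one could get per-token log-probabilities out of the model; the threshold, the helper name, and the example numbers are all made up for illustration, and it is an open question whether anything this naive would track truth rather than mere fluency:

```python
import math

def answer_or_abstain(tokens_with_logprobs, threshold=0.5):
    """Toy abstention policy: emit the answer only if the model's average
    per-token probability clears a confidence threshold, otherwise decline.

    `tokens_with_logprobs` is a list of (token, logprob) pairs, however they
    were obtained; the 0.5 threshold is arbitrary and would need tuning.
    """
    if not tokens_with_logprobs:
        return "I don't know enough to say."
    avg_logprob = sum(lp for _, lp in tokens_with_logprobs) / len(tokens_with_logprobs)
    confidence = math.exp(avg_logprob)   # geometric-mean token probability
    if confidence < threshold:
        return "I don't know enough to say."
    return "".join(tok for tok, _ in tokens_with_logprobs)

# Hypothetical example: a confidently produced answer vs. a shaky one.
strong = [("2", -0.05), (" +", -0.1), (" 2", -0.05), (" =", -0.02), (" 4", -0.01)]
weak   = [("The", -1.8), (" answer", -0.9), (" is", -0.3), (" 42", -3.5)]
print(answer_or_abstain(strong))   # -> "2 + 2 = 4"
print(answer_or_abstain(weak))     # -> "I don't know enough to say."
```

Whether a confidence score like this could ever be made to mean "I actually know this" rather than "this sounds like something people write" is exactly the question above.)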
