@agdakx Imagine that we needed new set theory axioms to prove the results for each new field (number theory, analysis, group theory, etc.) and that we would regularly discover combinations of those axioms to be contradictory. Why isn't the world like that? We've only had to tweak them a few times, and we only discovered natural independent statements a few times.
Roomba testers feel misled after intimate images ended up on Facebook | MIT Technology Review
https://www.technologyreview.com/2023/01/10/1066500/roomba-irobot-robot-vacuum-beta-product-testers-consent-agreement-misled/
Playing around with #OpenAI's #GPT3 text generator led me to perhaps the creepiest behavior yet. As @ct_bergstrom noted, the AI bots are able to cite the sources of their own writing.
I decided to see if it could function as a plagiarism detector. But when I put a few snippets of my own writing in, it found several uncited sources. (Attached.)
Aha! Someone must have plagiarized me! Or so I thought. But the truth was much stranger....
Surely this can't be true... #AdventOfCode as a virus scanner?
https://www.reddit.com/r/adventofcode/comments/zb98pn/2022_day_3_something_weird_with_copypasting/
An Incredible Day In Internet History
It started with the Twitter lockout. 10,000 new users per hour. A QUARTER MILLION people migrated to Mastodon in one day. The servers struggled. Remarkably, admins all over the world built up capacity in real time. New users were patient. The system held.
It's running better now. There will be more hard days ahead, but people-powered social media has arrived.
My compilers class just wrapped up an assignment in which students tried to break 133 different compilers.
https://types.pl/@dvanhorn/109305895477147646
OK first, this was fun and I recommend it.
The assignment was to write input programs that would be run on the collection of compilers submitted from the *previous* assignment. If results differed from a reference interpreter, that compiler was considered broken. The goal was to break as many things as possible.
The learning objective here was to learn how to read an informal spec and write test cases that are likely to exercise bugs. I think that worked. For many students, it was clear they hadn't done this kind of task before and didn't really know where to start, which surprised me.
Students very quickly (within hours) found overspecifications in the behavior of the interpreter and used them to "win", but I was able to adjust the interpreter, and after the first day that kind of exploit went away.
Many students did what I expected: they wrote small tests based on the assignment spec that broke a good chunk of the compilers. With some effort, they could get ~70-80 of the 133 compilers this way.
A few students wrote tests *not* guided by the assignment spec, but instead just wrote small examples drawn from the whole language. They found bugs in the starter code that was given to students, and thereby knocked out all 133 compilers.
One student found a bug in the parser that could be triggered with just two characters, breaking all the compilers!
Another student found a bug in the run-time system, which read some memory as a uint when it should have been an int.
I look forward to refining and iterating on this in the future.
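The core mechanism of the assignment is differential testing: run each compiler and a trusted reference interpreter on the same programs, and flag any compiler whose output disagrees. A minimal sketch in Python, where compilers and the reference are modeled as plain functions from source program to output (all names here are hypothetical, not the actual course infrastructure):

```python
def broken_compilers(compilers, reference, test_programs):
    """Differential testing against a reference interpreter.

    compilers: dict mapping a compiler's name to a function that
               compiles and runs a source program, returning its output
    reference: trusted function from source program to expected output
    Returns the set of compiler names that disagree with the reference
    on at least one test program.
    """
    broken = set()
    for program in test_programs:
        expected = reference(program)
        for name, compile_and_run in compilers.items():
            if compile_and_run(program) != expected:
                broken.add(name)
    return broken
```

In the real assignment each "function" would shell out to a student-submitted compiler binary, but the scoring logic is just this loop: a compiler is considered broken the moment any input separates it from the reference.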
As more academics reach Fedi, please PLEASE consider not doing research on users here without explicit opt-in consent
This isn't a zoo
It's not just condescending for you to treat us that way, it's also against a lot of instances' terms of use
See "Use of Scholar Social for research" at the following link for an example:
Niki and Wouter interviewed me on the Haskell Interlude podcast, the episode is out!
I am an undergraduate student at Charles University. I am interested in PL overall, and I like Haskell.