Found this on another account. Please boost for reach: Fedora Linux is focusing on accessibility for the next five years. I'd love to see people with disabilities, very much including blind people, comment on this. If you're interested in Linux, or free and open source software, your voices are valuable. I hope Fedora finds our voices valuable too.
https://discussion.fedoraproject.org/t/fedora-strategy-2028-focus-area-review-accessibility/46898/12
#accessibility #linux #fedora #foss
Seen on the side of a vending machine in #Reykjavik, #Iceland
(photo by @akiva )
Rant about how "The problem isn’t that kids are using AI to write homework assignments. The problem is we’re assigning kids problems AI can do" completely misses the point of educational exercises
I keep seeing a lot of stuff floating around saying things like "The problem isn’t that kids are using AI to write homework assignments. The problem is we’re assigning kids problems AI can do" (seen this morning) and I'm sorry but I find this take incredibly naive. I'm no fan of boring assignments where students do some activity just because and nobody seriously evaluates them or gives them feedback. That's a real problem, but not one this catchy suggestion addresses.
We still teach students addition, subtraction, etc., despite having had calculators for decades. Why? Because if you only ever punch numbers into a calculator, you don't actually understand numbers! We still teach students how to implement linked lists not because we need more linked list implementations, but because it's a stepping stone to understanding data structures in general. We don't ask students to write essays because we care about having more text sequences that take the form of an essay. We ask them to write essays so they can practice organizing and stating their thoughts, opinions, and arguments clearly in a form that other humans can understand (to practice clear communication!).
Replace "AI" in the quote above with "online outlets that do your homework for a price." Or replace it with "parents." We ask for these things not because the outputs themselves are generally important, but because we care about the learning outcomes that arise from a student producing them; learning how to produce these outputs is how we teach students to think critically, or to understand numbers or data structures. Yes, this can be (and often is) done poorly, and that needs fixing, 100%. But asking students to do things others already know how to do is a critical pedagogical tool for building understanding.
Never mind that these lines of argument give the "AI" too much credit. ChatGPT can't actually do math, for example; it has just memorized millions of worked examples of math problems. That's why it screws up if you ask it to work with really large numbers: it hasn't seen those in its training data, so you get the output of smoothing over an uneven probability distribution.
EDIT: I want to clarify something important. I'm specifically arguing against the idea that just because ChatGPT-like systems can "do" an assignment, we should stop using that assignment for teaching, which is nonsense I've been hearing a lot lately. Because I wasn't clear, some people have read this post as implicitly defending business as usual. That's not what I mean. The assignments I gave as examples are things that *can* be used for excellent learning, but any style of assignment can be given or graded thoughtlessly in a way that leads to no learning at all, and I don't care to preserve those uses. So please do reevaluate assignments and toss the ones that don't work; just make sure you toss the ones that actually don't lead to learning (there were plenty of valid reasons to do this even before ChatGPT), and keep the ones that do, even if a few students might use automated systems for them. We already have enough trouble with instructors more concerned with cheating detection than with learning outcomes.
Here is your must-read article for the day, a profile of @emilymbender, and her efforts to deflate the ridiculous hype around large language models such as ChatGPT.
It's also about the people who are behind that hype, and about what their way of thinking has the potential to do to us.
It's worth reading all the way to the end.
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Our super anxious kitty Gort basically lives for three things, one of which is getting "counter love" in the bathroom while we're brushing our teeth, washing our faces, or whatever. Tragically, however, he is terrified of the slightly different sound our sink makes since we had a new drain installed.
The current debate in our household is whether to power through so Gort gets used to it and we can continue as before, or whether we switch to brushing our teeth in the kitchen and the bathroom becomes exclusively a Gort petting station.
These "Floppy Disk Costumes" for SD cards by @charlyn are perfect! 🤩
Get yours here: https://www.etsy.com/listing/1406341370
THE CASE FOR SHUNNING
So there’s this comic strip called Dilbert that a lot of people used to think was funny—certainly enough to sustain an enormously successful career in the funny pages for its creator, whose name is Scott Adams.
I read Dilbert occasionally back in the day—that is, in the 1990s. I thought it was pretty funny, I think. It’s hard to remember.
There is an ongoing news cycle, started by ZDNET, about Linux 6.2 being the first kernel to support the M1. That article is misleading and borderline false.
You will not be able to run Ubuntu or any other standard distro with 6.2 on any M1 Mac. Please don't get your hopes up.
We are continuously upstreaming kernel features, and 6.2 notably adds device trees and basic boot support for M1 Pro/Max/Ultra machines.
However, there is still a long road before upstream kernels are usable on laptops. There is no trackpad/keyboard support upstream yet.
While you can boot an upstream 6.2 kernel on desktops (M1 Mac Mini, M1 Max/Ultra Mac Studio) and do useful things with it, that is only the case for 16K page size kernel builds.
No generic ARM64 distro ships 16K kernels today, to our knowledge.
Our goal is to upstream everything, but that doesn't mean distros instantly get Apple Silicon support.
As with many other platforms, there is some integration work required. Distros need to package our userspace tooling and, at this time, offer 16K kernels.
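For context, you can check which page size your currently running kernel uses. This quick check (my own illustration, not part of the Asahi tooling) reports 16384 on a 16K build and 4096 on today's generic distro kernels:

```python
import os

# Query the running kernel's page size via POSIX sysconf. Generic ARM64
# distro kernels today are 4K builds and report 4096; the usable upstream
# Apple Silicon configuration described above is a 16K build (16384).
page_size = os.sysconf("SC_PAGE_SIZE")
print(page_size)
```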
In the future, once 4K kernel builds are somewhat usable, you can expect zero-integration distros to work on these machines to some degree (i.e. some hardware will work, but not all, or only partially).
This should be sufficient to add a third-party repo with the integration packages.
But for out-of-the-box hardware support, distros will need to work with us to get everything right.
We are already working with some, and we expect to announce official Apple Silicon support for a mainstream distro in the near future. Just not quite yet!
1/ We're delighted to announce the next release of DCIC. This is a major revision that's been a long time in the making.
The quoted tweet thread summarizes the book; the rest of this thread outlines what's new:
https://twitter.com/ShriramKMurthi/status/1429181263487844354
I am excited to announce that our paper "Back to Direct Style: Typed and Tight" has been accepted at OOPSLA'23.
We present a typed translation, which allows compilers to go to CPS, perform optimizations, and go back to direct-style (DS).
The translation...
- preserves well-typedness
- preserves semantics
- is a syntactic right-inverse of the CPS translation (that is, going to CPS and back is the identity)
- is a left-inverse of the CPS translation, if DS programs don't use control effects
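For intuition, here is a minimal, untyped sketch (my own illustration, not the paper's typed translation) of what the right-inverse property looks like for plain first-order functions: CPS-transforming and then translating back to direct style recovers the original behavior.

```python
def to_cps(f):
    """CPS-transform a one-argument function: the result takes an
    explicit continuation k instead of returning its value directly."""
    return lambda x, k: k(f(x))

def from_cps(g):
    """Direct-style back-translation: run the CPS function with the
    identity continuation to get the value back directly."""
    return lambda x: g(x, lambda v: v)

inc = lambda n: n + 1

# Right-inverse property, pointwise: going to CPS and back is the identity.
assert from_cps(to_cps(inc))(41) == 42
```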
https://se.cs.uni-tuebingen.de/publications/mueller23continuation/
Someone played a game of chess between the Stockfish engine and ChatGPT. The result is hilarious and an insightful comparison of very different kinds of AI.
Here's the post with an animation of the game: https://www.reddit.com/r/AnarchyChess/comments/10ydnbb/i_placed_stockfish_white_against_chatgpt_black
Here's the comment from the OP with a link to the ChatGPT transcript: https://www.reddit.com/r/AnarchyChess/comments/10ydnbb/comment/j7xh4qx
I'm as worried as the next cranky old nerd about the oncoming tidal wave of neural-net spamspew, and I am deeply skeptical about the "LLMs change the world overnight" bandwagon.
But a part of me is willing to bite at "a compressed-summary access mode for the same corpus of information as exists on the web, driven by linguistic prompts of the asker". Summary and synthesis are extremely valuable services -- even if contaminated by errors and lies -- as is the ability to request "more detail on this bit I don't understand yet".
(This latter one is arguably what a hyperlink is supposed to be, but I wonder what the interaction ratio is these days between a user following a manually-crafted hyperlink vs. highlighting a term and asking a search engine for "more detail on this".)
Programming languages! Rust, Haskell, types, DSLs, language design, human factors, ... Compiler hacker at https://EC.ai.
Into other nerdy things like game theory, linguistics, board games, and sci-fi/fantasy books. Also into (arguably) less nerdy things like playing tennis, gardening, traveling, kayak camping, lefty politics, local beer, and following my hometown sportsball teams.
Based in beautiful Corvallis, Oregon, USA.