I’m telling you: if people don’t build a robust anti-“AGI” movement, pretty soon there will be very few resources in this field for any research that is not something-something-AGI. The #TESCREAL bundle has NSF calls with more money than I’ve ever seen, is all over conferences, all over universities, and of course in almost all of the big tech company labs, especially after this last Google Brain + DeepMind merger, with James Manyika in charge of Google Research.

With thoughts from @TaliaRinger !
Some Glimpse AGI in ChatGPT. Others Call It a Mirage | WIRED wired.com/story/chatgpt-agi-in

After two weeks of letting myself try, fail, ask silly questions, and repeat, I finally feel very comfortable using Cubical Agda. Like actually. Wow

"Baldur: Whole-Proof Generation and Repair with Large Language Models"

arxiv.org/pdf/2303.04910.pdf

ELI5: Kan extensions
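My attempt at a concrete (if not quite five-year-old-friendly) answer: `Ran g h` is the best functor you can precompose with `g` to approximate `h` from the right. Below is the standard Haskell encoding, as in Edward Kmett’s `kan-extensions` package, plus a tiny sanity check that extending along the identity functor changes nothing:

```haskell
{-# LANGUAGE RankNTypes #-}

import Data.Functor.Identity (Identity (..))

-- Right Kan extension of h along g: the universal ("best possible")
-- functor R with R . g approximating h. The encoding says: to produce
-- an `h b`, it suffices to explain how `a` maps into `g b`.
newtype Ran g h a = Ran { runRan :: forall b. (a -> g b) -> h b }

-- Sanity check: Ran Identity Identity a is isomorphic to a,
-- i.e. extending along the identity functor is a no-op.
toRan :: a -> Ran Identity Identity a
toRan a = Ran (\k -> k a)

fromRan :: Ran Identity Identity a -> a
fromRan (Ran f) = runIdentity (f Identity)
```

The left Kan extension is dual (`data Lan g h a = forall b. Lan (g b -> a) (h b)`), and the “all concepts are Kan extensions” slogan comes from limits, colimits, and adjoints all arising as special cases.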

My students are stuck on a theorem in Cubical Agda and I'm not sure how to help them. It's here: proofassistants.stackexchange.

And I tried to prove it, too, but in the a = [ pos zero ], b = [ neg zero ] case I could not find a proof that also satisfied the necessary definitional equalities (I found a congP application that in theory had the right type, but Cubical Agda yelled at me about some definitional equalities that were not satisfied). Since it's Cubical I have no idea how to find help with this, and so I am curious if type theory mastodon has ideas.

Anyone up to exchange ITP drafts for feedback?

Heather Macbeth closed the workshop by exploring the differences between “paper proofs” written by humans, which often expend effort in manually “pruning” the proof to reduce the computation required, and proofs arising from machine assisted methods, which can focus instead on setting up the proof so that the main computation can be easily outsourced to a machine. The key point was that these tools may cause our aesthetics for what constitutes a readable or beautiful proof to evolve.

Micaela Mayero spoke on how the formalization of core results in analysis, such as the Lebesgue integral, is proceeding in Coq, with a target of formally verifying the convergence of finite element methods (FEM). My impression was that this project had roughly reached the level of a second year graduate student taking a standard analysis sequence. There are numerous subtleties going back to how one even formalizes the real numbers.

Tony Wu talked about how Google’s large language models could generate informal proofs for high school to IMO-level problems, with a separate autoformalizer then converting them into formally verifiable proofs (or vice versa), with reasonable success rates at the high school level.

Jason Rute spoke on various approaches to using AI tools to automate or semi-automate the task of finding formal proofs of mathematical statements. Some attempts were inspired by an analogy with AlphaGo.

Anyone down to give feedback on a paper draft sometime over the weekend? Fresh eyes would be good. Target audience is proof assistant researchers, especially those interested in certified compilation and in embedded program logics.

Question for a student: What venues are interested in work on reverse engineering neural network behavior? And what are the standards for that work at those venues?

For example, say we reverse engineer the behavior of a commonly used neural architecture on an interesting, well-motivated class of functions, and this reveals useful insights about that architecture, but we don't yet actually improve the architecture. Is this publishable anywhere?

Or must we also take it a step further and use our insights to improve performance? We plan to do this at some point, but it's a bit downstream.

You may have heard that Google laid off 12000 employees yesterday. I have created a Google Doc to crowdsource job openings and useful contacts for those 12000 employees. Please add anything you know of: docs.google.com/document/d/1g5

Are folks aware of any good resources that cover the basics of compiling recursive functions, both naively and with the basic optimizations like tail call optimization? Just looking for an overview at a late undergrad or early grad level.
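(Not a resource pointer, but for anyone wondering what the basic optimization buys you: the accumulator-passing transformation moves the recursive call into tail position, so a compiler can reuse the stack frame, i.e. compile the recursion to a loop. A quick sketch in Haskell, with a bang pattern so laziness doesn't rebuild the space problem as a chain of thunks:)

```haskell
{-# LANGUAGE BangPatterns #-}

-- Naive recursion: the addition happens *after* the recursive call
-- returns, so each call needs its own stack frame.
sumNaive :: [Int] -> Int
sumNaive []       = 0
sumNaive (x : xs) = x + sumNaive xs

-- Accumulator-passing version: the recursive call is the last thing
-- `go` does (a tail call), so it can be compiled to a jump that
-- reuses the current frame -- effectively a loop.
sumAcc :: [Int] -> Int
sumAcc = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```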

While I'm here, also, any resources on implementing recursive functions on a Turing Machine?