These are notes from an episode of the Oxide and Friends podcast, hosted by Bryan Cantrill, on rigor in software engineering. The episode featured two of the most active users of AI within Oxide, where they build low-level systems software and operating systems, work that demands a high bar for quality.
1. Bryan's first example of using AI was a bug in a Rust project that he needed to debug and fix. It was a well-specified, deterministic task that he understood, so he had a clear idea of what the end result should look like, and he was impressed by how the AI did. At the same time, he didn't let it go much further than that.
	1. One other thing he noted was that it was torn between good practice, namely commenting the member fields of a struct, and the existing bad pattern in the codebase where the member fields were left uncommented. Goes to show that it amplifies existing patterns.
2. Rain mentioned that LLMs are basically pattern-amplification machines once you put them in a codebase: if there are a few examples of doing something a certain way, they'll amplify that.
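A minimal sketch of the tension Bryan describes (all names hypothetical): the codebase's prevailing pattern leaves struct fields undocumented, while good practice, which the model has also seen, documents each field. An LLM dropped into this codebase will tend to copy whichever pattern dominates.

```rust
/// Prevailing (bad) pattern in the codebase: fields carry no doc comments.
struct LegacyConfig {
    retries: u32,
    timeout_ms: u64,
}

/// Good practice the model has also been trained on: document each field.
struct DocumentedConfig {
    /// Number of times to retry a failed request.
    retries: u32,
    /// Per-request timeout, in milliseconds.
    timeout_ms: u64,
}

fn main() {
    // Both compile identically; the difference is purely one of convention,
    // which is exactly the kind of signal an LLM picks up and amplifies.
    let cfg = DocumentedConfig { retries: 3, timeout_ms: 500 };
    let legacy = LegacyConfig { retries: cfg.retries, timeout_ms: cfg.timeout_ms };
    assert_eq!(legacy.retries, 3);
    println!("ok");
}
```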
3. Rain's uses were a little more advanced.
	1. One was a key-value store implementation that required tediously changing thousands of lines of code to have many different types implement a trait, or something similar. That kind of tedious work, which could be validated with a deterministic set of tests, is where AI really came in handy.
	2. Another was refactoring part of their code based on an RFD that was detailed and well written.
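The first of Rain's examples can be sketched as follows (a hypothetical illustration, not Oxide's actual code): a new trait must be implemented for many existing record types, which is mechanical and repetitive, and a deterministic test validates every implementation, so an AI-generated change can be checked mechanically.

```rust
// Hypothetical trait: every record type in the key-value store must
// learn to encode itself as an ordered byte key.
trait KeyEncode {
    fn encode_key(&self) -> Vec<u8>;
}

struct UserRecord { id: u64 }
struct OrderRecord { id: u64 }

// Repetitive impls -- exactly the sort of change that spans thousands
// of lines in a real codebase and that an LLM can churn through.
impl KeyEncode for UserRecord {
    fn encode_key(&self) -> Vec<u8> { self.id.to_be_bytes().to_vec() }
}
impl KeyEncode for OrderRecord {
    fn encode_key(&self) -> Vec<u8> { self.id.to_be_bytes().to_vec() }
}

// Deterministic validation: big-endian keys must preserve id ordering,
// so any botched impl is caught mechanically.
fn keys_sorted<T: KeyEncode>(items: &[T]) -> bool {
    let keys: Vec<_> = items.iter().map(|i| i.encode_key()).collect();
    keys.windows(2).all(|w| w[0] <= w[1])
}

fn main() {
    let users = [UserRecord { id: 1 }, UserRecord { id: 2 }, UserRecord { id: 300 }];
    assert!(keys_sorted(&users));
    assert_eq!(OrderRecord { id: 7 }.encode_key().len(), 8);
    println!("ok");
}
```

The point is less the trait itself than the workflow: tedious, pattern-heavy edits paired with a deterministic oracle are a sweet spot for AI assistance.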
4. One interesting comment was that, given the new leverage and time AI provides us, we could use it to improve the quality of our software rather than its velocity. That made me think: increasing the velocity of building software with AI, with no other concerns, can lead to a drop in quality, while using that extra time and energy to improve quality will subsequently improve velocity. This isn't just true for LLMs and AI; it's true for humans as well. It's just that with AI, the velocity factor is significantly larger and amplifies good or bad patterns a lot more.
5. It could also be used to tackle a lot of tech debt, whether test coverage or refactors, things we'd otherwise never have time for because we have something to ship tomorrow.
6. Someone also mentioned knowing better which kinds of software we care about quality in and which we don't. For example, dynamic JavaScript code that generates static HTML for a website is something you don't really care about, because there are no runtime consequences, especially if it's for a personal website you don't take too seriously.
7. There's also the point that, given how many AI pull requests open-source platforms are going to see, maintainers will move to reject-by-default instead of accept-by-default. They're also going to have to leverage AI to review code. And people are going to value software with a person's name behind it.