23 comments

  • n_u 5 hours ago
    This is my second attempt learning Rust and I have found that LLMs are a game-changer. They are really good at proposing ways to deal with borrow-checker problems that are very difficult to diagnose as a Rust beginner.

    In particular, an error on one line may force you to change a large part of your code. As a beginner this can be intimidating ("do I really need to change everything that uses this struct to use a borrow instead of ownership? will that cause errors elsewhere?") and I found that induced analysis paralysis in me. Talking to an LLM about my options gave me the confidence to do a big change.
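    A minimal sketch of that ownership-vs-borrow choice (the `Config` struct and function names are made up for illustration):

    ```rust
    // Hypothetical struct, purely illustrative.
    struct Config { name: String }

    // Taking ownership: each caller gives its value up (it is moved),
    // or has to clone.
    fn consume(cfg: Config) -> usize { cfg.name.len() }

    // Borrowing: callers just add an `&` and keep using their value.
    fn inspect(cfg: &Config) -> usize { cfg.name.len() }
    ```

    In practice, switching a parameter from `Config` to `&Config` tends to ripple less than it first appears: most call sites only gain an `&`.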

    • augusteo 4 hours ago
      n_u's point about LLMs as mentors for Rust's borrow checker matches my experience. The error messages are famously helpful, but sometimes you need someone to explain the why.

      I've noticed the same pattern learning other things. Having an on-demand tutor that can see your exact code changes the learning curve. You still have to do the work, but you get unstuck faster.

    • pfdietz 4 hours ago
      I don't see why it shouldn't be even more automated than that, with LLM ideas tested automatically by differential testing of components against the previous implementation.

      EDIT: typo fixed, thx

      • happytoexplain 4 hours ago
        Defining tests that test for the right things requires an understanding of the problem space, just as writing the code yourself in the first place does. It's a catch-22. Using LLMs in that context would be pointless (unless you're writing short-lived one-off garbage on purpose).

        I.e. the parent is speaking in the context of learning, not in the context of producing something that appears to work.

        • pfdietz 4 hours ago
          I'm not sure that's true. Bombarding code with huge numbers of randomly generated tests can be highly effective, especially if the tests are curated by examining coverage (and perhaps mutation kills) in the original code.
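          A sketch of how mechanical that loop could be (the functions and PRNG here are illustrative, not from any real pipeline): generate pseudo-random inputs, run both the trusted original and the candidate rewrite, and flag any divergence.

          ```rust
          // Trusted original: count set bits one at a time.
          fn popcount_reference(mut x: u64) -> u32 {
              let mut n = 0;
              while x != 0 { n += (x & 1) as u32; x >>= 1; }
              n
          }

          // Candidate rewrite to validate (Kernighan's clear-lowest-bit trick).
          fn popcount_candidate(mut x: u64) -> u32 {
              let mut n = 0;
              while x != 0 { x &= x - 1; n += 1; }
              n
          }

          // Differential test: any input where the two disagree exposes a bug
          // in one of them.
          fn differential_test(cases: u32) {
              let mut seed: u64 = 0x9E37_79B9_7F4A_7C15; // xorshift64 state
              for _ in 0..cases {
                  seed ^= seed << 13;
                  seed ^= seed >> 7;
                  seed ^= seed << 17;
                  assert_eq!(popcount_reference(seed), popcount_candidate(seed));
              }
          }
          ```

          Coverage or mutation analysis would then tell you which of the random cases are worth keeping as a curated suite.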
      • n_u 4 hours ago
        I'm assuming you meant to type

        > I don't see why it *shouldn't be even more automated

        In my particular case, I'm learning so having an LLM write the whole thing for me defeats the point. The LLM is a very patient (and sometimes unreliable) mentor.

    • monero-xmr 4 hours ago
      I am old but C is similarly improved by LLM. Build system, boilerplate, syscalls, potential memory leaks. It will be OK when the Linux graybeards die because new people can come up to speed much more quickly
      • lmm 4 hours ago
        The thing is LLM-assisted C is still memory unsafe and almost certainly has undefined behaviour; the LLM might catch some low hanging fruit memory problems but you can never be confident that it's caught them all. So it doesn't really leave you any better off in the ways that matter.
        • monero-xmr 3 hours ago
          I don’t code C much; it’s my passion side language. LLM improves my ability to be productive, and quickly. It’s not a silver bullet, but it is an assist.
  • ajkjk 3 hours ago
    I kinda like viewing this as similar to coordinate-invariance in physics / geometry. A programming language is effectively a function from textual programs to behaviors; this serves the same role as a coordinate system on a space, which is a function from coordinates to points. Naturally many different programs and programming languages can describe the same behavior, especially if you forget about implementation-specific details like memory layout or class structures. LLM code generation is just another piece of the same model: turns out that to some degree you can use English+LLMs as coordinate systems for the textual programs.

    I expect that over time we will adopt a perspective on programming that the code doesn't matter at all; each engineer can bring whatever syntax or language they prefer, and it will be freely translated into isomorphic code in other languages as necessary to run in whatever setting necessary. Probably we will settle on an interchange format which is somehow as close as possible to the "intent" of the code, with all of the language-specific concepts stripped away, and then all a language will be is a toolbox that an engineer carries with them of their favorite ways to express concepts.

    • TomasBM 1 hour ago
      I think this is certainly true, except for the "each engineer [bringing] whatever syntax or language" point.

      At some stage, I expect that we will know the set of "optimal" computer languages for the interface between the programmer and the machine code.

      Natural languages can't really capture the lower-level details of a program, but there's (probably) also no need for all N different ways to write a for loop.

    • lacunary 3 hours ago
      but the interesting thing about coordinate systems is that they do matter, a lot! many problems are much much easier to solve in one coordinate system than another.
      • ajkjk 2 hours ago
        no doubt, but... physics also advanced in leaps and bounds every time someone figured out ways of abstracting away the coordinate system.
    • worthless-trash 1 hour ago
      I would dearly love this to happen. There are some corner cases that won't fit the translation layer well; for example, almost any BEAM language's philosophy of "let it crash".

      I imagine those languages will be left by the wayside.

  • srcreigh 4 hours ago
    I don't really want to learn how to use the borrow checker, LLM help or not, and I don't really want to use a language that doesn't have a reputation for very fast compile/dev workflow, LLM help or not.

    Re; Go, I don't want to use a language that is slower than C, LLM help or not.

    Zig is the real next Javascript, not Rust or Go. It's as fast or faster than C, it compiles very fast, it has fast safe release modes. It has incredible meta programming, easier to use even than Lisp.

    • ycombinatrix 4 hours ago
      Writing code without the borrow checker is the same as writing code with the borrow checker. If it wouldn't pass the borrow checker, you're doing something wrong.
      • srcreigh 3 hours ago
        Idk. Did you see the "Buffer reuse" section of this blog post? [1]

        Kudos to that guy for solving the puzzle, but I really don't want to use a special trick to get the compiler to let me reuse a buffer in a for loop.

        [1]: https://davidlattimore.github.io/posts/2025/09/02/rustforge-...
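        For contrast, the easy case needs no trick at all (a sketch unrelated to the linked post's specific technique): allocate once, clear per iteration. The friction usually starts only when borrows into the buffer have to survive across iterations.

        ```rust
        // Reuse one allocation across iterations by clearing it each time.
        fn shout_all(words: &[&str]) -> Vec<String> {
            let mut buf = String::new();   // allocated once
            let mut out = Vec::new();
            for w in words {
                buf.clear();               // keeps the capacity, drops the contents
                buf.push_str(w);
                buf.make_ascii_uppercase();
                out.push(buf.clone());     // clone only what we keep
            }
            out
        }
        ```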

      • forrestthewoods 3 hours ago
        This is an objectively false statement.

        Rust’s borrow checker is only able to prove at compile time that a subset of correct programs are correct. There are many correct programs that the BC is unable to prove correct and therefore rejects.

        I’m a big fan of Rust and the BC. But let’s not twist reality here.

        • lmm 2 hours ago
          > There are many correct programs that the BC is unable to prove to be correct and therefore rejects them.

          There are programs that "work" but the reason they "work" is complicated enough that the BC is unable to understand it. But such programs tend to be difficult for human readers to understand too, and usually unnecessarily so.
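          A small, well-known instance of the gap (the classic NLL "problem case #3" shape, shown as an illustration rather than anything from the thread):

          ```rust
          use std::collections::HashMap;

          // Sound but rejected: on the insert path the `get` borrow is dead,
          // yet the checker extends it across the whole function because the
          // reference may be returned:
          //
          //   fn get_or_insert(map: &mut HashMap<u32, String>, k: u32) -> &String {
          //       if let Some(v) = map.get(&k) { return v; }
          //       map.insert(k, String::new());
          //       map.get(&k).unwrap()
          //   }

          // The accepted rewrite via the entry API is arguably also the
          // clearer program:
          fn get_or_insert(map: &mut HashMap<u32, String>, k: u32) -> &String {
              map.entry(k).or_default()
          }
          ```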

      • drivebyhooting 4 hours ago
        Come on, that’s not true. How would you write an LRU cache in Rust? It’s not possible in idiomatic Rust. You either need to use unsafe or use integer indices as a poor man’s pointer.
        • mattgreenrocks 3 hours ago
          Indices are fine. Fixating on the “right” shape of the solution is your hang-up here. Different languages want different things. Fighting them never ends well.
        • ycombinatrix 3 hours ago
          What's wrong with integer indices? They have bounds checking. You definitely do not need unsafe to do LRU.
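          For what it's worth, even a deliberately naive version works in 100% safe Rust with no raw pointers at all (a sketch that trades O(1) touches for simplicity; the index-based designs discussed above recover O(1) while staying safe):

          ```rust
          use std::collections::{HashMap, VecDeque};

          struct LruCache<K: Eq + std::hash::Hash + Clone, V> {
              capacity: usize,
              map: HashMap<K, V>,
              order: VecDeque<K>, // front = least recently used
          }

          impl<K: Eq + std::hash::Hash + Clone, V> LruCache<K, V> {
              fn new(capacity: usize) -> Self {
                  Self { capacity, map: HashMap::new(), order: VecDeque::new() }
              }

              // Move `key` to the most-recently-used end; O(capacity).
              fn touch(&mut self, key: &K) {
                  if let Some(pos) = self.order.iter().position(|k| k == key) {
                      self.order.remove(pos);
                  }
                  self.order.push_back(key.clone());
              }

              fn get(&mut self, key: &K) -> Option<&V> {
                  if self.map.contains_key(key) {
                      self.touch(key);
                  }
                  self.map.get(key)
              }

              fn put(&mut self, key: K, value: V) {
                  if !self.map.contains_key(&key) && self.map.len() == self.capacity {
                      if let Some(lru) = self.order.pop_front() {
                          self.map.remove(&lru); // evict the least recently used
                      }
                  }
                  self.touch(&key);
                  self.map.insert(key, value);
              }
          }
          ```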
  • felipeccastro 3 hours ago
    It might be the opposite. Python apps still get written despite the performance hit, because understandability matters more than raw performance in many cases. Now that we’re all code reviewers, that quality should matter more, not less. Programmer time is still more expensive than machine time in many cases.
    • jact 3 hours ago
      Are Python apps really so easy to understand? I seriously disagree with this idea given how much magic goes behind nearly every line of Python. Especially if you veer off the happy path.

      I certainly am no fan of C but from a certain point of view it’s much easier to understand what’s going on in C.

      • fwip 2 hours ago
        Well-written Python apps are very easy to understand, especially if they use well-designed libraries.

        The 'magic' in Python means that skilled developers can write libraries that work at the appropriate level of abstraction, so they are a joy to use.

        Conversely, it also means that a junior dev, or an LLM pretending to be a junior dev, can write insane things that are nearly impossible to use correctly.

      • awesome_dude 2 hours ago
        One of the (many) reasons that I moved away from Python was the whole "we can do it in 3 lines"

        Oh cool someone has imported a library that does a shedload of really complicated magic that nobody in the shop understands - that's going to go well.

        We (the Software Engineering community as a whole) are also seeing something similar with AI-generated code: screeds of code going into codebases that nobody fully understands (give a reviewer a 5-line PR and they will find 14 things to change; give them a 500-line PR and LGTM is all you will see).

    • Larrikin 3 hours ago
      I've cooled significantly on Python now that there are a number of strongly typed languages out there that have also gotten rid of the boilerplate of languages Python used to compete with.

      Readability gets destroyed when a function can accept 3 different types, all named the same thing, with magic strings acting as enums, and you just have to hope all the cases are well documented.

      • awesome_dude 2 hours ago
        Type systems document data movement throughout applications :-)

        And the other problem with functions accepting dynamic types is that your function might only in reality handle one type, it still has to defensively handle when someone passes it things that will cause an error.

        All the dynamic typing really did is move the cognitive load from the caller to the called.

    • viraptor 3 hours ago
      I'd much prefer to review something written in Rust or Go, even if I'd much rather write it in Python if I had to do it manually.

      The better structure and clear typing make the review much easier.

      • awesome_dude 2 hours ago
        My biggest reason for liking Go, over Python can be summed up in one word: Discipline.

        Python was supposed to be embracing the idea of "there's only one way to do it", which appeals after Perl's "There's many ways to do it", but the reality is, there's 100 ways to do it, and they're all shocking.

  • PeterWhittaker 3 hours ago
    While I still struggle to think in Rust after years of thinking in C, it is NEVER the borrow checker or lifetimes that trip me, it's the level of abstraction, in that C forced me low level, building my own abstractions, while Rust allows me to think in abstractions first and muse over how to implement those idiomatically.

    What did it for me was thinking through how mutable==exclusive, non-mutable==shared, and getting my head around Send and Sync (not quite there yet).

    AI helps me with boiler plate, but not with understanding, and if I don't understand I cannot validate what AI produces.

    Rust is worth the effort.
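    The shared-vs-exclusive model fits in a few lines (a toy sketch):

    ```rust
    // "Shared XOR mutable": any number of read-only borrows may coexist,
    // but a &mut borrow must be exclusive.
    fn demo() -> i32 {
        let mut v = vec![1, 2, 3];
        let (a, b) = (&v, &v);  // two shared borrows at once: fine
        let sum = a[0] + b[2];  // reads through both
        let m = &mut v;         // exclusive borrow starts after a and b end
        m.push(4);
        sum + v[3]              // shared read again once m is done
    }
    ```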

    • justarandomname 2 hours ago
      THIS. I can barely remember a time when lifetimes or the borrow checker caused me undue suffering, but I can recall countless times that abstractions (often in the async world) did, and sometimes still do.
    • nvader 3 hours ago
      Thank you for this comment.

      I'm starting my rusty journey and I'm only a few months in. With the rise of autogenerated code, it's paradoxically much harder to go slow and understand the fundamentals.

      Your comment is reassuring to read.

      • s1mplicissimus 3 hours ago
        Just wanted to add another bit of reassurance. At some point during my career people started "stack overflow coding". But ultimately someone has to fix the difficult issues and when you have the skills, you are coming out on top where others can just shrug and say "well there's no solution on stack overflow".
  • felixfbecker 2 hours ago
    The author makes the argument that in the age of LLMs, more type-safe languages will be more successful than less type-safe ones. But how does that support the claim that Go is more suitable than JavaScript? TypeScript is more type safe than Go: Go doesn’t validate nil pointers, it doesn’t enforce fields to be set when initializing structs, and it has no support for union types. All of those things can cause runtime errors that are caught at compile time in TypeScript.
    • giancarlostoro 2 hours ago
      Not sure, but I gave it a shot weeks ago and finally started building something using Rust for a project I've wanted to build for years now. In maybe 12 hours' worth of effort total, I've probably done several months' worth of engineering effort (when you consider I only touch this project in my spare time). Every time I pick up Rust I fight it for hours because I don't do any Rust in my day job, but the LLM helps me pick up that Rust-nuance slack wherever I fall short, and I can focus on the key architectural details I have been obsessing over for years now.
    • chb 2 hours ago
      A gross mischaracterization of the author's point (the word "type" doesn't even appear in the article). The author focuses on the cost of interpreted languages, which he describes as "memory hungry" and computationally expensive.
  • nxobject 3 hours ago
    I hope this law applies to all of the Electron applications out there...
    • ravedave5 1 hour ago
      It might. An electron app is always a compromise, everyone knows several native apps would be better, but that's 3x the effort and maintenance. If you have LLMs porting to other systems they may be kinda janky, but is that worse than an electron app?
    • furyofantares 1 hour ago
      Oh don't worry, they'll recreate the bloat with millions of lines of bespoke slop per app.
  • yert3 2 hours ago
    Why not assembly? Not enough compile time checks, only runtime errors.

    Why not go? Slower than rust. Type system is lacking.

    Why not c? Not memory safe.

    Why not c++? Also not memory safe.

    Why not zig? Not memory safe.

    Why rust? Fast, memory safe, type safe. Compiler pushes back on LLM hallucinations and errors.

    Over time, security-critical and performance-critical projects will autonomously be rewritten in rust.

    • gritspants 2 hours ago
      So, I am then interested in non-trivial software written with LLM assistance and in Rust. Off the top of my head I'm aware of Turso (supposed SQLite successor); marketing approach aside, I find it interesting to see how that works out. Any others?
      • r-johnv 1 hour ago
        uv is a good example of this.
    • globalnode 2 hours ago
      assembly is fun, python is fun, both get the job done.
    • jimbob45 2 hours ago
      Rust isn’t versatile enough to be used in high-level contexts. You simply have to jump through too many hoops. It’s great for low-level apps but if I just want a simple UI and CRUD functionality, I’m reaching for C#/Kotlin.

      That’s not a bad thing though. It’s okay to aim for your thing and be good at just that. No need to try to please everyone.

      • noosphr 10 minutes ago
        I don't know why you are being downvoted. Rust in the Linux kernel is basically all unsafe which is what I expected to happen the first time I heard about it.
      • esafak 2 hours ago
        And Rust compilation is slow.
  • leonidasv 3 hours ago
    I use Claude Code daily to work on a large Python codebase and I'm yet to see it hallucinate a variable or method (I always ask it to write and run unit tests, so that may be helping). Anyway, I don't think that's a problem at all; most problems I face with AI-generated code are not solved by a borrow checker or a compiler: bad architecture, lack of forward thinking, hallucinations in the contract of external API calls, etc.
  • fuddle 3 hours ago
    My take: "Any application that was written in Javascript, eventually will be written in a system language"
  • brown 2 hours ago
    If LLMs live up to their potential, then they should be able to rewrite language runtimes to eventually be as fast or faster than systems languages. "Sufficiently intelligent compiler" and whatnot
    • jpc0 1 hour ago
      Every time I see this take, I ask myself whether the commenter understands that, by definition, a runtime has to do more work than a strictly compiled version of the same code, and therefore can never be quicker.

      To steelman your argument, the only case where that's not true is when the compiled output is not strictly the fastest solution possible. In that case, though, I can see the very same LLM just generating significantly better code in the compiled language.

      Given infinite memory, all problems can be boiled down to O(1); in reality that won't happen, because resources are finite, and if there is such a hot path in the code you can find it and perform the relevant operations in a compiled language. You could also employ an LLM given access to the runtime metrics.

      The thinking that the runtime can solve the problem (cache the correct solution to a dynamic problem) is directly at odds with modern infrastructure that treats services as ephemeral and just kills them. I imagine every time you git push and your CD pipeline runs, you have a large-factor performance degradation for the first X thousand queries.
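      The caching point can be made concrete with a toy memoizer (illustrative only): a warm cache is a pure memory-for-time trade, and it is exactly what an ephemeral service loses every time it is killed.

      ```rust
      use std::collections::HashMap;

      struct Memo { cache: HashMap<u64, u64> }

      impl Memo {
          fn new() -> Self { Memo { cache: HashMap::new() } }

          // Naive recursion is exponential; with the cache each distinct n is
          // computed once, and every repeat becomes an O(1) lookup.
          fn fib(&mut self, n: u64) -> u64 {
              if n < 2 { return n; }
              if let Some(&v) = self.cache.get(&n) { return v; }
              let v = self.fib(n - 1) + self.fib(n - 2);
              self.cache.insert(n, v);
              v
          }
      }
      ```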

    • wmf 2 hours ago
      No, because some language features like monkey patching have inherent runtime cost that cannot be eliminated. And if we reach superintelligence you can just let the AI invent its own language.
  • noosphr 3 hours ago
    Why stop at a systems language? Why not assembly? Hell why not raw machine code?

    You have to rewrite it for every new processor? Big deal. Llm magic means that cost isn't an issue and a rewrite is just changing a single variable in the docs.

    • rankdiff 2 hours ago
      why even write the code?? just tell AI to give output directly!
    • zephen 2 hours ago
      You might wonder that if you just read the headline.

      But, if you read the article, the reasons given for rust in particular are reasonable, and not matched by assembly or machine code.

      • noosphr 2 hours ago
        If you read the article you'd know they were talking about Go/Rust. We can sprinkle bugs with the borrow checker to burn them in Rust; fair enough, everyone knows the only bugs that happen in systems languages are memory errors. But what's the holy water we can use to banish bugs in Go?
  • agentultra 1 hour ago
    If you don’t understand how to do programming at this level you’re not going to get far with an LLM. They tend to fall down on anything larger than a greenfield todo app.

    You can easily write slow code in C and Rust. Get enough branch mispredictions and cache misses and it won’t matter that your program is written in a low level language. Guiding profilers to optimize workloads takes more context than source code alone has. You’ll still need to know how memory is architected and how many cycles instructions take on the platform you’re targeting. And you’ll have to guide the LLM to it. It might be easier and faster to do it yourself.

    An LLM can generate the kind of code that has no obvious errors in it. It’s trained on the sloppy code that unreasonable deadlines and hubris produce. Humans are notoriously bad at detecting undefined behaviour, safety, and temporal errors in single programs let alone whole systems. How is an untrained developer who hasn’t been contending with systems programming for the last twenty years supposed to review and verify this code?

    You’ll need to get fast at verifying large amounts of code quickly or else pray that the LLM isn’t generating the kinds of exploitable memory bugs that lead to funny machines and RCEs. We already have enough of this code as it is and most teams are trying to be conservative with their tech debt.

    I have no doubt that some amount of code will be generated by LLMs in these sorts of languages and systems. Even after the bubble pops and the feudal lords lose their ability to extract rents. But I think most people using them will be more tactical and careful in their approach. At least I hope so.

    There are still many good reasons to use a high level language. And if folks want to break into systems programming that’s great! But you have to learn systems programming in order to use an LLM effectively, in my experience. There’s no shortcut.
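    The "slow code in Rust" point is easy to demonstrate (a sketch; `N` and the function names are made up): identical results and asymptotics, very different memory-access patterns.

    ```rust
    const N: usize = 512;

    // Walks a row-major matrix in storage order: sequential, cache-friendly.
    fn sum_row_major(m: &[Vec<u64>]) -> u64 {
        let mut s = 0;
        for row in m {
            for &x in row { s += x; }
        }
        s
    }

    // Walks the same matrix column-first: strides N * 8 bytes between
    // accesses and misses cache far more often, despite identical O(N^2) work.
    fn sum_col_major(m: &[Vec<u64>]) -> u64 {
        let mut s = 0;
        for c in 0..N {
            for r in 0..N { s += m[r][c]; }
        }
        s
    }
    ```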

  • captain_coffee 4 hours ago
    > in 2026 devs write 90% of the code using natural language, through an LLM.

    That is literally not true - is the author speaking about what he personally sees at his specific workplace(s)?

    If 90% of the code at any given company is LLM-generated, that is either a doomed company or a company that doesn't write any relevant code to begin with.

    I literally cannot imagine a serious company in which that is a viable scenario.

    • PaulHoule 4 hours ago
      I can believe code is LLM-generated after being cut up into small slices that are carefully reviewed.

      But to have 20 copies of Claude Code running simultaneously and the code works so well you don't need testers == high on your own supply.

      • justarandomname 2 hours ago
        Sadly, I'm seeing a LOT of this kind of usage. So much so that I know a couple of people who brag about how many they have running at the same time, pretty much all the time.
      • s1mplicissimus 3 hours ago
        > high on your own supply.

        reminds me of a bar owner who died of liver failure. people said he himself was his best customer

    • ravenstine 3 hours ago
      That depends on how you define "doomed". Most screwed up companies don't go belly up overnight. They get sold as fixer-uppers and passed between bigger firms and given different names until, finally, it is sold for parts. The way this works is that all parties behave as if the company is the opposite of doomed. It's in a sense correct. The situation hardly seems doomed if everyone has enough time to make their money and split before the company's final death twitches cannot be denied, in which case the company accomplished its mission. That of course doesn't mean everything from its codebase to its leadership didn't lack excellence the whole time.
    • bartread 3 hours ago
      Yeah, I would say it's pretty variable, and it depends on what you mean by the word write.

      I've recently joined a startup whose stack is Ruby on Rails + PostgreSQL. Whilst I've used PostgreSQL, and am extremely familiar with relational databases (especially SQL Server), I've never been a Rubyist - never written a line of Ruby until very recently in fact - and certainly don't know Rails, although the MVC architecture and the way projects are structured feels very comfortable.

      We have what I'll describe as a prototype that I am in the process of reworking into a production app by fixing bugs, and making some pretty substantial functional improvements.

      I would say, out of the gate, 90%+ of the code I'm merging is initially written by an LLM for which I'm writing prompts... because I don't know Ruby or Rails (although I'm picking them up fast), and rather than scratch my head and spend a lot of time going down a Google and Stackoverflow black hole, it's just easier to tell the LLM what I want. But, of course, I tell it what I want like the software engineer I am, so I keep it on a short leash where everything is quite tightly specified, including what I'm looking for in terms of structure and architectural concerns.

      Then the code is fettled by me to a greater or lesser extent. Then I push and PR, and let Copilot review the code. Any good suggestions it makes I usually allow it to either commit directly or raise a PR for. I will often ask it to write automated tests for me. Once it's PRed everything, I then both review and test its code and, if it's good, merge into my PR, before running through our pipeline and merging everything.

      Is this quicker?

      Hmm.

      It might not be quicker than an experienced Rails developer would make progress, but it's certainly a lot quicker than I - a very inexperienced Rails developer - would make progress unaided, and that's quite an important value-add in itself.

      But yeah, if you look at it from a certain perspective, an LLM writes 90% of my code, but the reality is rather more nuanced, and so it's probably more like 50 - 70% that remains that way after I've got my grubby mitts on it.

      • WD-42 2 hours ago
        This is exactly how I use AI as well in codebases and languages I’m not familiar with.

        I’m a bit concerned we might be losing something without the google and stack overflow rabbit holes, and that’s the context surrounding the answer. Without digging through docs you don’t see what else is there. Without the comments on the SO answer you might miss some caveats.

        So while I’m faster than I would have been, I can’t help but wonder if I’m actually stunting my learning curve and might end up slower in the long term.

      • captain_coffee 2 hours ago
        So let me get this straight - you vibe code, make what you consider as necessary changes to the LLM-generated code, create PRs that get to be reviewed by another AI tool (Copilot), potentially make changes based on Copilot's suggestions and at the end, when you are satisfied with that particular PR you merge it yourself without having any other human reviewing it and then continue to the next PR.

        Did I get that right or did I miss anything?

      • AstroBen 3 hours ago
        This seems like a really short-sighted view. 6 months from now you'll be much more inexperienced than if you just went through the initial struggle (with an LLM's help!)
        • WD-42 2 hours ago
          What does the struggle with an LLMs help look like?
          • AstroBen 2 hours ago
            "Explain this to me" until you're able to complete the task, instead of "do this for me"
            • WD-42 45 minutes ago
              If you break down the problem to small enough chunks, these basically become the same thing. The hardest part of new languages is the syntax and new APIs, so you end up getting the code anyway.
    • 20k 3 hours ago
      It's insane to me seeing this kind of thing. I write 100% of my code by hand. Of the developers I know, they write >95% of code by hand.

      >We are entering an era where the Brain of the application (the orchestration of models, the decision-making) might remain in Python due to its rich AI ecosystem, but the Muscle, the API servers, the data ingestion pipelines, the sidecars, will inevitably move to Go and Rust. The friction of adopting these languages has collapsed, while the cost of not adopting them (in AWS bills and carbon footprint) is rising.

      This is the most silicon valley brain thing I've seen for a while

      We're entering an era where I continue to write applications in C++ like I've always done, because it's the right choice for the job, except I might evaluate AI as an autocomplete assistant at some point. Code quality and my understanding of that code remain high, which lets me deliver at a much faster pace than someone spamming LLM agent orchestration, and debuggability remains excellent.

      90% of code written by devs is not written by AI. If this is true for you, try a job where you produce something of value instead of some random silicon valley startup

    • ekidd 3 hours ago
    If there's a human in the loop, actually reading the plans and generated code, then it's possible to have 90% of my code generated by an LLM and maintain reasonable quality.
    • happytoexplain 4 hours ago
      It seems like it may be true, but pointlessly true. I.e. yes, 90% of code is probably written by LLMs now - but that high number is because there is such a gigantic volume of garbage being generated.
      • monero-xmr 3 hours ago
        The problem is not coding (for me). The problem is thinking for a long time about what to code; the execution is merely a side effect of my thinking. The LLM has helped me execute faster. It's not a silver bullet, and I do review the outputs carefully. But I won’t pretend it hasn’t made me much more productive.
    • neya 3 hours ago
      In my experience, most of the NodeJS shops do this. Because, LLMs on the surface seemingly are good at giving you a quick solution for JS code. Whether it's a real solution or patchwork is up for debate, but, for most mid-level to junior devs, it's good enough to get the job done. Now, multiply this workflow 10x for 10 employees. That's how you end up with a complete rewrite and hiring a senior consultant.
    • liveoneggs 2 hours ago
      they just have to keep repeating it
  • spicyusername 4 hours ago
    I wonder when we'll start to see languages designed exclusively to be easy to write by agent programming.
    • nemo1618 4 hours ago
      Here's one attempt: https://x.com/sigilante/status/2013743578950873105

      My take: Any gains from an "LLM-oriented language" will be swamped by the massive training set advantage held by existing mainstream languages. In order to compete, you would need to very rapidly build up a massive corpus of code examples in your new language, and the only way to do that is with... LLMs. Maybe it's feasible, but I suspect that it simply won't be worth the effort; existing languages are already good enough for LLMs to recursively self-improve.

    • gwern 3 hours ago
      Not going far enough - why would applications be written in either 'systems languages' or 'agent languages' if you have superintelligence too cheap to meter and you will amortize the costs over more than, say, a few days? Just write in raw assembler from a domain-specific design hyperoptimized for solely the task, the way Donald Knuth on steroids would.
    • Grosvenor 4 hours ago
      Lisp?

      I'm always surprised when agents aren't working directly with the AST.

    • Rustwerks 4 hours ago
      There has been at least one posted here in Hacker News, Mojo. Google shows some other similar attempts.

      The real issue with doing this is that there is no body of code available to train your models on. As a result the first few look like opinionated Python.

    • krackers 4 hours ago
      It seems the language would need to be strongly typed, have good error reporting and testing infrastructure, have a good standard library and high-level abstractions, and be "close enough" to existing languages. Go would seem to already fit that bill, any bespoke language you come up with is going to have less exposure in the training set than Go. Maybe Rust as a second, but Go's memory management might be easier for the LLM (and humans) than Rust's.
      • wenc 3 hours ago
        Rust, Go and TypeScript are good bets.

        Python too -- hear me out. With spec-driven development to anchor things, coupled with property-based tests (PBT) using Hypothesis, it's great for prototyping problems.

        You wouldn't write mission critical stuff with it, but it has two advantages over so-called "better designed languages": massive ecosystem and massive training.

        If your problem involves manipulating dataframes (polars, pandas), plotting (seaborn), and machine learning, Python just can't be beat. You can try using an LLM to generate Rust code for this -- go ahead and try it -- and you'll see how bad it can be.

        Better ecosystems and better training can beat better languages in many problem domains.
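        The property-based style mentioned above can be sketched with the stdlib alone. This is a hand-rolled, minimal illustration (function names are hypothetical, not from the thread) of what a library like Hypothesis automates: generate many random inputs and assert invariants rather than single examples.

```python
import random

def dedupe(xs):
    """Remove duplicates from a list while preserving first-seen order."""
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_properties(trials=1000, seed=0):
    """Hand-rolled property-based test: random inputs, invariant checks."""
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(0, 9) for _ in range(rng.randint(0, 20))]
        ys = dedupe(xs)
        # Property 1: no duplicates survive.
        assert len(ys) == len(set(ys))
        # Property 2: same elements, in first-occurrence order.
        assert ys == sorted(set(xs), key=xs.index)
    return trials

check_properties()
```

        Hypothesis replaces the hand-written generator loop with strategies (e.g. decorating the test with `@given(st.lists(st.integers()))`) and adds automatic shrinking of failing inputs, which is what makes it useful as an anchor for LLM-generated code.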

        • Jtsummers 3 hours ago
          > You wouldn't write mission critical stuff with it

          People do, and they also write mission critical stuff in Lua, TCL, Perl, and plenty of other languages. What they generally won't do is write performance critical stuff in those languages. But there is definitely critical communication infrastructure out there running on interpreted languages like these.

    • calvinmorrison 4 hours ago
      or rather, maybe we stop adding new features that are mostly there for developers, and find that some older languages are quite good and capable, maybe even easier since there's less to reason about
  • invalidname 4 hours ago
    Predicting the future is futile, but I would guess the exact opposite. LLMs make it remarkably easy to generate a lot of code, so they can easily generate a lot of Rust code that looks good. It probably wouldn't be good, and when something goes wrong it would be unreadable to us. We would end up in LLM debugging hell.

    The solution is to use a higher-level, safer, strict language (e.g. Java) that would be easy for us to debug and deeply familiar to all LLMs. Yes, we will generate more code, but if you spend LLM time nitpicking performance rather than focusing on productivity, you end up with the same problem you have with humans. LLMs have capacity limits, and so do the engineers who operate them; neither is going away.

  • cratermoon 2 hours ago
    I was expecting an insightful analysis of systems languages versus dynamically typed interpreted languages, but instead I got more sloperator hype.
  • ElectronCharge 3 hours ago
    I'm surprised the author of this article thinks Go is a "system language".

    Go uses GC, and therefore can't be used for hard real time applications. That's disqualifying as I understand it.

    C, C++, Rust, Ada, and Mojo are true system languages IMO. It is true that GC-enabled languages can be used as long as you can pre-allocate your data structures and disable GC at runtime. However, many of them rely on GC in their standard libraries.

    • Jtsummers 2 hours ago
      The Go creators declared it a systems language and it's stuck around for some reason.

      Their definition was not the one most people would have used (leading to C, C++, Rust, Ada, etc. as you listed) but systems as in server systems, distributed services, etc. That is, it's a networked systems language not a low-level systems language.

    • jandrewrogers 2 hours ago
      I think the broad consensus (and I agree with it) is that a systems language cannot have a mandatory GC. The issue with GCs isn’t just latency-optimized applications like hard real-time. GCs also reduce performance in throughput-optimized applications that are latency insensitive, albeit for different reasons.

      Anything that calls itself a “systems language” should support performance engineering to the limits of the compiler and hardware. The issue with a GC is that it renders entire classes of optimization impossible even in theory.

    • code_martial 2 hours ago
      You can preallocate your data structures and control memory layout in Go.

      Also, despite GC there’s a sizeable amount of systems programming already done in Go and proven in production.

      Given how much importance is deservedly being placed on memory safety, Go should be a top candidate as a memory-safe language that is also easier to be productive in.

  • est 3 hours ago
    oh no, not this again.

    There's a joke, I forget its name, that goes something like:

    - high performance language but hard-coded

    - xml/yaml configs

    - dynamic configs and codegen

    - metaprogramming or DSL, or just lua or python

    - let's static type to speed things up and use a compiler

    - high performance language but hard-coded

  • imperio59 4 hours ago
    I've been thinking about this and using Rust for my next backend. I think we still lack a true all-in-one, batteries-included web framework for Rust, like Django or RoR.

    Maybe someone should use AI to write the code for that...

  • anon291 2 hours ago
    I mean the simple answer here is to just develop proper frameworks for Futamura projections. There's an exact one-to-one algorithmic correspondence between an interpreted program and the compiled version of it. GraalVM and PyPy are good options here.

    Using an LLM is overkill, especially when correctness can never be guaranteed by systems that must sample from a probability distribution.
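    The first Futamura projection referenced above can be made concrete with a toy sketch (all names hypothetical): partially evaluating an interpreter with respect to a fixed program yields a residual "compiled" function. This is a hand-rolled miniature of what GraalVM and PyPy do with far more sophistication.

```python
def interpret(program, x):
    """Toy interpreter: program is a list of ('add', n) / ('mul', n) ops."""
    for op, n in program:
        if op == 'add':
            x += n
        elif op == 'mul':
            x *= n
    return x

def specialize(program):
    """First Futamura projection, by hand: fix the program argument of the
    interpreter and emit the residual computation as Python source."""
    lines = ["def compiled(x):"]
    for op, n in program:
        sym = '+' if op == 'add' else '*'
        lines.append(f"    x = x {sym} {n}")
    lines.append("    return x")
    ns = {}
    exec("\n".join(lines), ns)
    return ns["compiled"]

prog = [('add', 3), ('mul', 2)]
compiled = specialize(prog)
# The residual function agrees with the interpreter: (5 + 3) * 2 == 16
assert compiled(5) == interpret(prog, 5) == 16
```

    The correspondence is exact by construction: every interpreter step for the fixed program becomes one line of the residual function, with the interpreter's dispatch overhead evaluated away.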