burntsushi a day ago

I mentioned this on reddit, but AFAIK, the uutils project doesn't yet support locales: https://github.com/uutils/coreutils/issues/3997

I'm not any more a fan of POSIX locales than the next person[1], but AIUI, that seems a likely requirement for uutils to be used in a distro like Ubuntu.

I'd be curious how they plan to address this. At least from my perspective, unless uutils has already been designed to account for locales from the start (I don't know if it has), it seems likely that a significant investment of time will be required to add support for it.

[1]: https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...

larsnystrom a day ago

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change.

Is performance a frequent rationale for rewriting C applications in Rust?

  • dralley a day ago

    No - unless the rationale was taking better advantage of multithreading, which Rust does make easier.

    But that's at least partially a maintainability argument, not just a performance one. Rust can make achieving higher levels of performance easier and less risky than doing so in C or C++ would be, but you do still have to work for it a little; it's not going to be magically faster.

    • eej71 a day ago

      I think that's only generally true for the period of time where the new tool has yet to achieve full functional parity with what it replaced. As that functionality gap is closed, the performance increase usually declines too.

      • burntsushi a day ago

        One counter example:

            $ curl -LO 'https://burntsushi.net/stuff/subtitles2016-sample.en.gz'
            $ gzip -d subtitles2016-sample.en.gz
            $ time rg -c 'Sherlock Holmes' subtitles2016-sample.en
            629
        
            real    0.099
            user    0.063
            sys     0.035
            maxmem  923 MB
            faults  0
            $ time LC_ALL=C grep -c 'Sherlock Holmes' subtitles2016-sample.en
            629
        
            real    0.368
            user    0.285
            sys     0.082
            maxmem  25 MB
            faults  0
            $ time rg -c '^\w{42}$' subtitles2016-sample.en
            1
        
            real    1.195
            user    1.162
            sys     0.031
            maxmem  928 MB
            faults  0
            $ time LC_ALL=en_US.UTF-8 grep -c -E '^\w{42}$' subtitles2016-sample.en
            1
        
            real    21.261
            user    21.151
            sys     0.088
            maxmem  25 MB
            faults  0
        
        (Yes, ripgrep is matching a Unicode-aware `\w` above, which is why I turned on GNU grep's locale feature. To make it apples-to-apples.)

        Now to be fair, you did say "usually." But actually, sometimes, even when functional parity[1] has been achieved (and then some), perf can still be wildly improved.

        [1]: ripgrep is not compatible with GNU grep, but there shouldn't be much you can do with grep that you can't do with ripgrep. The main thing would be stuff related to the marriage of locales and regexes, e.g., ripgrep can't do `echo 'pokémon' | LC_ALL=en_US.UTF-8 grep 'pok[[=e=]]mon'`. Conversely, there's oodles that ripgrep can do that GNU grep can't. For example, transparently searching UTF-16. POSIX forbids such wildly popular use cases (e.g., on Windows).

        • gibibit 19 hours ago

          I've been using ripgrep for years now and I'm still blown away by its performance.

          In the blink of an eye to search a couple of gigabytes.

          I just checked and did a full search across 100 gigabytes of files in only 21 seconds.

          The software is fantastic, and moreover it goes to show what our modern hardware is capable of. In these days of unbelievable software waste and bloat, stuff like ripgrep, dua, and fd reminds me there is hope for a better world.

        • hu3 21 hours ago

          I don't think rg speed can be attributed to Rust.

          ripgrep's gains come from a ton of brilliant optimizations and strategies by its author. They wrote articles about such tricks.

          • burntsushi 21 hours ago

            > I don't think rg speed can be attributed to Rust.

            I didn't say it was, and this isn't even remotely close to my point. The comment I was replying to wasn't even talking about Rust versus C. Just new tools versus old tools.

            > ripgrep's gains come from a ton of brilliant optimizations and strategies by its author. They wrote articles about such tricks.

            I know. I'm its author!

            • hu3 20 hours ago

              oh hi burntsushi! Good thing I applauded your skills to... yourself, without realizing. haha

              I was replying to this:

              > Is performance a frequent rationale for rewriting C applications in Rust?

              But I now realize your message was much more specific so I stand corrected with regards to context. Your point was indeed different.

      • nine_k 21 hours ago

        Not necessarily so. Sometimes a better architecture and addressing long-standing technical debt give large permanent gains. Compare yarn vs npm, or even quicksort vs bubble sort.

    • baseballdork a day ago

      It's also easier to do things in parallel in rust that might otherwise not have been considered in a C version.

      • pitaj a day ago

        And using more efficient algorithms or data structures that are painful and/or difficult to use in C.

    • the__alchemist a day ago

      Is that because of the Rayon lib?

      • sophacles a day ago

        It can be, but just plain multi-threading in rust is a lot easier to work with (correctly) than it is in C - just using stdlib and builtin features of rust.

  • vacuity a day ago

    No. It's normally memory safety and/or ease of tooling/coding/whatever.

  • josefx a day ago

    It might be a rationale for a rewrite and rust is just the language the people doing the rewrite wanted to use.

  • Octoth0rpe a day ago

    I think it's often a rationale for choosing rust over other languages once you've decided to rewrite.

  • klysm a day ago

    No I don’t believe so

glitchc a day ago

Are there any security vulnerabilities in ls, chgrp, chown, etc. that require this change? Or is this just more Rust evangelism?

  • thyristan a day ago

    I'd say misplaced evangelism.

    ls, chgrp, chown, etc. are applications that the current user works with interactively, or maybe uses in a shell script. They are not meant to be exposed to malicious inputs, so any bugs are usually not security problems, because you can't really do anything that the user couldn't do anyway. There is no security boundary, so there's nothing to secure.

    However, if you e.g. write your webserver as a shell script or with shell commands, you are doing it wrong, and you deserve the evil things an attacker does to you.

    Also, very often the insecurity is codified in the relevant standards such as POSIX, so you cannot really fix anything without breaking everything. E.g. newlines and various kinds of whitespace characters in filenames are a huge pain to handle safely in shell commands, and especially in shell scripts, if it's possible at all. Shells split things at any whitespace (unless quoted properly, which is hard to impossible to do), and shell commands usually separate their stdin/stdout records at newlines (unless you know about -print0 and everything in your pipe does as well...). None of this is fixed by Rust; everything could be fixed by throwing away the existing standards first, but then the language doesn't matter.
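
    For instance, the only reliably safe way to iterate over arbitrary filenames in shell is to use NUL separators end to end. A sketch:

        # Safe: NUL-separated, survives spaces and newlines in names
        $ find . -type f -print0 | xargs -0 chmod go-rwx
        # Also safe: read with a NUL delimiter
        $ find . -type f -print0 | while IFS= read -r -d '' f; do chmod go-rwx "$f"; done
        # Broken: word splitting tears apart any name containing whitespace
        $ for f in $(find . -type f); do chmod go-rwx "$f"; done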

    • yjftsjthsd-h a day ago

      > ls, chgrp, chown, etc. are applications that the current user works with interactively, or maybe uses in a shell script. They are not meant to be exposed to malicious inputs, so any bugs are usually not security problems, because you can't really do anything that the user couldn't do anyway. There is no security boundary, so there's nothing to secure.

      I dunno, I'd expect that you could

        ls -l untrusted.exe
        mv untrusted.exe untrusted.exe.donotrun
        chmod 600 untrusted.exe.donotrun
        sha256sum untrusted.exe.donotrun
      
      without anything bad happening. That said, I'd like some evidence that GNU's coreutils aren't already safe; I suspect that their limited scope means there's minimal exposure even when operating on untrusted inputs (note that in my example there, only hashing the file actually involves its contents).

      • thyristan 21 hours ago

        For the ls one, the name might contain characters that do bad things. A common prank in uni was to have a filename with an ASCII BEL character, which would beep when you ran ls in that directory. Terminal escape sequences can do a lot more. However, I think modern ls filters that out, except with some POSIXLY_CORRECT environment variables set.

        The mv and chmod might operate on a whole different file due to a symlink or hardlink being in place, and due to the inherent race condition between all those operations that just use the file name. Also, mv might overwrite an existing file.

        sha256sum will escape the filename in its output, thereby making automated comparisons fail if you are unaware of that feature. On the other hand, when turning off escaping with -z, you need to know how to handle the \0 separator.
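
        A quick illustration of that escaping, from a recent GNU coreutils (the hash shown is just that of an empty file):

            $ touch $'evil\nname'
            $ sha256sum evil*
            \e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  evil\nname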

      • nine_k 20 hours ago

        More interesting adversarial inputs would be very long filenames, filenames with embedded \x00, \x0A, \x0D, a BOM, etc.

        Not that GNU coreutils must be vulnerable to this. But I suppose they are explicitly hardened against such inputs, despite the C language not providing any guard rails.

  • ndiddy 14 hours ago

    I think this interview with the uutils project lead https://www.youtube.com/watch?v=5qTyyMyU2hQ is fairly telling about the motivation behind the project. He mentions that he's not motivated by memory safety and that the GNU coreutils don't have security problems.

    The discussion gets interesting when the hosts start talking about software licensing. The GNU coreutils are of course GPL licensed, while uutils is MIT licensed. The lead says that he doesn't care about how the project is licensed, as long as it's OSI-approved, and that talking about software licenses is a waste of time. However, at one point (while the hosts are talking about reasons why a user might choose uutils instead of the GNU coreutils), one of the hosts refers to a discussion they'd had with the lead prior to the recorded interview about how car companies were using uutils for "compliance reasons", and asks if that's related to some sort of EU regulation about memory safety. The lead has to correct him, and says that the car companies weren't concerned about memory safety compliance, but GPL compliance.

    Combined with other statements he makes about purposely not looking at the original coreutils code to avoid any claims of the implementation being "tainted" by GPL code (seems like a lot of work for someone who doesn't care about software licensing), it seems like a major motivating factor behind the uutils project is to create a permissively licensed drop-in replacement for the GNU coreutils.

  • stonemetal12 a day ago

    There have been CVEs in coreutils. I am not aware of any that Rust would prevent.

lproven a day ago

I am curious -- I asked on Discourse as well...

How will this work on CPU architectures other than x86 and Arm? Ubuntu also supports ppc64le and IBM s390. Is LLVM usefully able to build binaries from Rust code for those architectures now?

  • steveklabnik 20 hours ago

    powerpc64le-unknown-linux-gnu is supported. s390x-unknown-linux-gnu isn't s390, but I think Ubuntu actually supports s390x, so I believe that case is covered as well.
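
    You can check what an installed toolchain knows how to target with rustc itself:

        $ rustc --print target-list | grep -E 'powerpc64le|s390x'
        powerpc64le-unknown-linux-gnu
        ...
        s390x-unknown-linux-gnu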

  • sophacles a day ago

    Well, the oxidizr tool discussed in the article is for making the choice between the Rust tools and the classic C ones simple. Presumably, for architectures not supported by Rust, you'd do something like "oxidizr use gnu-tools".

pizlonator a day ago

I think that all of those tools can be recompiled with Fil-C today and you get memory safety while retaining the original functionality.

  • badmintonbaseba a day ago

    What about performance?

    • ape4 a day ago

      "Fil-C is currently 1.5x slower than normal C in good cases, and about 4x slower in the worst cases. I'm actively working on performance optimizations for Fil-C, so that 4x number will go down."

      source https://github.com/pizlonator/llvm-project-deluge/blob/delug...

      • pizlonator a day ago

        That’s for compute bound workloads. None of these tools that they’re oxidizing is compute bound

        • burntsushi 21 hours ago

          For the very common case of a file already being in RAM, tools like `sort` are absolutely compute bound. On my M2 mac mini:

              $ hyperfine "LC_ALL=C sort < subtitles2018.en" "LC_ALL=C gsort < subtitles2018.en"
              Benchmark 1: LC_ALL=C sort < subtitles2018.en
                Time (mean ± σ):      2.007 s ±  0.005 s    [User: 1.940 s, System: 0.064 s]
                Range (min … max):    1.997 s …  2.015 s    10 runs
          
              Benchmark 2: LC_ALL=C gsort < subtitles2018.en
                Time (mean ± σ):     898.5 ms ±   9.7 ms    [User: 2795.8 ms, System: 93.6 ms]
                Range (min … max):   875.0 ms … 906.9 ms    10 runs
          
              Summary
                LC_ALL=C gsort < subtitles2018.en ran
                  2.23 ± 0.02 times faster than LC_ALL=C sort < subtitles2018.en
          
          Info about the tools:

              $ which sort
              /usr/bin/sort
              $ which gsort
              /opt/homebrew/bin/gsort
              $ brew info coreutils | head -n3
              ==> coreutils: stable 9.6 (bottled), HEAD
              GNU File, Shell, and Text utilities
              https://www.gnu.org/software/coreutils/
          
          There are other tools in coreutils for which this applies as well.

          The GNU tools have overall been pretty heavily optimized. Why do you think they did that if it literally didn't matter? Just for shits & giggles?

          • pizlonator 20 hours ago

            I can't remember the last time I cared about how long the sort tool took to run. Probably never.

            You should try that benchmark with sort compiled with Fil-C.

            • burntsushi 20 hours ago

              If you had said, "The perf difference isn't enough for me to care about," then I wouldn't have responded or said anything. That's totally fair and reasonable. Who am I to say what you should care about? If you want to spend more time waiting for shit to finish, that's your prerogative.

              But what you said is (emphasis mine):

              > For these tools, you won’t notice.

              > None of these tools that they’re oxidizing is compute bound

              just blatantly wrong and a >2x difference in perf is absolutely relevant and something I would notice personally. Maybe I'm the only one who likes sorting to be as fast as possible, but I'd guess not.

              > You should try that benchmark with sort compiled with Fil-C.

              Feel free to post a complete MRE for doing this and I'd be happy to run it.

              • pizlonator 20 hours ago

                I don't know what two sorts you're comparing, but I'm guessing neither one of them is compiled in Fil-C. Is it really the case that the Rust one is 2x faster? Or is it that the C one is 2x faster? Also, it's just one benchmark of sort, so for all we know the two sort implementations really have the same perf if you test a broader set of cases.

                The interesting question - going back to my original post - is what the Fil-C slow down would be. Just because a program takes 100% CPU doesn't mean it'll experience bad overheads when compiled with Fil-C.

                I don't know what you mean by "complete MRE". You can download the Fil-C binaries and point coreutils' configure script at the compiler and see what happens. It's not hard.
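
                Concretely, for an autotools project like coreutils that would be roughly the following, with the path being wherever you unpacked the Fil-C toolchain:

                    $ ./configure CC=$HOME/filc/build/bin/clang
                    $ make -j"$(nproc)"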

                • burntsushi 19 hours ago

                  > I don't know what two sorts you're comparing

                  I showed you in my original comment. Both are C. One is the `sort` that comes with macOS, as shown at `/usr/bin/sort`, and the other is from GNU coreutils.

                  > Also, it's just one benchmark of sort, so for all we know the two sort implementations really have the same perf if you test a broader set of cases.

                  Sure, you're welcome to come up with a more comprehensive benchmark suite to support YOUR claim that tools like `sort` are not "compute bound" and you won't "notice" a difference in speed. I agree that 1 benchmark is insufficient to make generalized claims about performance. But it is very obviously better than 0 benchmarks (which is the number you have provided to support your claim).

                  See also: https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)

                  In the interest of over-communicating, please note the qualification I gave originally: for files cached in RAM.

                  > I don't know what you mean by "complete MRE". You can download the Fil-C binaries and point coreutils' configure script at the compiler and see what happens. It's not hard.

                  Cool, then you should be able to show me a transcript of the precise commands necessary to do this very easily!

    • pizlonator a day ago

      For these tools, you won’t notice.

      • badmintonbaseba 21 hours ago

        What modifications are needed to run through Fil-C, if any? It looks like you forked some of the software to run through Fil-C. Are you subsetting the language, or do those libraries/applications run into some "benign" UB under normal operation that is caught by Fil-C?

        • pizlonator 20 hours ago

          > What modifications are needed to run through Fil-C, if any?

          Most likely none.

          The most likely bugs you'll encounter are due to Fil-C using musl, not glibc (that always leads to some incompatibilities for GNU code).

          > Are you subsetting the language

          No.

          > or do those libraries/applications run into some "benign" UB under normal operation that is caught by Fil-C?

          Sometimes, but rarely.

          • badmintonbaseba 20 hours ago

            Super interesting. Tempting to hook it up with conan to build our dependencies and set it up with our unit tests at least. If incompatibilities are mostly due to the libc then I don't think any of our C++ dependencies would care.

_ink_ a day ago

Please fix fractional scaling first. The performance is still bad.

  • SAI_Peregrinus 21 hours ago

    Have you tried Plasma on Wayland? I've found it has very good fractional scaling support, better than Gnome or Windows. I don't own a Mac so can't compare to MacOS, and haven't used X in years, but fractional scaling with Plasma has been problem-free for me. No weird window resizing like Windows does, no blurry text, just everything on different resolution monitors staying the same visual size.

    • _ink_ 20 hours ago

      I haven't. Thanks for the suggestion, I might give it a go. I returned to Linux this year, after 10 or so years of Windows. My hope was in 2025 we have reached a state where you can install a distro and be done with it. Seems like endless hours of tweaking are still required :(

xiphias2 21 hours ago

While changing the core packages as a rewrite looks easy ("just reimplement ls"), compatibility/stability may mean reproducing all the tiny but non-safety-critical bugs as well.

There's enough data from the Android ecosystem showing that it's much better to focus oxidisation on new software instead of old.

vimarsh6739 18 hours ago

To me, this feels less about Rust and more about moving away from copyleft.

amiga386 a day ago

Snaps were the first shot across the bow. This is another. Switch to Debian before this happens.

  • juujian a day ago

    Switched to Debian, not looking back. Everything just works, almost like Ubuntu had promised it would.

  • anilakar a day ago

    Ads in the system logs were the last straw for me.

    • everybodyknows a day ago

      Ah, you mean for their "security upgrades" to LTS versions -- presented by 'apt upgrade' since at least 20.04?

      • anilakar 5 hours ago

        Logging in triggers motd-news.service, which prints into the journal whatever ad they're currently running.

      • knowitnone 21 hours ago

        because that's a great place to put it...in the logs...when we are troubleshooting an issue \s

    • mystified5016 a day ago

      Yup, I jumped ship after that one. Previously all of my servers ran Ubuntu, now it's straight Debian or Arch in some situations.

      Canonical is just so gross at every level. I saw a job posting in my area recently and it was the slimiest corpo-speak job post I've seen in.. well, a couple of weeks at least.

Spivak 21 hours ago

Why oxidizr over the existing Debian alternatives system? It's designed for this exact use case and already works with many existing packages. The author's response in the comments was basically a non-answer, which doesn't exactly inspire confidence.

> Long-term, my concern would be that this may somewhat muddy the picture for which packages need substantive fixes. If it is extremely easy to just revert, what is the benefit to switching?

??? Is this not the ideal situation? It provides low friction both for moving to the new cool thing and for moving back to the existing tools when you have software that hard-depends on GNU coreutils. You want the changeover to be high risk because it's hard to undo? I guess that's one way to force yourself to commit, but the real users on the ground won't be happy when moving to the new LTS is substantially more work.

This would be 3 different symlink managers in Ubuntu all used for different sets of software. The alternatives system at least has the benefit of integrating tightly with apt.
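
For reference, this is roughly what the alternatives approach looks like, assuming both implementations shipped their binaries under distinct names (the paths here are hypothetical):

    # Register both implementations; the highest priority wins in auto mode
    $ sudo update-alternatives --install /usr/bin/ls ls /usr/bin/ls.gnu 50
    $ sudo update-alternatives --install /usr/bin/ls ls /usr/bin/ls.uutils 10
    # Switch between them interactively
    $ sudo update-alternatives --config ls

Note that this would require coreutils itself to stop shipping /usr/bin/ls directly, which is presumably the package-cooperation issue raised downthread.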

superkuh a day ago

Porting stable tools to an unstable (rapidly changing) language that can only successfully compile projects if the distro toolchain is rolling (or constantly updated outside of the repos every few months, à la curl|sh, rustup, etc.). Ubuntu is not a rolling distro. This is a bad match.

  • jerf a day ago

    It seems like the first step should be packaging this up as a metapackage or something for interested parties, and having at least a full unstable release cycle where it's optional and tested, before replacing such utilities in what is ostensibly an end-user distribution for less-technical people who will not be able to fix things going wrong deep in the heart of shell scripts.

    I'm all for Rusting more-or-less all the things, but Rust isn't actually magic or anything. Things aren't automatically better just because Rust. And as someone who keeps at least modest track of CVEs and such, these utilities aren't exactly throwing CVEs out left, right, front, and center. I don't like the amount of C in the world, but being run billions of times a day in every conceivable environment is itself a really, really good test. We know well from history that's still not always perfect, but honestly these aren't the things that need a priority Rusting; it's the code that doesn't fit that description that really needs it.

    • HumanOstrich a day ago

      That's a lot of text to say "rewrite everything else in Rust, just not this".

      • jerf a day ago

        No, it's a lot of text to say "before replacing so many fundamental packages in an end-user distro at least beta test it".

        By all means, rewrite it in Rust, but don't go slamming it in to Ubuntu. Debian unstable. Gentoo. Something other than what I put on my boomer-generation, stereotypical "knows nothing about computers" father-in-law's laptop so I don't have to support him every week.

        • HumanOstrich a day ago

          > By all means, rewrite it in Rust

          No

        • sophacles 21 hours ago

          Good thing the actual article is introducing a tool that makes it easy for people to switch between the Rust coreutils and the GNU ones. Almost as if it enables a widespread, opt-in beta...

  • tialaramex a day ago

    This project actually says "The current Minimum Supported Rust Version (MSRV) is 1.82.0"

    Rust 1.82 was released in October last year. It looks like this gets bumped periodically, but it isn't pulled up to the latest release, and presumably a bugfix release, if they were doing those, wouldn't change the MSRV, so that seems basically fine?
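
    If you want to check a project's declared MSRV yourself, it's the rust-version field in Cargo.toml, assuming the project sets it there; a sketch:

        $ cargo metadata --no-deps --format-version 1 | jq -r '.packages[].rust_version'
        1.82.0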

  • stonemetal12 a day ago

    Rust editions should take care of that. They currently claim they will support previous editions forever. Which is obviously silly, but hopefully ends up at least as good as GCC is at supporting old code.

    • steveklabnik a day ago

      It may sound obviously silly, but if you know how the edition system works, it’s not. Most things land in all editions, the only things that are edition specific are the “breaking” changes, and those have such limited scope that it’s not difficult to support all of them.

      • estebank a day ago

        IIRC when I checked, there were roughly 25 if-checks per edition in the compiler. I feel like we can go for a while at that rate without it being a big maintenance burden.

        • steveklabnik a day ago

          Nice, yeah that's not too bad at all.

  • bitwize a day ago

    But the Rust toolchain adopted for a given Ubuntu version can compile the Rust core utilities for that version. I don't see what the stability problem is here. Furthermore, doing this in Ubuntu will nudge other distros to do the same, furthering the long-term goal of the Rust project: to rid the world of C, C++, and all their scourges.

    • TingPing a day ago

      Rust projects do not care about the limits of a distro; they just say "use rustup", so future fixes will sometimes be left out of Ubuntu.

      • SAI_Peregrinus 19 hours ago

        That's hardly unique to Rust projects. Quite a few projects in other languages (e.g. Python) don't care about distro limitations & fixes get left out of Ubuntu. Even C has that issue. "Stable" distros only get the updates the maintainers can backport.

Y_Y a day ago

How about Ubuntu just sticks to their area of competency: repainting Debian.

Remember when they switched the shell to dash? Not to mention Upstart, Mir, Unity, Snap.

  • klysm a day ago

    Investor money throws out attempts like tentacles that get slapped away by the community. It will continue to happen.

    • AdmiralAsshat 21 hours ago

      What "investors" are pushing rust? What is the end goal?

    • actionfromafar a day ago

      Didn’t slap systemd hard enough.

      • thyristan 21 hours ago

        Upstart was like slapping with a broken hand. Hurts the slapper much more than the slapee.

  • simion314 a day ago

    > Remember when they switched the shell to dash? Not to mention Upstart, Mir, Unity, Snap.

    Upstart was used by Red Hat, so it must have been good enough, before they NIH'd it. Unity was actually better than GNOME at that time, but in the end RH NIH'd everything except Qt/KDE.

  • actionfromafar a day ago

    They should have kept at Upstart, but instead we got punished for our sins.

Marlinski a day ago

[flagged]

  • dralley 20 hours ago

    People adopted systemd because it solved their problems.

    People adopted Rust because it solved their problems.

    Open-source has always been essentially a do-ocracy. Lots of work happens primarily because the people actually doing the work want it to happen, and they decide how to get it done, too. Systemd made maintaining a distribution vastly easier and solved problems that the alternatives weren't addressing, and the maintainers are doing the work, so they made the decision. This is no different.

    It's also pretty ironic to call the Rust community culty - in comparison to GNU!

  • ape4 a day ago

    Good idea, we should rewrite systemd in rust ;/

  • knowitnone 21 hours ago

    Don't like it, don't use it - same with systemd. I'm not sure what the complaint is. When Go is used to rewrite something, nobody complains. Also, you should find out what a cult is and do some research on past cults before you start accusing a project of being a cult.

    • amiga386 20 hours ago

      We're commenting on an Ubuntu discussion where an Ubuntu developer is proposing to make this mandatory for Ubuntu. Even when the alternatives system is suggested, that's quickly dismissed, because the real aim is to boot out coreutils permanently and make written-in-Rust the default.

      I don't mind people writing things in their favourite languages; I do mind them trying to compel me to adopt them through distro shenanigans. Even though Debian adopted systemd, it didn't do it without a _lot_ of discussion and voting.

      • steveklabnik 20 hours ago

        > Even when the alternatives system is suggested, that's quickly dismissed, because the real aim is to boot out coreutils permanently and make written-in-Rust the default.

        They have good technical reasons why they aren't using alternatives at this stage, and it's also suggested that if this gets closer to being real, the alternatives system will be used.

        • yjftsjthsd-h 18 hours ago

          What are those technical reasons? The only answer I see in the linked thread is https://discourse.ubuntu.com/t/carefully-but-purposefully-ox... which points out that it requires cooperation from the existing package (fair), but also notes that "Diversions would work"... before moving on and not even hinting at why, then, diversions were not used and a new tool was thrown in the mix.

          • steveklabnik 17 hours ago

            > it requires cooperation from the existing package (fair),

            That is a technical reason. I think it's a good one, you may disagree. I think it's fair to start out this way and then eventually move into the alternative system once it's proven itself out.

            > but also notes that "Diversions would work"

            I've never even heard of Diversions, so I can't comment on that.

            • amiga386 16 hours ago

              https://wiki.debian.org/DpkgDiversions

              > Diversions allow files from a package pkg-A to be temporarily replaced by files from another package pkg-B. When pkg-B is uninstalled, the files from pkg-A are put back into place.

              > Do not attempt to divert a file which is vitally important for the system’s operation - when using dpkg-divert there is a time, after it has been diverted but before dpkg has installed the new version, when the file does not exist.
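
              As a sketch of the mechanism (the paths here are hypothetical, and ls is exactly the kind of vital file that warning is about):

                  # Move the packaged GNU ls aside and put a replacement in its place
                  $ sudo dpkg-divert --divert /usr/bin/ls.gnu --rename /usr/bin/ls
                  $ sudo ln -s /usr/bin/uu-ls /usr/bin/ls
                  # Undo: drop the replacement, then restore the diverted original
                  $ sudo rm /usr/bin/ls
                  $ sudo dpkg-divert --rename --remove /usr/bin/ls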

              • yjftsjthsd-h 16 hours ago

                Thanks, that explains it. Coreutils certainly counts as "vitally important for the system’s operation"... Though I do wonder whether it wouldn't be reasonable to fix that limitation rather than make a new tool. That said, even if that is the better solution, I do kind of get not wanting to go to that trouble for this one-off.

    • steveklabnik 20 hours ago

      > When Go is used to rewrite something, nobody complains.

      You know, before this week, I would have said that, but given the shitstorm over the TypeScript compiler in the past few days...

blankx32 a day ago

We can’t leave things alone

Kwpolska a day ago

On the one hand, having less RMS and GNU software in the world is better.

On the other hand, I'd prefer for basic system tools to be provided by an established and well-funded project, and not just switch to something written in Rust because Rust is the hype these days.

  • yjftsjthsd-h a day ago

    > On the one hand, having less RMS and GNU software in the world is better.

    Why?

    • Kwpolska 21 hours ago

      For RMS specifically: https://stallman-report.org/

      The GNU project has an extremely utopian and unrealistic vision of open-source.

      • amiga386 20 hours ago

        For the rebuttal to the Stallman Report: https://stallmansupport.org/

        For the details on the author of the Stallman Report -- Drew Devault -- who tried to pretend he didn't write it, see https://dmpwn.info/

        Also, please consider not using the term "open-source". It's a term designed to pander to businessmen, who are afraid of the Free Software movement's goals, and to muddy the waters as to what rights you should have with such software.

        • Kwpolska 18 hours ago

          The term "Free Software" sucks from the marketing point-of-view, as ~everyone except FSF fanboys understands it to mean "price = $0". Sorry, that ship has sailed.

      • sham1 21 hours ago

        Since we're talking about the "utopian and unrealistic vision" of the GNU project, I'd just like to pedantically note that the GNU project has no vision for "open source", only for Free Software.

        Indeed, the Chief GNUisance himself is very adamant about that[0]. Of course, the GNU project is a lot bigger than just RMS, and we shouldn't condemn the project and its struggle against non-free software just because RMS is problematic.

        [0]: <https://www.gnu.org/philosophy/open-source-misses-the-point....>