dvratil a day ago

The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions, functions returning bool, functions returning 0 on success, functions returning 0 on error, functions returning -1 on error, functions returning negative errno on error, functions taking optional pointer to bool to indicate error (optionally), functions taking reference to std::error_code to set an error (and having an overload with the same name that throws an exception on error if you forget to pass the std::error_code)...I understand there's 30 years of history, but it still is annoying that even the standard library is not consistent (or striving for consistency).

Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".

  • jeroenhd a day ago

    The result type does make for some great API design, but SerenityOS shows that this same paradigm also works fine in C++. That includes something similar to the ? operator, though it's closer to a raw function call.

    SerenityOS is the first functional OS (as in "boots on actual hardware and has a GUI") I've seen that dares question the 1970s int main() using modern C++ constructs instead, and the API is simply a lot better.

    I can imagine someone writing a better standard library for C++ that works a whole lot like Rust's standard library does. Begone with the archaic integer types, make use of the power your language offers!

    If we're comparing C++ and Rust, I think the ease of use of enum classes/structs is probably a bigger difference. You can get pretty close, but Rust avoids a lot of boilerplate that makes them quite usable, especially when combined with the match keyword.

I think c++, the language, is ready for the modern world. However, c++, the community, seems to be stuck at least 20 years in the past.

    • jchw a day ago

      Google has been doing a very similar, but definitely somewhat uglier, thing with StatusOr<...> and Status (as seen in absl and protobuf) for quite some time.

      A long time ago, there was talk about a similar concept for C++ based on exception objects in a more "standard" way that could feasibly be added to the standard library, the expected<T> class. And... in C++23, std::expected does exist[1], and you don't need to use exception objects or anything awkward like that, it can work with arbitrary error types just like Result. Unfortunately, it's so horrifically late to the party that I'm not sure if C++23 will make it to critical adoption quickly enough for any major C++ library to actually adopt it, unless C++ has another massive resurgence like it did after C++11. That said, if you're writing C++ code and you want a "standard" mechanism like the Result type, it's probably the closest thing there will ever be.

      [1]: https://en.cppreference.com/w/cpp/utility/expected

      • CJefferson 20 hours ago

I had a look. In classic C++ style, if you use *x to get the ‘expected’ value while it’s actually holding the error object (you forgot to check first and return the error), it’s undefined behaviour!

        Messing up error handling isn’t hard to do, so putting undefined behaviour here feels very dangerous to me, but it is the C++ way.

        • dietr1ch 5 hours ago

`StatusOr<T>::operator*` there is akin to `Result<T, _>::unwrap()`. In C++, unwrapping looks like dereferencing a pointer, which already reads as scary and likely UB.

But as you learn to work with StatusOr you'll end up just using ASSIGN_OR_RETURN every time, and dereferencing remains scary. I guess the complaint is that C++ won't guarantee that execution will stop, but that's the C++ way: drop all safety checks in `StatusOr::operator*` to gain performance.

        • jchw 19 hours ago

          The reason it works this way is there's legitimately no easy way around it. You're not guaranteed a reasonable zero value for any type, so you can't do the slightly better Go thing (defined behavior but still wrong... Not great.) and you certainly can't do the Rust thing, because... There's no pattern matching. You can't conditionally enter a branch based on the presence of a value.

          There really is no reasonable workaround here, the language needs to be amended to make this safe and ergonomic. They tried to be cheeky with some of the other APIs, like std::variant, but really the best you can do is chuck the conditional branch into a lambda (or other function-based implementation of visitors) and the ergonomics of that are pretty unimpressive.

          Edit: but maybe fortune will change in the future, for anyone who still cares:

          https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...

          • CJefferson 19 hours ago

You could assert. You could throw. I can’t understand how, in this modern age where so many programs end up getting hacked, introducing more UB seems like a good idea.

This is one of the major reasons I switched to Rust, just to escape spending my whole life worrying about bugs caused by UB.

            • jchw 18 hours ago

              Assertions are debug-only. Exceptions are usually not guaranteed to be available and much of the standard library doesn't require them. You could std::abort, and that's about it.

              I think the issue is that this just isn't particularly good either. If you do that, then you can't catch it like an exception, but you also can't statically verify that it won't happen.

              C++ needs less of both undefined behavior and runtime errors. It needs more compile-time errors. It needs pattern matching.

              • CJefferson 17 hours ago

                I agree these things would be better, but I don’t understand how anyone can think UB is better than abort.

                (Going to moan for a bit, and I realise you aren’t responsible for the C++ standards mess!)

                I have been hearing for about… 20 years now that UB gives compilers and tools the freedom to produce any error catching they like, but all it seems to have done in the main is give them the freedom to produce hard to debug crash code.

                You can of course usually turn on some kind of “debug mode” in some compilers, but why not just enforce that as standard? Compilers would still be free to add a “standards non-compliant” go fast mode if they like.

                • affyboi 6 hours ago

                  > but why not just enforce that as standard

                  I don’t think people want that as standard. The whole point of using C++ tends to be because you can do whatever you need to for the sake of performance. The language is also heavily driven by firms that need extreme performance (because otherwise why not use a higher level language)

There are knobs like stdlib assertions and ubsan, but those are opt-in because there’s a cost to them. Part of it is also the commitment to backwards compatibility: code that compiled before should generally compile now (though there are exceptions to that unofficial rule).

                  • jchw 34 minutes ago

                    There does not need to be an additional cost for this.

                    Most users will do this:

                    1. Check if there is a value

                    2. Get the value

                    There is nothing theoretically preventing the compiler from enforcing that step 1 happens before step 2, especially if the compiler is able to combine the control flow branch with the process of conditionally getting the value. The practical issue is that there's no way to express this in C++ at all. The best you can do is the visitor pattern, which has horrible ergonomics and you can only hope it doesn't cause worse code generation too.

                    Some users want to do this:

                    1. Grab the value without checking to see if it's valid. They are sure it will be valid and can't or don't want to eat the cost of checking.

                    There is nothing theoretically preventing this from existing as a separate method.

                    I'm not a rust fanboy (seriously, check my GitHub @jchv and look at how much Rust I write, it's approximately zero) but Rust has this solved six ways through Sunday. It can do both of these cases just fine. The only caveat is that you have to wrap the latter case in an unsafe, but either way, you're not eating any costs you don't want to.

                    C++ can do this too. C++ has an active proposal for a feature that can fix this problem and make much more ergonomic std::variant possible, too.

                    https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p26...

                    Of course, this is one single microcosm in the storied history of C++ failing to adequately address the problem of undefined behavior proliferating the language, so I don't have high hopes.

                • bluGill 5 hours ago

A lot of UB is things you wouldn't do anyway. While it is possible to define divide by zero or integer overflow, what would it mean? If your code does either of those things you have a bug (a few encryption algorithms depend on specific overflow behavior - if your language promises that same behavior it is useful).

Since CPUs handle such things differently, whatever you define to happen means the compiler has to insert an if to check on any CPU that doesn't work the way you defined - all for something you probably are not doing anyway. The cost is too high in a tight loop when you know this won't even happen (but the compiler does not).


                  • jchw an hour ago

                    This is a bad answer too, IMO.

                    I think there is a solid case for the existence of undefined behavior; even Rust has it, it's nothing absurd in concept, and you do describe some reasoning for why it should probably exist.

                    However, and here's the real kicker, it really does not need to exist for this case. The real reason it exists for this case is due to increasingly glaring deficiencies in the C++ language, namely, again, the lack of any form of pattern matching for control flow. Because of this, there's no way for a library author, including the STL itself, to actually handle this situation succinctly.

                    Undefined behavior indeed should exist, but not for common cases like "oops, I didn't check to see if there was actually a value here before accessing it." Armed with a moderately sufficient programming language, the compiler can handle that. Undefined behavior should be more like "I know you (the compiler) can't know this is safe, but I already know that this unsafe thing I'm doing is actually correct, so don't generate safeguards for me; let what happens, happen." This is what modern programming languages aim to do. C++ does that for shit like basic arithmetic, and that's why we get to have the same fucking CVEs for 20+ years, over and over in an endless loop. "Just get better at programming" is a nice platitude, but it doesn't work. Even if it was possible for me to become absolutely perfect and simply just never make any mistakes ever (lol) it doesn't matter because there's no chance in hell you'll ever manage that across a meaningful segment of the industry, including the parts of the industry you depend on (like your OS, or cryptography libraries, and so on...)

                    And I don't think the issue is that the STL "doesn't care" about the possibility that you might accidentally do something that makes no sense. Seriously, take a look at the design of std::variant: it is pretty obvious that they wanted to design a "safe" union. In fact, what the hell would the point of designing another unsafe union be in the first place? So they go the other route. std::variant has getters that throw exceptions on bad accesses instead of undefined behavior. This is literally the exact same type of problem that std::expected has. std::expected is essentially just a special case of a type-safe union with exactly two possible values, an expected and unexpected value (though since std::variant is tagged off of types, there is the obvious caveat that std::expected isn't quite a subset of std::variant, since std::expected could have the same type for both the expected and unexpected values.)

                    So, what's wrong? Here's what's wrong. C++ Modules were first proposed in 2004[1]. C++20 finally introduced a version of modules and lo and behold, they mostly suck[2] and mostly aren't used by anyone (Seriously: they're not even fully supported by CMake right now.) Andrei Alexandrescu has been talking about std::expected since at least 2018[3] and it just now finally managed to get into the standard in C++23, and god knows if anyone will ever actually use it. And finally, pattern matching was originally proposed by none other than Bjarne himself (and Gabriel Dos Reis) in 2019[4] and who knows when it will make it into the standard. (I hope soon enough so it can be adopted before the heat death of the Universe, but I think that's only if we get exceptionally lucky.)

                    Now I'm not saying that adding new and bold features to a language as old and complex as C++ could possibly ever be easy or quick, but the pace that C++ evolves at is sometimes so slow that it's hard to come to any conclusion other than that the C++ standard and the process behind it is simply broken. It's just that simple. I don't care what changes it would take to get things moving more efficiently: it's not my job to figure that out. It doesn't matter why, either. The point is, at the end of the day, it can't take this long for features to land just for them to wind up not even being very good, and there are plenty of other programming languages that have done better with less resources.

                    I think it's obvious at this point that C++ will never get a handle on all of the undefined behavior; they've just introduced far too much undefined behavior all throughout the language and standard library in ways that are going to be hard to fix, especially while maintaining backwards compatibility. It should go without saying that a meaningful "safe" subset of C++ that can guarantee safety from memory errors, concurrency errors or most types of undefined behavior is simply never going to happen. Ever. It's not that it isn't possible to do, or that it's not worth doing, it's that C++ won't. (And yes, I'm aware of the attempts at this; they didn't work.)

                    The uncontrolled proliferation of undefined behavior is ultimately what is killing C++, and a lot of very trivial cases could be avoided, if only the language was capable of it, but it's not.

                    [1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n17...

                    [2]: https://vector-of-bool.github.io/2019/01/27/modules-doa.html

                    [3]: https://www.youtube.com/watch?v=PH4WBuE1BHI

                    [4]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p13...

                    • bluGill 11 minutes ago

                      I cannot follow your rant... I'll do my best to respond, but I'm probably not understanding something.

Divide by zero must be undefined behavior in any performant language. On x86 you either have an if before running the divide (which of course in some cases the compiler can optimize out, but only if it can determine the value is not zero); or the CPU will trap into the OS - different OSes handle this in different ways, but most not in a way that makes it possible to figure out where you were and thus do something about it. This just came up in the C++ std-proposals mailing list in the past couple weeks.

AFAIK all common CPUs have the same behavior on integer overflow (two's complement). However in almost all cases (again, some encryption code is an exception) that behavior is useless to real code, so if it happens your code has a bug either way. Thus we may as well let compilers optimize assuming it cannot happen, since if it does you have a bug no matter what we define it as. (C++ is used on CPUs that are not two's complement as well, but we could call this implementation defined or unspecified, and it doesn't change that you have a bug if you invoke it.)

For std::expected - new real-world benchmarks, with optimized exception handlers, are showing that exceptions are faster than systems that use things like expected. Microbenchmarks that show exceptions are slower are easy to create, but real-world exceptions that unwind more than a couple of function calls show different results.

                      As for modules, support is finally here and early adopters are using it. The road was long, but it is finally proving it worked.

Long roads are a good thing. C++ has avoided a lot of bad designs by thinking about problems for a long time. Details often matter, and move-fast languages tend to run into problems when something doesn't work as well as they wanted. I'm glad C++ standardization is slow - it is already a mess without adding more half-baked features to the language.

              • lallysingh 7 hours ago

                Culturally, I think C++ has a policy of "there's no single right answer." Which leads to there being no wrong answers. We just need more answers so everyone's happy. Which is worse.

          • mgaunard 10 hours ago

            Of course you can do the Rust thing, it's just taking a function object.

      • a_t48 a day ago

        There’s a few backports around, not quite the same as having first class support, though.

        • jchw 20 hours ago

          I believe the latest versions of GCC, Clang, MSVC and XCode/AppleClang all support std::expected, in C++23 mode.

    • jll29 15 hours ago

> I think c++, the language, is ready for the modern world. However, c++, the community, seems to be stuck at least 20 years in the past.

Good point. A language that gets updated by adding a lot of features is DIVERGING from a community mostly made up of people who still use a lot of the C baggage in C++, with only a few folks at the other end of the spectrum using heavy template abstraction.

Since in larger systems you will want to re-use a lot of code via open source libraries, you are inevitably stuck in not just one past, but several versions of older C++, depending on when the code to be re-used was written, which C++ standard was stable enough then, and which parts of it the author adopted.

Not to speak of the paradigm choice to be made (object-oriented versus functional versus generic programming w/ templates).

It's easier to have, like Rust offers, a single way of doing things properly. (But what I miss in Rust is a single streamlined standard library - an organized class library, like Java has had from its early days; Rust instead feels like "a pile of crates").

      • pjmlp 13 hours ago

        Just give Rust 36 years of field use, to see how it goes.

        • timschmidt 10 hours ago

36 years is counting from the first CFront release. Counting the same way for Rust, it's been around since 2006. It's got almost 20 years under its belt already.

          edit: what's with people downvoting a straight fact?

          • d_tr 9 hours ago

            Rust 0.1, the first public release, came out in January 2012. CFront 1.0, the first commercial release, came out in 1985.

            The public existence of Rust is 13 years, during which computing has not changed that much to be honest. Now compare this to the prehistory that is 1985, when CFront came out, already made for backwards compatibility with C.

            • timschmidt 8 hours ago

              I grew up with all the classic 8 bit micros, and to be honest, it doesn't feel like computing has changed at all since 1985. My workstation, while a billion times faster, is still code compatible with a Datapoint 2200 from 1970.

              The memory model, interrupt model, packetized networking, digital storage, all function more or less identically.

              In embedded, I still see Z80s and M68ks like nothing's changed.

              I'd love to see more concrete implementations of adiabatic circuits, weird architectures like the mill, integrated FPGAs, etc. HP's The Machine effort was a rare exciting new thing until they walked back all the exciting parts. CXL seems like about the most interesting new thing in a bit.

              • mazurnification 6 hours ago

                Does GPU thingy count as something that has changed with computing?

              • qznc 7 hours ago

                Today a byte is 8 bits. That was not always the case back then, for example.

                • timschmidt 7 hours ago

                  > I grew up with all the classic 8 bit micros

                  Meaning that all the machines I've ever cared about have had 8 bit bytes. The TI-99/4A, TRS-80, Commodore 64 and 128, Tandy 1000 8088, Apple ][, Macintosh Classic, etc.

                  Many were launched in the late 70s. By 1985 we were well into the era of PC compatibles.

                  • bluGill 5 hours ago

in 1985 PC compatibles were talked about, but systems like VAXes and mainframes were still very common and considered the real computers, while PCs were toys for executives. PCs had already shown enough value (via word processors and spreadsheets) that everyone knew they were not going away. PCs lacked things like multi-tasking that even then "real" computers had had for decades.

                    • timschmidt 5 hours ago

                      > in 1985 PC compatibles were talked about

                      My https://en.wikipedia.org/wiki/Tandy_1000 came out in 1984. And it was a relatively late entry to the market, it was near peak 8088 with what was considered high end graphics and sound for the day, far better than the IBM PC which debuted in 1981 and only lasted until 1987.

          • pjmlp 9 hours ago

            Because it is counting since CFront 2.0, the first official release with industry use in UNIX systems.

            So that would be Rust 1.0, released in 2015, not 2006, putting it down to a decade.

And the point still stands when looking at any ecosystem that has been in use long enough with strong backwards compatibility - not only the language, the whole ecosystem. Eventually editions alone won't cut it, and just like those languages, Rust will gain its own warts.

            • timschmidt 9 hours ago

              Fair enough. I can cop to getting the CFront date wrong. Still, a decade since 1.0 is non-trivial.

              > eventually editions alone won't make it, and just like those languages, Rust will gain its own warts.

              That's possible. Though C++ hasn't had editions, or the HLIR / MIR separation, the increased strictness, wonderful tooling, or the benefit of learning from the mistakes made with C++. Noting that, it seems reasonable to expect Rust to collect less cruft and paint itself into fewer corners over a similar period of time. Since C++ has been going for 36 years, it seems Rust will outlive me. Past that, I'm not sure I care.

              • pjmlp 8 hours ago

C++ editions are -std=something; people keep forgetting Rust editions are quite limited in what they actually allow in grammar and semantic changes across versions, and they don't cover standard library changes.

IDEs are wonderful tooling; maybe people should get their heads outside UNIX CLIs and MS-DOS-like TUIs.

                Then there is the whole ecosystem of libraries, books, SDKs and industry standards.

                • timschmidt 8 hours ago

                  I'm not sure who in your mind is forgetting that, or what the rest of your comment means to communicate.

                  Who are you speaking to who hasn't explored all those things in depth?

                  I see Rust's restrictions as a huge advantage over C++ here. Even with respect to editions. Rust has always given me the impression of a language designed from the start to be approximately what C++ is today, without the cruft, in which safety is opt-out, not opt-in. And the restrictions seem more likely to preserve that than not.

                  C/C++ folks seem to see Rust's restrictions as anti-features without realizing that C/C++'s lack of restriction resulted in the situation they have today.

                  I only maintain a few projects in each language, so I haven't run into every sort of issue for either, but that's very much how it feels to me still, several years and several projects in.

                  • pjmlp 6 hours ago

Many of the members of the Rust Evangelism Strike Force, as the main audience. That is who it is targeted at, given the usual kind of content that some write about.

                    I agree that Rust is designed to be like C++ is today, without the cruft, except all languages if they survive long enough in the market, beyond the adoption curve, they will eventually get their own cruft.

Not realizing this only means that 30 years from now, if current languages haven't yet been fully replaced by AI-based tools, there will be some language designed to be like Rust is then, but without the cruft.

The strength of C++ today is in the ecosystem; that is why we reach for it: having to write CUDA or DirectX, maybe diving into the innards of Java, the CLR, V8, GCC, LLVM, doing HPC with OpenACC, OpenMP, MPI, Metal, Unreal, Godot, Unity.

Likewise I don't reach for C for fun - the less the merrier - but rather for POSIX, OpenGL, Vulkan, ...

                    • timschmidt 6 hours ago

                      > Many of the members of the Rust Evangelism Strike Force, as main audience.

                      Well I'm not them. I'm just a regular old software developer.

                      > The strength of C++ code today is on the ecosystem

                      Ecosystem is why I jumped ship from C++ to Rust. The difference in difficulty integrating a random library into my project is night and day. What might take a week or a month in C++ (integrating disparate build systems, establishing types and lifetimes of library objects and function calls, etc) takes me 20 minutes in Rust. And in general I find the libraries to be much smaller, more modular, and easier to consume piecemeal rather than a whole BOOST or QT at a time.

                      And while the Rust libraries are younger, I find them to be more stable, and often more featureful and with better code coverage. The language seems to lend itself to completionism.

      • mgaunard 10 hours ago

        A lot of people using C++ don't actually use any libraries. I've observed the opposite with Rust.

        People choose C++ because it's a flexible language that lets you do whatever you want. Meanwhile Rust is a constrained and opinionated thing that only works if you do things "the right way".

        • tialaramex 10 hours ago

          > People choose C++ because it's a flexible language that lets you do whatever you want.

          You went on a bit too long. C++ lets you do whatever. Whether you wanted that is not its concern. That's handily illustrated in Matt Godbolt's talk - you provided a floating point value but that's inappropriate? Whatever. Negative values for unsigned? Whatever.

          This has terrible ergonomics and the consequences were entirely predictable.

    • moomin 10 hours ago

      I’ve seen it argued that, in practice, there’s two C++ communities. One is fundamentally OK with constantly upgrading their code (those with enterprise refactoring tools are obviously in this camp, but it’s more a matter of attitude than technology) and those that aren’t. C++ is fundamentally caught between those two.

      • AndrewStephens 6 hours ago

        This is the truth. I interview a lot of C++ programmers and it amazes me how many have gone their whole careers barely touching C++11 let alone anything later. The extreme reach of C++ software (embedded, UIs, apps, high-speed networking, services, gaming) is both a blessing and a curse and I understand why the committee is hesitant to introduce breaking changes at the expense of slow progress on things like reflection.

    • Rucadi 21 hours ago

I created a library "cpp-match" that tries to bring the "?" operator into C++; however, it uses a GNU-specific feature (https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html). I did support MSVC by falling back to exceptions for the short-circuit mechanism.

      However it seems like C++ wants to only provide this kind of pattern via monadic operations.

      • tialaramex 10 hours ago

        You can't really do Try (which is that operator's name in Rust) because C++ lacks a ControlFlow type which is how Try reflects the type's decision about whether to exit early.

        You can imitate the beginner experience of the ? operator as magically handling trivial error cases by "just knowing" what should happen, but it's not the same thing as the current Try feature.

        Barry Revzin has a proposal for some future C++ (lets say C++ 29) to introduce statement expressions, the syntax is very ugly even by C++ standards but it would semantically solve the problem you had.

    • d_tr 21 hours ago

      C++ carries so much on its back and this makes its evolution over the past decade even more impressive.

      • pjmlp 13 hours ago

Yes, people keep forgetting C++ was made public with CFront 2.0 back in 1989 - 36 years of backwards compatibility, to a certain extent.

        • bluGill 5 hours ago

C++ is C compatible, so more than 50 years of backward compatibility. Even today the vast majority of C programs can be compiled as C++ and they just work. Often such programs run faster, because C++ has a few additional rules the compiler can use to optimize better; in practice C programs generally obey the stronger rules anyway (but of course when they don't, the program is wrong).

          • KerrAvon an hour ago

<pedantry corner>CFront was never compatible with K&R C to the best of my knowledge, so the actual start date would be whenever C89-style code came into widespread use; I'm not sure how long before 1989 that was.

  • zozbot234 a day ago

    > The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions

    This isn't really true since Rust has panics. It would be nice to have out-of-the-box support for a "no panics" subset of Rust, which would also make it easier to properly support linear (no auto-drop) types.

    • bionhoward 6 hours ago

      This is already a thing, I do this right now. You configure the linter to forbid panics, unwraps, and even arithmetic side effects at compile time.

You can configure your lints in your workspace-level Cargo.toml (the folder of crates):

  [workspace.lints.clippy]
  pedantic = { level = "warn", priority = -1 }
  # arithmetic_side_effects = "deny"
  unwrap_used = "deny"
  expect_used = "deny"
  panic = "deny"

then in your crate Cargo.toml:

  [lints]
  workspace = true

      Then you can’t even compile the code without proper error handling. Combine that with thiserror or anyhow with the backtrace feature and you can yeet errors with “?” operators or match on em, map_err, map_or_else, ignore them, etc

      [1] https://rust-lang.github.io/rust-clippy/master/index.html#un...

    • kelnos a day ago

      I wish more people (and crate authors) would treat panic!() as it really should be treated: only for absolutely unrecoverable errors that indicate that some sort of state is corrupted and that continuing wouldn't be safe from a data- or program-integrity perspective.

      Even then, though, I do see a need to catch panics in some situations: if I'm writing some sort of API or web service, and there's some inconsistency in a particular request (even if it's because of a bug I've written), I probably really would prefer only that request to abort, not for the entire process to be torn down, terminating any other in-flight requests that might be just fine.

      But otherwise, you really should just not be catching panics at all.

      • monkeyelite 9 hours ago

        > I probably really would prefer only that request to abort, not for the entire process to be torn down,

        This is a sign you are writing an operating system instead of using one. Your web server should be handling requests from a pool of processes - so that you get real memory isolation and can crash when there is a problem.

        • tsimionescu 9 hours ago

          Even if you used a pool of processes, that's still not one process per request, and you still don't want one request crashing to tear down unrelated requests.

          • monkeyelite 9 hours ago

            I question both things. I would first of all handle each request in its own process.

            If there was a special case that would not work, then the design dictates that requests are not independent and there must be risk of interference (they are in the same process!)

            What I definitely do not want is a bug ridden “crashable async sub task” system built in my web program.

            • tsimionescu 8 hours ago

              This is simply a wrong idea about how to write web servers. You're giving up scalability massively, only to gain a minor amount of safety - one that is virtually irrelevant in a memory safe language, which you should anyway use. The overhead of process-per-request, or even thread-per-request, is absurd if you're already using a memory safe language.

              • monkeyelite 8 hours ago

                > You're giving up scalability massively

                you’re vastly overestimating the overhead of processes and the number of simultaneous web connections.

                > only to gain a minor amount of safety

                What you’re telling me is performance (memory?) is such a high priority you’re willing to make correctness and security tradeoffs.

                And I’m saying that’s ok, but one of those tradeoffs is that a crash might bring down more than one request.

                > one that is virtually irrelevant in a memory safe language

                Your memory safe language uses C libraries in its process.

                Memory safe languages have bugs all the time. The attack surface is every line of your program and runtime.

                Memory is only one kind of resource and privilege. Process isolation is key for managing resource access - for example file descriptors.

                Chrome is a case study of these principles. Everybody thought isolating JS and HTML pages should be easy; nobody could get it right, and Chrome instead wrapped each page in a process.

                • simiones 7 hours ago

                  Please find one web server being actively developed using one process per request.

                  Handling thousands of concurrent requests is table stakes for a simple web server. Handling thousands of concurrent processes is beyond most OSs. The context switching overhead alone would consume much of the CPU of the system. Even hundreds of processes will mean a good fraction of the CPU being spent solely on context switching - which is a terrible place to be.

                  • monkeyelite 4 hours ago

                    > Handling thousands of concurrent processes is beyond most OS

                    It works fine on Linux - the operating system for the internet. Have you tried it?

                    > good fraction of the CPU being spent solely on context switching

                    I was waiting for this one. Threads and processes do the same amount of context switching. The overhead of a process switch is a little higher; the main cost is memory.

                  • nosefrog 7 hours ago

                    We did that at Dropbox in Python for a while. Though they switched to async after I left.

                • kevincox 8 hours ago

                  > you’re vastly over estimating the overhead of processes and number of simultaneous web connections.

                  It's less the actual overhead of the process but the savings you get from sharing. You can reuse database connections, have in-memory caches, in-memory rate limits and various other things. You can use shared memory which is very difficult to manage or an additional common process, but either way you are effectively back to square one with regards to shared state that can be corrupted.

                  • monkeyelite 4 hours ago

                    You certainly can get savings. I question how often you need that.

                    I just said that one of the costs of those savings is that a crash may bring down multiple requests, and you should design with that tradeoff in mind.

      • willtemperley 13 hours ago

        Using a Rust lib from Swift on macOS I definitely want to catch panics - to access security scoped resources in Rust I need the Rust code to execute in process (I believe) but I’d also like it not to crash the entire app.

      • wyager 18 hours ago

        > only for absolutely unrecoverable errors

        Unfortunately even the Rust core language doesn't treat them this way.

        I think it's arguably the single biggest design mistake in the Rust language. It prevents a ton of useful stuff like temporarily moving out of mutable references.

        They've done a shockingly good job with the language overall, but this is definitely a wart.

      • j-krieger 7 hours ago

        Honestly, I don't think libraries should ever panic. Just return an UnspecifiedError with some sort of string. I work daily with Rust, but I wish no_std and arbitrary no_panic had better support.

        • burntsushi 3 hours ago

          Example docs for `foo() -> Result<(), UnspecifiedError>`:

              # Errors
          
              `foo` returns an error called `UnspecifiedError`, but this only
              happens when an anticipated bug in the implementation occurs. Since
              there are no known such bugs, this API never returns an error. If
              an error is ever returned, then that is proof that there is a bug
              in the implementation. This error should be rendered differently
              to end users to make it clear they've hit a bug and not just a
              normal error condition.
          
          Imagine if I designed `regex`'s API like this. What a shit show that would be.

          If you want a less flippant take down of this idea and a more complete description of my position, please see: https://burntsushi.net/unwrap/

          > Honestly, I don't think libraries should ever panic. Just return an UnspecifiedError with some sort of string.

          The latter is not a solution to the former. The latter is a solution to libraries having panicking branches. But panics or other logically incorrect behavior can still occur as a result of bugs.

      • tcfhgj a day ago

        would you consider panics acceptable when you think it cannot panic in practice? e.g. unwrapping/expecting a value for a key in a map when you inserted that value before and know it hasn't been removed?

        you could still get a panic, though, if your assumptions turn out to be wrong
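        Concretely, the panicking and panic-free styles for that map case look like this (std-only sketch):

        ```rust
        use std::collections::HashMap;

        fn main() {
            let mut m = HashMap::new();
            m.insert("k", 1);

            // Panicking style: fine iff the "key was inserted" invariant truly holds.
            let v = *m.get("k").expect("k was inserted above");
            assert_eq!(v, 1);

            // Panic-free style: surface a violated invariant as an Err instead.
            let r: Result<i32, String> = match m.get("missing") {
                Some(&v) => Ok(v),
                None => Err("bug: expected key missing from map".to_string()),
            };
            assert!(r.is_err());
        }
        ```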

        • nextaccountic 16 hours ago

          Obviously yes. For the same reason it's acceptable that myvec[i] panics (it panics if i is out of bounds, but you already established that i is in bounds) and that a / b panics for integers a and b (it panics if b is zero, but if your code is not buggy you already checked that b is nonzero before dividing, right?)

          Panic is absolutely fine for bugs, and it's indeed what should happen when code is buggy. That's because buggy code can make absolutely no guarantees on whether it is okay to continue (arbitrary data structures may be corrupted for instance)

          Indeed it's hard to "treat an error" when the error means code is buggy. Because you can rarely do anything meaningful about that.

          This is of course a problem for code that can't be interrupted.. which include the Linux kernel (they note the bug, but continue anyway) and embedded systems.

          Note that if panic=unwind you have the opportunity to catch the panic. This is usually done by systems that process multiple unrelated requests in the same program: in this case it's okay if only one such request will be aborted (in HTTP, it would return a 5xx error), provided you manually verify that no data structure shared by requests would possibly get corrupted. If you do one thread per request, Rust does this automatically; if you have a smaller threadpool with an async runtime, then the runtime need to catch panics for this to work.
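          A minimal sketch of that request-boundary pattern with std::panic::catch_unwind (handle_request and the status strings are invented for illustration):

          ```rust
          use std::panic;

          fn handle_request(i: usize) -> Result<&'static str, String> {
              // Catch a panic at the request boundary so one buggy request
              // becomes a 5xx instead of tearing the whole process down.
              panic::catch_unwind(move || {
                  let data: Vec<i32> = vec![1, 2, 3];
                  let _ = data[i]; // panics if this request's index is out of bounds
                  "200 OK"
              })
              .map_err(|_| "500 Internal Server Error".to_string())
          }

          fn main() {
              assert_eq!(handle_request(0), Ok("200 OK"));
              assert!(handle_request(99).is_err()); // the panic was caught
          }
          ```

          This only works with panic=unwind; under panic=abort the process dies regardless, which is why shared state reachable from multiple requests still has to be audited for corruption.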

          • monkeyelite 9 hours ago

            > Note that if panic=unwind you have the opportunity to catch the panic.

            And now your language has exceptions - which break control flow and make reasoning about a program very difficult - and hard to optimize for a compiler.

        • conradludgate a day ago

          Not the same person, but I first try and figure out an API that allows me to not panic in the first place.

          Panics are a runtime memory safe way to encode an invariant, but I will generally prefer a compile time invariant if possible and not too cumbersome.

          However, yes I will panic if I'm not already using unsafe and I can clearly prove the invariant I'm working with.

        • pdimitar a day ago

          I don't speak for anyone else but I'm not using `unwrap` and `expect`. I understand the scenario you outlined but I've accepted it as a compromise and will `match` on a map's fetching function and will have an `Err` branch.

          I will fight against program aborts as hard as I possibly can. I don't mind boilerplate to be the price paid and will provide detailed error messages even in such obscure error branches.

          Again, speaking only for myself. My philosophy is: the program is no good for me dead.

          • von_lohengramm a day ago

            > the program is no good for me dead

            That may be true, but the program may actually be bad for you if it does something unexpected due to an unforeseen state.

            • pdimitar a day ago

              Agreed, that's why I don't catch panics either -- if we get to that point I'm viewing the program as corrupted. I'm simply saying that I do my utmost to never use potentially panicking Rust API and prefer to add boilerplate for `Err` branching.

    • codedokode a day ago

      It's pretty difficult to have no panics, because many functions allocate memory and what are they supposed to do when there is no memory left? Also many functions use addition and what is one supposed to do in case of overflow?

      • Arnavion a day ago

        >many functions allocate memory and what are they supposed to do when there is no memory left?

        Return an AllocationError. Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. It's now trying to add in explicit allocators and allocation failure handling (A:Allocator type param) at the cost of splitting the ecosystem (all third-party code, including parts of libstd itself like std::io::Read::read_to_end, only work with A=GlobalAlloc).

        Zig for example does it right by having explicit allocators from the start, plus good support for having the allocator outside the type (ArrayList vs ArrayListUnmanaged) so that multiple values within a composite type can all use the same allocator.

        >Also many functions use addition and what is one supposed to do in case of overflow?

        Return an error ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) or a signal that overflow occurred ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ). Or use wrapping addition ( https://doc.rust-lang.org/stable/std/primitive.i64.html#meth... ) if that was intended.

        Note that for the checked case, it is possible to have a newtype wrapper that impls std::ops::Add etc, so that you can continue using the compact `+` etc instead of the cumbersome `.checked_add(...)` etc. For the wrapping case libstd already has such a newtype: std::num::Wrapping.

        Also, there is a clippy lint for disallowing `+` etc ( https://rust-lang.github.io/rust-clippy/master/index.html#ar... ), though I assume only the most masochistic people enable it. I actually tried to enable it once for some parsing code where I wanted to enforce checked arithmetic, but it pointlessly triggered on my Checked wrapper (as described in the previous paragraph) so I ended up disabling it.
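        Such a newtype wrapper might look roughly like this (a hypothetical Checked type, not something from libstd):

        ```rust
        use std::ops::Add;

        // Hypothetical "Checked" newtype: `+` stays compact but returns a
        // Result instead of panicking (debug) or wrapping (release) on overflow.
        #[derive(Debug, Clone, Copy, PartialEq)]
        struct Checked(i64);

        impl Add for Checked {
            type Output = Result<Checked, &'static str>;
            fn add(self, rhs: Checked) -> Self::Output {
                self.0.checked_add(rhs.0).map(Checked).ok_or("overflow")
            }
        }

        fn main() {
            assert_eq!(Checked(1) + Checked(2), Ok(Checked(3)));
            assert_eq!(Checked(i64::MAX) + Checked(1), Err("overflow"));
        }
        ```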

        • kllrnohj 16 hours ago

          > Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. [...] Zig for example does it right by having explicit allocators from the start

          Rust picked the right default for applications that run in an OS whereas Zig picked the right default for embedded. Both are good for their respective domains, neither is good at both domains. Zig's choice is verbose and useless on a typical desktop OS, especially with overcommit, whereas Rust's choice is problematic for embedded where things just work differently.

          • Arnavion 15 hours ago

            Various kinds of "desktop" applications like databases and video games use custom non-global allocators - per-thread, per-arena, etc - because they have specific memory allocation and usage patterns that a generic allocator does not handle as well as targeted ones can.

            My current $dayjob involves a "server" application that needs to run in a strict memory limit. We had to write our own allocator and collections because the default ones' insistence on using GlobalAlloc infallibly doesn't work for us.

            Thinking that only "embedded" cares about custom allocators is just naive.

            • kllrnohj 6 hours ago

              > Thinking that only "embedded" cares about custom allocators is just naive.

              I said absolutely no such thing? In my $dayjob working on graphics I, too, have used custom allocators for various things, primarily in C++ though, not Rust. But that in no way makes the default of a global allocator wrong, and often those custom allocators have specialized constraints that you can exploit with custom containers, too, so it's not like you'd be reaching for the stdlib versions probably anyway.

            • simonask 10 hours ago

              I don't see why you would have to write your own - there are plenty of options in the crate ecosystem, but perhaps you found them insufficient?

              As a video game developer, I've found the case for custom general-purpose allocators pretty weak in practice. It's exceedingly rare that you really want complicated nonlinear data structures, such as hash maps, to use a bump-allocator. One rehash and your fixed size arena blows up completely.

              95% of use cases are covered by reusing flat data structures (`Vec`, `BinaryHeap`, etc.) between frames.

              • monkeyelite 9 hours ago

                > there are plenty of options in the crate ecosystem

                Who writes the crates?

              • Arnavion 10 hours ago

                The allocator we wrote for $dayjob is essentially a buffer pool with a configurable number of "tiers" of buffers. "Static tiers" have N pre-allocated buffers of S bytes each, where N and S are provided by configuration for each tier. The "dynamic" tier malloc's on demand and can provide up to S bytes; it tracks how many bytes it has currently allocated.

                Requests are matched against the smallest tier that can satisfy them (static tiers before dynamic). If no tier can satisfy it (static tiers are too small or empty, dynamic tier's "remaining" count is too low), then that's an allocation failure and handled by the caller accordingly. Eg if the request was for the initial buffer for accepting a client connection, the client is disconnected.

                When a buffer is returned to the allocator it's matched up to the tier it came from - if it came from a static tier it's placed back in that tier's list, if it came from the dynamic tier it's free()d and the tier's used counter is decremented.

                Buffers have a simple API similar to the bytes crate - "owned buffers" allow &mut access, "shared buffers" provide only & access and cloning them just increments a refcount, owned buffers can be split into smaller owned buffers or frozen into shared buffers, etc.

                The allocator also has an API to query its usage as an aggregate percentage, which can be used to do things like proactively perform backpressure on new connections (reject them and let them retry later or connect to a different server) when the pool is above a threshold while continuing to service existing connections without a threshold.

                The allocator can also be configured to allocate using `mmap(tempfile)` instead of malloc, because some parts of the server store small, infrequently-used data, so they can take the hit of storing their data "on disk", ie paged out of RAM, to leave RAM available for everything else. (We can't rely on the presence of a swapfile so there's no guarantee that regular memory will be able to be paged out.)

                As for crates.io, there is no option. We need local allocators because different parts of the server use different instances of the above allocator with different tier configs. Stable Rust only supports replacing GlobalAlloc; everything to do with local allocators is unstable, and we don't intend to switch to nightly just for this. Also FWIW our allocator has both a sync and async API for allocation (some of the allocator instances are expected to run at capacity most of the time, so async allocation with a timeout provides some slack and backpressure as opposed to rejecting requests synchronously and causing churn), so it won't completely line up with std::alloc::Allocator even if/when that does get stabilized. (But the async allocation is used in a localized part of the server so we might consider having both an Allocator impl and the async direct API.)

                And so because we need local allocators, we had to write our own replacements of Vec, Queue, Box, Arc, etc because the API for using custom A with them is also unstable.
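                As a rough illustration only (the original is closed source), the tier-matching logic described above might be sketched like this, with all names and numbers invented:

                ```rust
                // Illustrative sketch of the tiered buffer pool: fixed "static"
                // tiers plus a byte-budgeted "dynamic" tier, with allocation
                // failure surfaced to the caller instead of panicking.
                struct StaticTier { size: usize, free: Vec<Vec<u8>> }

                struct Pool {
                    static_tiers: Vec<StaticTier>, // sorted by buffer size, ascending
                    dynamic_cap: usize,
                    dynamic_used: usize,
                }

                impl Pool {
                    fn alloc(&mut self, want: usize) -> Option<Vec<u8>> {
                        // smallest static tier that fits and has a free buffer
                        for tier in &mut self.static_tiers {
                            if tier.size >= want {
                                if let Some(buf) = tier.free.pop() {
                                    return Some(buf);
                                }
                            }
                        }
                        // fall back to the dynamic tier's byte budget
                        if self.dynamic_used + want <= self.dynamic_cap {
                            self.dynamic_used += want;
                            return Some(vec![0; want]);
                        }
                        None // allocation failure: caller decides (e.g. drop the connection)
                    }
                }

                fn main() {
                    let mut pool = Pool {
                        static_tiers: vec![StaticTier { size: 64, free: vec![vec![0; 64]] }],
                        dynamic_cap: 128,
                        dynamic_used: 0,
                    };
                    assert_eq!(pool.alloc(16).unwrap().len(), 64);   // served by the static tier
                    assert_eq!(pool.alloc(100).unwrap().len(), 100); // dynamic tier
                    assert!(pool.alloc(100).is_none());              // budget exhausted
                }
                ```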

            • michalsustr 11 hours ago

              Did you publish these by any chance?

              • Arnavion 10 hours ago

                Sorry, the code is closed source.

        • johnisgood 19 hours ago

          > Zig for example does it right by having explicit allocators from the start

          Odin has them, too, optionally (and usually).

        • smj-edison a day ago

          > Rust unfortunately picked the wrong default here

          I partially disagree with this. Using Zig style allocators doesn't really fit with Rust ergonomics, as it would require pretty extensive lifetime annotations. With no_std, you absolutely can roll your own allocation styles, at the price of more manual lifetime annotations.

          I do hope though that some library comes along that allows for Zig style collections, with the associated lifetimes... (It's been a bit painful rolling my own local allocator for audio processing).

          • Arnavion 21 hours ago

            Explicit allocators do work with Rust, as evidenced by them already working for libstd's types, as I said. The mistake was to not have them from day one which has caused most code to assume GlobalAlloc.

            As long as the type is generic on the allocator, the lifetimes of the allocator don't appear in the type. So eg if your allocator is using a stack array in main then your allocator happens to be backed by `&'a [MaybeUninit<u8>]`, but things like Vec<T, A> instantiated with A = YourAllocator<'a> don't need to be concerned with 'a themselves.

            Eg: https://play.rust-lang.org/?version=nightly&mode=debug&editi... do_something_with doesn't need to have any lifetimes from the allocator.

            If by Zig-style allocators you specifically mean type-erased allocators, as a way to not have to parameterize everything on A:Allocator, then yes the equivalent in Rust would be a &'a dyn Allocator that has an infectious 'a lifetime parameter instead. Given the choice between an infectious type parameter and infectious lifetime parameter I'd take the former.
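            To illustrate why the lifetime stays hidden, here is a toy sketch using a made-up Alloc trait in place of the unstable std::alloc::Allocator:

            ```rust
            // When a function is generic over its allocator type A, the
            // allocator's internal lifetime stays out of the function's
            // signature. (Alloc and StackBacked are invented for this sketch.)
            trait Alloc { fn grab(&self, n: usize) -> Vec<u8>; }

            struct StackBacked<'a> { _buf: &'a [u8] }
            impl<'a> Alloc for StackBacked<'a> {
                fn grab(&self, n: usize) -> Vec<u8> { vec![0; n] }
            }

            // No lifetime parameter needed here: 'a is hidden inside A.
            fn do_something_with<A: Alloc>(a: &A) -> usize {
                a.grab(16).len()
            }

            fn main() {
                let buf = [0u8; 32];
                let alloc = StackBacked { _buf: &buf };
                assert_eq!(do_something_with(&alloc), 16);
            }
            ```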

            • smj-edison 20 hours ago

              Ah, my bad, I guess I've been misunderstanding how the Allocator proposal works all along (I thought it was only for 'static allocators, this actually makes a lot more sense!).

              I guess all that to say, I agree then that this should've been in std from day one.

              • steveklabnik 18 hours ago

                The problem is, everything should have been there since day 1. It’s still unclear which API Rust should end up with, even today, which is why it isn’t stable yet.

                • JBits 17 hours ago

                  Looking forward to the API when it's stabilised. Have there been any updates on the progress of allocators of this general area of Rust over the past year?

                  • steveklabnik 17 hours ago

                    I haven’t paid that close of attention, but there have been two major APIs that people seem to be deciding between. We’ll see.

        • imtringued 9 hours ago

          >Return an AllocationError. Rust unfortunately picked the wrong default here for the sake of convenience, along with the default of assuming a global allocator. It's now trying to add in explicit allocators and allocation failure handling

          Going from panic to panic free in Rust is as simple as choosing 'function' vs 'try_function'. The actual mistakes in Rust were the ones where the non-try version should have produced a panic by default. Adding Box::try_new next to Box::new is easy.

          There are only two major applications of panic free code in Rust: critical sections inside mutexes and unsafe code (because panic safety is harder to write than panic free code). In almost every other case it is far more fruitful to use fuzzing and model checking to explicitly look for panics.

          • estebank 3 hours ago

            In order to have true ergonomic no_panic code in Rust you'd need to be able to have parametricity on the panic behavior: have a single Box::new that can be context determined to be panicky or Result based. It has to be context determined and not explicitly code determined so that the top most request for the no_panic version to be propagated all the way down to stdlib through the entire stack. If you squint just a bit, you can see this is the same as maybe async, and maybe const, and maybe allocate, and maybe wrapping/overflowing math, etc. So there's an option to just add try_ methods on the entire stdlib, which all the code between your API and the underlying API need to use/expose, or push for a generic language level mechanism for this. Which then complicates the language, compiler and library code further. Or do both.

      • PhilipRoman a day ago

        >what are they supposed to do when there is no memory left

        Well on Linux they are apparently supposed to return memory anyway and at some point in the future possibly SEGV your process when you happen to dereference some unrelated pointer.

        • tialaramex 17 hours ago

          You can tell Linux that you don't want overcommit. You will probably discover that you're now even more miserable and change it back, but it's an option.

          • Shorel 5 hours ago

            I did that and even with enormous amounts of free memory, Chrome and other Chromium browsers just die.

            They require overcommit just to open an empty window.

          • wizzwizz4 11 hours ago

            Whenever I switch off overcommitting, every program on my system (that I'm using) dies, one by one, over the course of 2–5 seconds, followed by Xorg. It's quite pretty.

      • nicce a day ago

        Additions are easy. In release builds they wrap by default (debug builds panic on overflow), and you can make the behavior explicit with the checked_ methods.

        Assuming that you are not using much recursion, you can eliminate most of the heap-related memory panics by adding limited reservation checks for dynamic data that is allocated based on user input/external data. You should also use statically sized types whenever possible. They are also faster.

        • codedokode a day ago

          Wrapping on overflow is wrong because it is not the math we expect. As a result, errors and vulnerabilities occur (look at the Linux kernel for examples).

          • nicce a day ago

            It depends on the context. Of course the result may cause vulnerabilities if the program logic in bad context depends on it. But yeah, generally I would agree.

      • kllrnohj 16 hours ago

        > Also many functions use addition and what is one supposed to do in case of overflow?

        Honestly this is where you'd throw an exception. It's a shame Rust refuses to have them, they are absolutely perfect for things like this...

        • brooke2k 14 hours ago

          I'm confused by this, because a panic is essentially an exception. They can be thrown and caught (although it's extremely discouraged to do so).

          The only place where it would be different is if you explicitly set panics to abort instead of unwind, but that's not default behavior.

      • pdimitar a day ago

        Don't know about your parent poster but I didn't take it 100% literally. Obviously if there's no memory left then you crash; the kernel would likely murder your program half a second later anyway.

        But for arithmetics Rust has non-aborting bound checking API, if my memory serves.

        And that's what I'm trying hard to do in my Rust code f.ex. don't frivolously use `unwrap` or `expect`, ever. And just generally try hard to never use an API that can crash. You can write a few error branches that might never get triggered. It's not the end of the world.

        • tialaramex 17 hours ago

          Rust provides a default integer of each common size and signedness, for which overflow is prohibited [but this prohibition may not be enforced in release compiled binaries depending on your chosen settings for the compiler, in this case what happens is not promised but today it will wrap - it's wrong to write code which does this on purpose - see the wrapping types below if you want that - but it won't cause UB if you do it anyway]

          Rust also provides Wrapping and Saturating wrapper types for these integers, which wrap (255 + 1 == 0) or saturate (255 + 1 == 255). Depending on your CPU either or both of these might just be "how the computer works anyway" and will accordingly be very fast. Neither of them is how humans normally think about arithmetic.

          Furthermore, Rust also provides operations which do all of the above, as well as the more fundamental "with carry" type operations where you get two results from the operation and must write your algorithms accordingly, and explicitly fallible operations where if you would overflow your operation reports that it did not succeed.
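          Concretely (assuming a reasonably recent toolchain, since std::num::Saturating is newer than Wrapping):

          ```rust
          use std::num::{Saturating, Wrapping};

          fn main() {
              // The Wrapping and Saturating newtypes change what `+` means:
              assert_eq!((Wrapping(255u8) + Wrapping(1)).0, 0);       // wraps around to 0
              assert_eq!((Saturating(255u8) + Saturating(1)).0, 255); // clamps at the max
              // The explicit per-operation forms on plain integers:
              assert_eq!(255u8.checked_add(1), None);          // fallible: reports failure
              assert_eq!(255u8.overflowing_add(1), (0, true)); // wrapped result + overflow flag
              assert_eq!(255u8.saturating_add(1), 255);
          }
          ```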

        • wahern 20 hours ago

          Dealing with integer overflow is much more burdensome than dealing with allocation failure, IME. Relatively speaking, allocation failure is closer to file descriptor limits in terms of how it affects code structure. But then I mostly use C when I'm not using a scripting language. In languages like Rust and C++ there's a lot of hidden allocation in the high-level libraries that seem to be popular, perhaps because the notion that "there's nothing you can do" has infected too many minds.

          Of course, just like with opening files or integer arithmetic, if you don't pay any attention to handling the errors up front when writing your code, it can be an onerous if not impossible to task to refactor things after the fact.

          • uecker 12 hours ago

            In C I simply use -fsanitize=signed-integer-overflow if I expect no overflow and checked arithmetic when I need to handle overflow. I do not think this is worse than in any other languages and seems less annoying than Rust. If I am lazy, I let allocation failure trap on null pointer dereference which is also safe, out-of-bounds accesses are avoided by -fsanitize=bounds (I avoid pointer arithmetic and unsafe casts where I can and essentially treat it like Rust's "unsafe").

          • pdimitar 20 hours ago

            Oh I agree, don't get me wrong. Both are pretty gnarly.

            I was approaching these problems strictly from the point of view of what can Rust do today really, nothing else. To me having checked and non-panicking API for integer overflows / underflows at least gives you some agency.

            If you don't have memory, well, usually you are cooked. Though one area where Rust can become even better there is to give us some API to reserve more memory upfront, maybe? Or I don't know, maybe adopt some of the memory-arena crates in stdlib.

            But yeah, agreed. Not the types of problems I want to have anymore (because I did have them in the past).

    • arijun a day ago

      `panic` isn’t really an error that you have to (or can) handle, it’s for unrecoverable errors. Sort of like C++ assertions.

      Also there is the no_panic crate, which uses macros to require the compiler to prove that a given function cannot panic.

      • josephg 19 hours ago

        You can handle panics. It’s for unrecoverable errors, but internally it does stack unwinding by default like exceptions in C++.

        You see this whenever you use cargo test. If a single test panics, it doesn’t abort the whole program. The panic is “caught”. It still runs all the other tests and reports the failure.

        • swiftcoder 5 hours ago

          > but internally it does stack unwinding by default

          Although as a library vendor, you kind of have to assume your library could be compiled into an app configured with panic=abort, in which case it will not do that

      • marcosdumay a day ago

        Well, kinda. It's more similar to RuntimeException in Java, in that there are times where you do actually want to catch and recover from them.

        But on those places, you better know exactly what you are doing.

      • nicce a day ago

        I would say that Segmentation Fault is better comparison with C++ :-D

    • alexeldeib a day ago

      that's kind of a thing with https://docs.rs/no-panic/latest/no_panic/ or no std and custom panic handlers.

      not sure what the latest is in the space, if I recall there are some subtleties

      • zozbot234 a day ago

        That's a neat hack, but it would be a lot nicer to have explicit support as part of the language.

        • kbolino a day ago

          That's going to be difficult because the language itself requires panic support to properly implement indexing, slicing, and integer division. There are checked methods that can be used instead, but to truly eliminate panics, the ordinary operators would have to be banned when used with non-const arguments, and this restriction would have to propagate to all dependencies as well.

          • josephg 19 hours ago

            Yes that’s right. The feature really wants compiler support for that reason. The simplest version wouldn’t be too hard to implement. Every function just exports a flag on whether or not it (or any callees) can panic. Then we have a nopanic keyword which emits a compiler error if the function (or any callee) panics.

            It would be annoying to use - as you say, you couldn’t even add regular numbers together or index into an array in nopanic code. But there are ways to work around it (like the wrapping types).

            One problem is that implicit nopanic would add a new way to break semver compatibility in APIs. E.g., imagine a public API that just happens to not be able to panic. If the code is changed subtly, it could easily start panicking again. That could break callers, so it has to be a major version bump. You'd probably have to require explicit nopanic at API boundaries (else assume all public functions from other crates can panic). And because of that, public APIs like std would need to be plastered with nopanic markers everywhere. It's also not clear how that works through trait impls.

          • j-krieger 7 hours ago

            Yeah, this is how it works with no_std.

            • kbolino 3 hours ago

              No? https://godbolt.org/z/jEc36vP3P

              As far as I can tell, no_std doesn't change anything with regard to either the usability of panicking operators like integer division, slice indexing, etc. (they're still usable) nor on whether they panic on invalid input (they still do).

        • nicce a day ago

          The problem is with false positives. Even if you can clearly see that some function will never panic (but it uses some feature which may panic), the compiler might not always see that. If the compiler says there are no panics, then there are no panics. But is that guarantee worth adding to the language if, to get it, you mostly have to avoid features that might panic?

    • johnisgood 19 hours ago

      I do not want a library to panic though, I want to handle the error myself.

  • dvt a day ago

    Maybe contrarian, but imo the `Result` type, while kind of nice, still suffers from plenty of annoyances, including sometimes not working with the (manpages-approved) `dyn Error`, sometimes having to `into()` weird library errors that don't propagate properly, or worse: `map_err()` them; I mean, at this point, the `anyhow` crate is basically mandatory from an ergonomics standpoint in every Rust project I start. Also, `?` doesn't work in closures, etc.

    So, while this is an improvement over C++ (and that is not saying much at all), it's still implemented in a pretty clumsy way.

    • singingboyo a day ago

      There's some space for improvement, but really... not a lot? Result is a pretty basic type, sure, but needing to choose a dependency to get a nicer abstraction is not generally considered a problem for Rust. The stdlib is not really batteries included.

      Doing error handling properly is hard, but it's a lot harder when error types lose information (integer/bool returns) or you can't really tell what errors you might get (exceptions, except for checked exceptions which have their own issues).

      Sometimes error handling comes down to "tell the user", where all that info is not ideal. It's too verbose, and that's when you need anyhow.

      In other cases where you need details, anyhow is terrible. Instead you want something like thiserror, or just roll your own error type. Then you keep a lot more information, which might allow for better handling. (HttpError or IoError - try a different server? ParseError - maybe a different parse format? etc.)

      So I'm not sure it's that Result is clumsy, so much that there are a lot of ways to handle errors. So you have to pick a library to match your use case. That seems acceptable to me?

      FWIW, errors not propagating via `?` is entirely a problem with the error type being propagated to. And `?` in closures does work, occasionally with some type annotation required.

      • josephg 19 hours ago

        I agree with you, but it’s definitely inconvenient. Result also doesn’t capture a stack trace. I spent a long time tracking down bugs in some custom binary parsing code a while ago because I had no idea which stack trace my Result::Err’s were coming from. I could have switched to another library - but I didn’t want to inflict extra dependencies on people using my crate.

        As you say, it’s not “batteries included”. I think that’s a fine answer given rust is a systems language. But in application code I want batteries to be included. I don’t want to need to opt in to the right 3rd party library.

        I think rust could learn a thing or two from Swift here. Swift’s equivalent is better thought through. Result is more part of the language, and less just bolted on:

        https://docs.swift.org/swift-book/documentation/the-swift-pr...

    • ackfoobar a day ago

      > the `anyhow` crate is basically mandatory from an ergonomics standpoint in every Rust project I start

      If you use `anyhow`, then all you know is that the function may `Err`, but you do not know how - this is no better than calling a function that may `throw` any kind of `Throwable`. Not saying it's bad, it is just not that much different from the error handling in Kotlin or C#.

      • dwattttt 14 hours ago

        I find myself working through a hierarchy of error handling maturity as a project matures.

        Initial proof of concepts just get panics (usually with a message).

        Then functions start to be fallible: adding anyhow, still considering all errors to be fatal, but at least nicely reporting backtraces (or other things! context doesn't have to just be a message).

        Then if a project is around long enough, swap anyhow to thiserror to express what failure modes a function has.

      • Yoric 21 hours ago

        Yeah, `anyhow` is basically Go error handling.

        Better than C, sufficient in most cases if you're writing an app, to be avoided if you're writing a lib. There are alternatives such as `snafu` or `thiserror` that are better if you need to actually catch the error.

      • jbritton a day ago

        I know a ‘C’ code base that treats all socket errors the same and just retries for a limited time. However there are errors that make no sense to retry, like invalid socket or socket not connected. It is necessary to know what socket error occurred. I like how the Posix API defines an errno and documents the values. Of course this depends on accurate documentation.

        • XorNot 20 hours ago

          This is an IDE/documentation problem in a lot of cases though. No one writes code badly intentionally, but we are time constrained - tracking down every type of error which can happen and what it means is time consuming and you're likely to get it wrong.

          Whereas going with "I probably want to retry a few times" is guessing that most of your problems are the common case, but you're not entirely sure the platform you're on will emit non-common cases with sane semantics.

      • efnx a day ago

        Yes. I prefer ‘snafu’ but there are a few, and you could always roll your own.

        • smj-edison a day ago

          +1 for snafu. It lets you blend anyhow style errors for application code with precise errors for library code. .context/.with_context is also a lovely way to propagate errors between different Result types.

          • bonzini a day ago

            How does that compare to "this error for libraries and anyhow for applications"?

            • smj-edison a day ago

              You don't have to keep converting between error types :)

        • shepmaster a day ago

          Yeah, with SNAFU I try to encourage people going all-in on very fine-grained error types. I love it (unsurprisingly).

    • maplant a day ago

      ? definitely works in closures, but it often takes a little finagling to get working, like specifying the return type of the closure or setting the return type of a collect to a Result<Vec<_>>.
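
      A sketch of both tricks (the names are made up):

      ```rust
      use std::num::ParseIntError;

      // Collecting Results into Result<Vec<_>> short-circuits on the first Err.
      fn parse_all(input: &[&str]) -> Result<Vec<i32>, ParseIntError> {
          input.iter().map(|s| s.parse::<i32>()).collect()
      }

      fn main() {
          // `?` works inside the closure once its return type is annotated.
          let double = |s: &str| -> Result<i32, ParseIntError> {
              Ok(s.parse::<i32>()? * 2)
          };

          assert_eq!(double("21"), Ok(42));
          assert_eq!(parse_all(&["1", "2", "3"]), Ok(vec![1, 2, 3]));
          assert!(parse_all(&["1", "oops"]).is_err());
      }
      ```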

    • skrtskrt 21 hours ago

      A couple of those annoyances are just library developers being too lazy to provide informative error types, which is far from a Rust-specific problem.

  • mdf a day ago

    Generally, I agree the situation with errors is much better in Rust in the ways you describe. But, there are also panics which you can catch_unwind[1], set_hook[2] for, define a #[panic_handler][3] for, etc.

    [1] https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

    [2] https://doc.rust-lang.org/std/panic/fn.set_hook.html

    [3] https://doc.rust-lang.org/nomicon/panic-handler.html

    • ekidd a day ago

      Yeah, in anything but heavily multi-threaded servers, it's usually best to immediately crash on a panic. Panics don't mean "a normal error occurred", they mean, "This program is cursed and our fundamental assumptions are wrong." So it's normal for a unit test harness to catch panics. And you may occasionally catch them and kill an entire client connection, sort of the way Erlang handles major failures. But most programs should just exit immediately.

  • kccqzy a day ago

    The Result type isn't really enough for fun and easy error handling. I usually also need to reach for libraries like anyhow https://docs.rs/anyhow/latest/anyhow/. Otherwise, you still need to think about the different error types returned by different libraries.

    Back at Google, it was truly an error handling nirvana because they had StatusOr which makes sure that the error type is just Status, a standardized company-wide type that stills allows significant custom errors that map to standardized error categories.

  • jasonjmcghee a day ago

    unfortunately it's not so simple. that's the convention. depending on the library you're using it might be a special type of Error, or special type of Result, something needs to be transformed, `?` might not work in that case (unless you transform/map it), etc.

    I like Rust, but it's not as clean in practice as you describe.

    • ryandv a day ago

      There are patterns to address it such as creating your own Result type alias with the error type parameter (E) fixed to an error type you own:

          type Result<T> = result::Result<T, MyError>;
      
          #[derive(Debug)]
          enum MyError {
              IOError(String),
              // ...
          }
      
      Your owned (i.e. not third-party) Error type is a sum type of error types that might be thrown by other libraries, with a newtype wrapper (`IOError`) on top.

      Then implement the `From` trait to map errors from third-party libraries to your own custom Error space:

          impl From<io::Error> for MyError {
              fn from(e: io::Error) -> MyError {
                  MyError::IOError(e.to_string())
              }
          }
      
      Now you can convert any result into a single type that you control by transforming the errors:

          return sender
              .write_all(msg.as_bytes())
              .map_err(|e| e.into());
      
      There is a little boilerplate and mapping between error spaces that is required but I don't find it that onerous.
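
      One detail worth noting: once the `From` impl exists, `?` applies it automatically, so the explicit `map_err` is optional. A self-contained sketch along the same lines:

      ```rust
      use std::io::{self, Read};

      #[derive(Debug)]
      enum MyError {
          IOError(String),
      }

      impl From<io::Error> for MyError {
          fn from(e: io::Error) -> MyError {
              MyError::IOError(e.to_string())
          }
      }

      // `?` calls From::from on the error value, converting
      // io::Error into MyError with no map_err in sight.
      fn slurp(mut r: impl Read) -> Result<String, MyError> {
          let mut buf = String::new();
          r.read_to_string(&mut buf)?;
          Ok(buf)
      }

      fn main() {
          assert_eq!(slurp("hello".as_bytes()).unwrap(), "hello");
      }
      ```
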
    • Cloudef 13 hours ago

      You can use anyhow, but yeah zig generally does errors better IMO

      • ziml77 5 hours ago

        Errors are where I find zig severely lacking. They can't carry context. Like if you're parsing a JSON file and it fails, you can know that it failed but not where it failed within the file. Their solution in the standard library for cases like this was to handle printing to stderr internally, but that is incredibly hacky.

    • koakuma-chan a day ago

      You can use anyhow::Result, and the ? will work for any Error.

  • fpoling a day ago

    The Result type still requires quite a few lines of boilerplate if one needs to add custom data to it. And as a replacement for exceptions with automatic stack trace attachment it is relatively poor.

    In any case I will take Rust's Result over the C++ mess any time, especially given that we effectively have two C++s, one with exception support and one without, making code incompatible between the two.

    • jandrewrogers 16 hours ago

      FWIW, stack traces are part of C++ now and you can construct custom error types that automagically attach them if desired. Result types largely already exist in recent C++ editions if you want them.

      I use completely custom error handling stacks in C++ and they are quite slick these days, thanks to improvements in the language.

      • fpoling 6 hours ago

        What I would really like to see is stack traces annotated with the values of selected local variables. A few years ago I tried that in a C++ code base where exceptions were disabled, using macros and something like an error context passed by reference. But the result was ugly and I realized that I had zero chance of getting it adopted.

        With Rust's Result and powerful macros it is easier to implement.

  • stodor89 11 hours ago

    Failure is not an option, it's a Result<T,E>

  • loeg 20 hours ago

    I work in a new-ish C++ codebase (mid-2021 origin) that uses a Result-like type everywhere (folly::Expected, but you get std::expected in C++23). We have a C pre-processor macro instead of `?` (yes, it's a little less ergonomic, but it's usable). It makes it relatively nice to work in.

    That said, I'd prefer to be working in Rust. The C++ code we call into can just raise exceptions anywhere implicitly; there are a hell of a lot of things you can accidentally do wrong without warning; class/method syntax is excessively verbose, etc.

  • hoppp 3 hours ago

    It's true, but using unwrap is a bit boring. I mean... boring is good, but it's also boring.

    • craftkiller 2 hours ago

      You shouldn't be using unwrap.

  • 0x1ceb00da a day ago

    Proper error handling is the biggest problem in a vast majority of programs, and Rust makes it straightforward by providing a framework that works really well. I hate the `?` shortcut though. It's used horribly in many Rust programs that I've seen, because the programmers just use it as a half-assed replacement for exceptions. Another gripe I have is that most library authors don't document what errors are returned in what situations, and you're left making guesses or navigating through the library code to figure this out.

  • fooker 9 hours ago

    One of the strengths of C++ is the ability to build features like this as a library, and not hardcode it into the language design.

    Unless you specifically want the ‘?’ operator, you can get pretty close to this with some clever use of templates and operator overloading.

    If universal function call syntax becomes standardized, this will look even more functional and elegant.

    • steveklabnik 2 hours ago

      Rust also started with it as a library, as try!, before ?. There were reasons why it was worth making it syntax, after years of experience with it as a macro.

  • flohofwoe 12 hours ago

    IMHO the ugly thing about Result and Option (and a couple of other Rust features) is that they are stdlib types, basic functionality like this should be language syntax (this is also my main critique of 'modern C++').

    And those 'special' stdlib types wouldn't be half as useful without supporting language syntax, so why not go the full way and just implement everything in the language?

    • choeger 12 hours ago

      Uh, nope. Your language needs to be able to define these types. So they belong in the stdlib because they are useful, not because they are special.

      You might add syntactic sugar on top, but you don't want these kinds of things in your fundamental language definition.

  • chickenzzzzu 10 hours ago

    why not just read the function you are calling to determine the way it expects you to handle errors?

    after all, if a library exposes too many functions to you, it isn't a good library.

    what good is it for me to have a result type if i have to call 27 functions with 27 different result types just to rotate a cube?

  • dabinat a day ago

    I wish Option and Result weren’t exclusive. Sometimes a method can return an error, no result or a valid result. Some crates return an error for “no result”, which feels wrong to me. My solution is to wrap Result<Option>, but it still feels clunky.

    I could of course create my own type for this, but then it won’t work with the ? operator.
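
    For what it's worth, `?` does still work with Result<Option<T>>: it peels off only the Result layer and leaves the Option for you to handle. A sketch with made-up names:

    ```rust
    #[derive(Debug, PartialEq)]
    struct User { id: u32 }

    #[derive(Debug, PartialEq)]
    enum DbError { ConnectionLost }

    // Error, no result, or a valid result, in one signature.
    fn find_user(id: u32) -> Result<Option<User>, DbError> {
        match id {
            0 => Err(DbError::ConnectionLost), // simulated failure
            1 => Ok(Some(User { id: 1 })),
            _ => Ok(None), // "not found" is not an error here
        }
    }

    fn greet(id: u32) -> Result<String, DbError> {
        // `?` propagates DbError; the Option stays local.
        Ok(match find_user(id)? {
            Some(u) => format!("hello, user {}", u.id),
            None => "no such user".to_string(),
        })
    }

    fn main() {
        assert_eq!(greet(1), Ok("hello, user 1".to_string()));
        assert_eq!(greet(2), Ok("no such user".to_string()));
        assert_eq!(greet(0), Err(DbError::ConnectionLost));
    }
    ```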

    • estebank 3 hours ago

      For things like this I find that ? still works well enough, but I tend to write code like

          match x(y) {
              Ok(None) => "not found".into(),
              Ok(Some(x)) => x,
              Err(e) => handle_error(e),
          }
      
      Because of pattern matching, I often also have one arm for specific errors to handle them specifically in the same way as the ok branches above.
    • atoav a day ago

      I think Result<Option> is the way to go. It describes precisely that: was it Ok? if yes, was there a value?

      I could imagine situations where an empty return value would constitute an Error, but in 99% of cases returning None would be better.

      Result<Option> may feel clunky, but if I can give one recommendation when it comes to Rust, it is that you should not value your own code-aesthetical feelings too much, as that will lead to a lot of pain in many cases — work with the grain of the language, not against it, even if the result does not satisfy you. In this case I'd highly recommend just using Result<Option> and not worrying about it.

      Being able to compose/nest those base types and unwrap or match them in different sections of your code is a strength, not a weakness.

    • vjerancrnjak a day ago

      This sounds valid. Lookup in a db can be something or nothing or error.

      Just need a function that allows lifting option to result.
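
      That function is Option::ok_or (or ok_or_else for a lazily built error); a sketch with made-up names:

      ```rust
      #[derive(Debug, PartialEq)]
      enum LookupError { NotFound }

      fn lookup(key: &str) -> Option<i32> {
          if key == "answer" { Some(42) } else { None }
      }

      fn must_lookup(key: &str) -> Result<i32, LookupError> {
          // ok_or lifts Option into Result, after which `?` applies.
          lookup(key).ok_or(LookupError::NotFound)
      }

      fn main() {
          assert_eq!(must_lookup("answer"), Ok(42));
          assert_eq!(must_lookup("missing"), Err(LookupError::NotFound));
      }
      ```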

  • ryandrake 21 hours ago

    Error handling and propagation is one of those things I found the most irritating and struggled[1] with the most as I learned Rust, and to be honest, I'm still not sure I understand or like Rust's way. Decades of C++ and Python has strongly biased me towards the try/except pattern.

    1: https://news.ycombinator.com/item?id=41543183

    • zaphar 18 hours ago

      Counterpoint: Decades of C++/Python/Java/... has strongly biased me against the try/except pattern.

      It's obviously subjective in many ways. However, what I dislike the most is that try/except hides the error path from me when I'm reading code. Decades of trying to figure out why that stacktrace is happening in production suddenly has given me a strong dislike for that path being hidden from me when I'm writing my code.

      • sham1 5 hours ago

        There should be a way to have the function/method document what sort of stuff can go wrong, and what kinds of exceptions you can get out of it.

        It could be some kind of an exception check thing, where you would either have to make sure that you handle the error locally somehow, or propagate it upwards. Sadly programming is not ready for such ideas yet.

        ---

        I jest, but this is exactly what checked exceptions are for. And the irony of stuff like Rust's use of `Result<T, E>` and similarly ML-ey stuff is that in practice they end up with what are essentially just checked exceptions, except with the error type information being somewhere else.

        Of course, people might argue that checked exceptions suck because they've seen the way Java has handled them, but like... that's Java. And I'm sorry, but Java isn't the definition of how checked exceptions can work. But because of Java having "tainted" the idea, it's not explored any further, because we instead just assume that it's bad by construction and then end up doing the same thing anyway, only slightly different.

        • Analemma_ 3 hours ago

          > There should be a way to have the function/method document what sort of stuff can go wrong, and what kinds of exceptions you can get out of it.

          The key phrase you're looking for is "algebraic effect systems". Right now they're a pretty esoteric thing only really seen in PL research, but at one point so was most of the stuff we now take for granted in Rust. Maybe someday they'll make their way to mainstream languages in an ergonomic way.

    • skrtskrt 21 hours ago

      there are answers in the thread you linked that show how easy and clean the error handling can be.

      it can look just like a more-efficient `except` clauses with all the safety, clarity, and convenience that enums provide.

      Here's an example:

      * Implementing an error type with enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
      * Which derives from a more general error type with even more helpful enums: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...
      * Then some straightforward handling of the error: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/...

  • 90s_dev a day ago

    I like so much about Rust.

    But I hear compiling is too slow.

    Is it a serious problem in practice?

    • Seattle3503 a day ago

      Absolutely, the compile times are the biggest drawback IMO. Everywhere I've been that built large systems in Rust eventually ends up spending a good amount of dev time trying to get CI/CD pipeline times to something sane.

      Besides developer productivity it can be an issue when you need a critical fix to go out quickly and your pipelines take 60+ minutes.

      • nicoburns 20 hours ago

        If you have the money to throw at it, you can get a long way optimising CI pipelines just by throwing faster hardware at it. The sort of server you could rent for ~$150/month might easily be ~5x faster than your typical Github Actions hosted runner.

        • hobofan 7 hours ago

          Besides faster hardware, one of the main features (and drawbacks) you get with self-hosted runners is the option to break through build isolation, and have performant caches between builds.

          With many other build systems I'd be hesitant to do that, but since Cargo is very good about what to rebuild for incremental builds, keeping the cache around is a huge speed boost.

        • Seattle3503 19 hours ago

          Yes, this is often the best "low-hanging fruit" option, but it can get expensive. It depends how you value your developer time.

      • lilyball a day ago

        Don't use a single monolithic crate. Break your project up into multiple crates. Not only does this help with compile time (the individual crate compiles can be parallelized), it also tends to help with API design as well.

        • Seattle3503 a day ago

          Every project I've worked on used a workspace with many crates. Generally that only gets you so far on large projects.

        • mixmastamyk a day ago

          It compiles different files separately, right?

          With some exceptions for core data structures, it seems that if you only modified a few files in a large project the total compilation time would be quick no matter how slow the compiler was.

          • conradludgate a day ago

            Sorta. The "compilation unit" is a single crate, but rustc is now also parallel, and LLVM can also be configured to run in parallel IIRC.

            Rust compile times have been improving over time as the compiler gets incrementally rewritten and optimised.

      • sethammons 9 hours ago

        We have 60-minute deploy pipelines and are in Python. Just mentioning that since, in theory, we are not penalized for long compile times.

        The ability to quickly test and get feedback is manna from the gods in software development. Organizations should keep it right below customer satisfaction and growth as a driving metric.

    • juliangmp a day ago

      I can't speak for a bigger rust project, but my experience with C++ (mostly with cmake) is so awful that I don't think it can get any worse.

      Like with any bigger C++ project there's like 3 build tools, two different packaging systems and likely one or even multiple code generators.

      • thawawaycold 6 hours ago

        that does not answer OP's question at all.

    • conradludgate a day ago

      It is slow, and yes it is a problem, but given that typical Rust code generally needs fewer full compiles to get working tests (with more time spent active in the editor, with an incremental compiler like Rust Analyzer) it usually balances out.

      Cargo also has good caching out of the box. While Cargo is not the best build system, it's an easy-to-use good one, so you generally get good compile times for development when you edit just one file. This is also made heavy use of by Docker workflows like cargo-chef.

    • throwaway76455 20 hours ago

      Compile times are the reason why I'm sticking with C++, especially with the recent progress on modules. I want people with weaker computers to be able to build and contribute to the software I write, and Rust is not the language for that.

    • mynameisash a day ago

      It depends on where you're coming from. For me, Rust has replaced a lot of Python code and a lot of C# code, so yes, the Rust compilation is slow by comparison. However, it really hasn't adversely affected (AFAICT) my/our iteration speed on projects, and there are aspects of Rust that have significantly sped things up (eg, compilation failures help detect bugs before they make it into code that we're testing/running).

      Is it a serious problem? I'd say 'no', but YMMV.

    • kelnos a day ago

      Compilation is indeed slow, and I do find it frustrating sometimes, but all the other benefits Rust brings more than make up for it in my book.

    • zozbot234 a day ago

      People who say "Rust compiling is so slow" have never experienced what building large projects was like in the mid-1990s or so. It's totally fine. Besides, there's also https://xkcd.com/303/

      • creata a day ago

        Or maybe they have experienced what it was like and they don't want to go back.

      • kelnos a day ago

        Not really relevant. The benchmark is how other language toolchains perform today, not what they failed to do 30 years ago. I don't think we'd find it acceptable to go back to mid-'90s build times in other languages, so why should we be ok with it with something like Rust?

    • cmrdporcupine a day ago

      I worked in the chromium C++ source tree for years and compiling there was orders of magnitude slower than any Rust source tree I've worked in so far.

      Granted, there aren't any Rust projects that large yet, but I feel like compilation speeds are something that can be worked around with tooling (distributed build farms, etc.). C++'s lack of safety and a proclivity for "use after free" errors is harder to fix.

      • gpderetta 21 hours ago

        Are there rust projects that are within orders of magnitude of Chromium?

  • tubs a day ago

    And panics?

    • epage 7 hours ago

      Those are generally used as asserts, not control flow / error handling.

  • scotty79 8 hours ago

    > Then you top it on with `?` shortcut

    I really wish java used `?` as a shorthand to declare and propagate checked exceptions of called function.

  • tomp 6 hours ago

    Did you ever actually program in Rust?

    In my experience, a lot of the code is dedicated to "correctly transforming between different Result / Error types".

    Much more verbose than exceptions, despite most of the time pretending they're just exceptions (i.e. the `?` operator).

    Why not just implement exceptions instead?

    (TBH I fully expect this comment to be downvoted, then Rust to implement exceptions in 10 years... Something similar happened when I suggested generics in Go.)

  • bena a day ago

    Ok, I'm at like 0 knowledge on the Rust side, so bear that in mind. Also, to note that I'm genuinely curious about this answer.

    Why can't I return an integer on error? What's preventing me from writing Rust like C++?

    • tczMUFlmoNk a day ago

      You can write a Rust function that returns `i32` where a negative value indicates an error case. Nothing in Rust prevents you from doing that. But Rust does have facilities that may offer a nicer way of solving your underlying problem.

      For instance, a common example of the "integer on error" pattern in other languages is `array.index_of(element)`, returning a non-negative index if found or a negative value if not found. In Rust, the return type of `Iterator::position` is instead `Option<usize>`. You can't accidentally forget to check whether it's present. You could still write your own `index_of(&self, element: &T) -> isize /* negative if not found */` if that's your preference.

      https://doc.rust-lang.org/std/iter/trait.Iterator.html#metho...
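
      Side by side, the two conventions might look like this (index_of is a made-up name):

      ```rust
      // C-style: a sentinel value the caller must remember to check.
      fn index_of(haystack: &[i32], needle: i32) -> isize {
          match haystack.iter().position(|&x| x == needle) {
              Some(i) => i as isize,
              None => -1,
          }
      }

      fn main() {
          let xs = [10, 20, 30];

          // Nothing stops a caller from using -1 as an index.
          assert_eq!(index_of(&xs, 20), 1);
          assert_eq!(index_of(&xs, 99), -1);

          // Option-based: "not found" is a distinct case the
          // type system forces you to acknowledge.
          assert_eq!(xs.iter().position(|&x| x == 20), Some(1));
          assert_eq!(xs.iter().position(|&x| x == 99), None);
      }
      ```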

    • bonzini a day ago

      Nothing prevents you, you just get uglier code and more possibility of confusion.

quietbritishjim 9 hours ago

I always enjoy reading articles like this. But the truth is, having written several 100s of KLOC in C++ (i.e., not an enormous amount but certainly my fair share), I just almost never have problems with this sort of accidental conversion in practice. Perhaps it might trip me up occasionally, but it will be noticed by literally just running the code once. Yes, that is an extra hurdle to trip over and resolve, but that is trivial compared to the alternative of creating and using wrapper types - regardless of whether I'm using Rust or C++. And the cost of the visual noise of wrapper types, already higher at the writing stage, continues to be a cost every time you read the code. It's just not worth it for the very minor benefit it brings.

(Named parameters would definitely be great, though. I use little structs of parameters where I think that's useful, and set their members one line at a time.)

I know that this is an extremist view, but: I feel the same way about Rust's borrow checker. I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers. Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs. Why would I want to totally reshape the way that I code in order to fix one of the least significant problems I encounter? I actually wish there were a variant of Rust with all its nice clean improvements over C++ except for lifetime annotations and the borrow checker.

Perhaps this is a symptom of the code I tend to write: code that has a lot of tricky mathematical algorithms in it, rather than just "plumbing" data between different sources. But actually I doubt it, so I'm surprised this isn't a more common view.

  • lmm 9 hours ago

    > I just almost never have problems with this sort accidental conversion in practice.

    95% of C++ programmers claim this, but C++ programs continue to be full of bugs, and they're usually exactly this kind of dumb bug.

    > will be noticed by literally just running the code once.

    Maybe. If what you're doing is "tricky mathematical algorithms", how would you even know if you were making these mistakes and not noticing them?

    > the cost of visual noise of wrapper types, already higher just at the writing stage, then continues to be a cost every time you read the code. It's just not worth it for the very minor benefit it brings.

    I find wrapper types are not a cost but a benefit for readability. They make it so much easier to see what's going on. Often you can read a function's prototype and immediately know what it does.

    > Certainly, lifetime bugs are massively overshadowed by logic and algorithmic bugs.

    Everyone claims this, but the best available research shows exactly the opposite, at least when it comes to security bugs (which in most domains - perhaps not yours - are vastly more costly): the most common bugs are still the really dumb ones, null pointer dereferences, array out of bounds, and double frees.

    • davemp 8 hours ago

      My current project is a huge C++ physics sim written over 15+ years. The most common and difficult to diagnose bug I’ve found is unit conversion mistakes. We likely wouldn’t even find them if we didn’t have concrete data to compare against.

      • j16sdiz 5 hours ago

        There are a few unit libraries in C++.

        Type checking at compile time is doable with templates, even better with constexpr.

        The problem is, of course, that each library has its own set of rules and they won't interop with each other.
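
        As a hedged sketch (not any particular library), one common approach is to encode dimension exponents in template parameters so that mixing units fails to compile:

            ```cpp
            #include <cassert>

            // Hypothetical sketch: dimensions encoded as template parameters
            // (exponents of length and time), so mixing units is a compile error.
            template <int Len, int Time>
            struct Quantity { double value; };

            using Meters          = Quantity<1, 0>;
            using Seconds         = Quantity<0, 1>;
            using MetersPerSecond = Quantity<1, -1>;

            // Addition only compiles for identical dimensions.
            template <int L, int T>
            constexpr Quantity<L, T> operator+(Quantity<L, T> a, Quantity<L, T> b) {
                return {a.value + b.value};
            }

            // Division subtracts exponents, so meters / seconds yields m/s.
            template <int L1, int T1, int L2, int T2>
            constexpr Quantity<L1 - L2, T1 - T2> operator/(Quantity<L1, T1> a,
                                                           Quantity<L2, T2> b) {
                return {a.value / b.value};
            }

            int main() {
                Meters d{120.0};
                Seconds t{10.0};
                MetersPerSecond v = d / t;
                assert(v.value == 12.0);
                // Meters bad = d + t;  // would not compile: dimensions differ
                return 0;
            }
            ```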

        • bluGill 5 hours ago

          I wrote such a type library myself, and it worked great. However, we eventually realized it was the wrong answer, because you so commonly want to display that thing, and nobody wanted to write each widget with a different API for each of the thousands of different types in my library.

          The current system is a runtime system which has one type, and you set what the unit system is in the constructor. However it means adding a meter to a gallon is a runtime error.

    • blub 8 hours ago

      It’s the sociology of software development.

      The guild of software developers has no real standards, no certification, no proven practices outside <book> and <what $company is doing> while continuing to depend on the whims of project managers, POs and so-called technical leaders and others which can’t tell quality code from their own ass.

      There’s usually no money in writing high-quality software and almost everything in a software development project conspires against quality. Languages like Rust are a desperate attempt at fixing that with technology.

      I guess it works, in a way, but these kind of blog posts just show us how inept most programmers are and why the Rust band-aid was needed in the first place.

      • tormeh 6 hours ago

        Maybe. But I wouldn't diss better languages, linters, and other tool improvements. These systematically increase quality at very low cost. It boggles my mind that the whole industry is not falling over itself to continuously embrace better tools and technology.

  • kevincox 7 hours ago

    I had a friend who noticed that people were often mixing up the arguments to some std constructor (I think it was std::string, with a char argument and another integer argument getting swapped). He searched across Google's codebase and found many (I don't remember the exact number) cases of this, many that he could confirm to be real bugs. He spent months fixing them and I think eventually got some check added to prevent this in the future.

    So this definitely isn't some theoretical problem. I wouldn't even be surprised if you had made this mistake and just hadn't noticed.

    • humanrebar 7 hours ago

      I understand this concern, but at the same time it's not hard to write clang-query statements for the ones you care about. Sometimes it is even a regex! And it's not too expensive to upstream universally relevant checks to clang-tidy.

      The main problem is that too many C++ engineers don't do any of that. They have some sort of learned helplessness when it comes to tooling. Rust for now seems to have core engineers in place that will do this sort of thing on behalf of everyone else. Language design aside, if it can find a way to sustain that kind of solid engineering, it will be hard to argue against.

  • viraptor 9 hours ago

    > but will be noticed by literally just running the code once.

    I assure you that's not the case. Maybe you didn't make that mistake, but if you did I'm sure it sometimes went unnoticed. I've found those issues in my code and in other projects. Sometimes they even temporarily don't matter, because someone did a func(CONST, 0) instead of func(0, CONST) and it turns out CONST is 0 - however the next person gets a crash because they change 0 to 1. A lot of similar issues come from the last line effect https://medium.com/@Code_Analysis/the-last-line-effect-7b1cb... and can last for years without being noticed.

  • devnullbrain 8 hours ago

    >code that has a lot of tricky mathematical algorithms in it, rather than just "plumbing" data between different sources

    Your hierarchy is backwards. Borrowing for algorithmic code is easy; it's writing libraries that can be used by others where it's hard. Rust lets you - makes you - encode it in the API in a way C++ can't yet express.

    > I just very rarely have problems with memory errors in C++ code bases with a little thought applied to lifetimes and use of smart pointers

    If these are sparing you C++ bugs but causing you to struggle with the borrow checker, it's because you're writing code that depends on constraints that you can't force other contributors (or future you) to stick to. For example, objects are thread-unsafe by default. You can use expensive locks, or you can pray that nobody uses it wrong, but you can't design it so it can only be used correctly and efficiently.

  • spacechild1 9 hours ago

    > Why would I want to totally reshape the way that I code in order to fix one of the least significant problems I encounter?

    I feel the same. Rust certainly has many nice properties and features, but the borrow checker is a huge turn-off for me.

  • blub 8 hours ago

    This article presents something I’d expect competent C++ programmers with a few years of experience to know.

    Unfortunately, many programmers are not competent. And the typical modern company will do anything in its power to outsource to often the lowest bidder, mismanage projects and generally reduce quality to the minimum acceptable to make money. That’s why one needs tools like Rust, Java, TypeScript, etc.

    Unfortunately, Rust is still too hard for the average programmer, but at least it will hit them over the hands with a stick when they do something stupid. Another funny thing about Rust is that it’s attracting the functional programming/metaprogramming astronauts in droves, which is at odds with it being the people’s programming language.

    I still don’t think it’s a valuable skill. Before it was lack of jobs and projects, which is still a problem. Now it’s the concern that it’s as fun as <activity>, except in a straitjacket.

choeger 12 hours ago

All this has been known in the PL design community for decades if not half a century by now.

Two things are incredibly frustrating when it comes to safety in software engineering:

1. The arrogance that "practitioners" have against "theorists" (everyone with a PhD in programming languages)

2. The slowness of the adoption of well-tested and thoroughly researched language concepts (think of Haskell type classes, aka, Rust traits)

I like that Rust can pick good concepts and design coherent language from them without inventing its own "pragmatic" solution that breaks horribly in some use cases that some "practitioners" deem "too theoretical."

  • bigbuppo 2 hours ago

    It's weird that this sort of debate around C++ often leaves out the fact that many of the problems with C++ were known before C++ even existed. Outside of a few specific buckets, there is no reason to use C++ for any new projects, and really, there never has been. If you can't stomach Rust for some reason, and I'm one of those people, there are plenty of choices out there without all the pitfalls of C++ or C.

    • ivmaykov an hour ago

      > If you can't stomach Rust for some reason, and I'm one of those people, there are plenty of choices out there without all the pitfalls of C++ or C.

      Unless you are doing embedded programming ...

      • fsloth 42 minutes ago

        I think embedded is one of the specific buckets.

        You target the compiler your client uses for their platform. There is very little choice there.

  • sanderjd 5 hours ago

    Yep, this article is a good example of one way that c++ is bad, but it's not really a great example of rust being particularly good; many other languages support this well. I'm very glad Rust is one of those languages though!

    • groos 2 hours ago

      I had the same thought - what Matt's examples required was strong typing, and that has existed for a very long time outside of the C family world.

  • Ygg2 11 hours ago

    > I like that Rust can pick good concepts and design coherent language from them without inventing its own "pragmatic" solution that breaks horribly in some use cases that some "practitioners" deem "too theoretical."

    I think Rust picked some pretty nifty middle ground. On one side, it's not mindfucking unsafe like C; it chose to remove a class of problems like memory unsafety. On the other side, Rust didn't go for the highest of theoretical grounds. It's not guaranteeing much outside of that, and it also relies a bit on human help (unsafe blocks).

    • eptcyka 10 hours ago

      As per the article, Rust has benefits beyond the ones afforded by the borrow checker.

      • Ygg2 9 hours ago

        Sure, but it is pragmatic in other ways as well :)

        It takes ADTs, but not function currying, and so on.

        • jeffparsons 9 hours ago

          I've occasionally wondered about the lack of currying in Rust; it feels like something that can be done mechanically at compile time, so why not support it? Perhaps to do it cleanly (without more magical privileged functions in core) would require variadic generics?

          • bunderbunder 3 hours ago

            You can absolutely curry functions in Rust. You just have to do it manually because there's no syntactic sugar for it.

            I think that's a good thing. A curried function is a function that takes one argument, and returns a closure that captures the argument. That closure might itself take another argument, and return a new closure that captures both the original argument and the second one. And so on ad infinitum. How ownership and borrowing works across that chain of closures could easily become a touchy issue, so you probably want to be making it as explicit as possible.

            Or perhaps better yet, find an easier way to accomplish the same task. Maybe use a struct to explicitly carry the arguments along until you're ready to call the function.
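
                For illustration, the same manual pattern sketched in C++ (the name add3 is hypothetical); in Rust the equivalent chain would use `move` closures, with the ownership questions described above made explicit at each step:

                    ```cpp
                    #include <cassert>

                    // Manual currying: each call takes one argument and returns
                    // a lambda that captures it by value, until all arguments
                    // are available.
                    auto add3(int a) {
                        return [a](int b) {
                            return [a, b](int c) { return a + b + c; };
                        };
                    }

                    int main() {
                        auto add_1   = add3(1);   // captures 1
                        auto add_1_2 = add_1(2);  // captures 1 and 2
                        assert(add_1_2(3) == 6);
                        assert(add3(10)(20)(30) == 60);
                        return 0;
                    }
                    ```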

          • skybrian an hour ago

            This isn’t Rust-specific, but one reason a language designer might deliberately not implement currying is that it makes function call sites a bit harder to read. You need to already know how many arguments a function takes to make sense of the call’s return type.

        • andrepd 8 hours ago

          I don't think currying is that big a deal, it's just syntactic sugar that might or might not make things easier to read, unlike ADTs or closures which are important core concepts.

          I'd love to have a syntax like

              { foo(%1, bar) }
          
          standing for

              |x| { foo(x, bar) }
          
          though. I'm not aware of any language that has this!
          • tormeh 6 hours ago

            I'd argue that currying is actively harmful (I suspect you agree). I've seen functions take one argument in one source file, and the next argument in another source file. In JavaScript no less. Horrendous stuff. One of my most hated anti-features, along with Scala's implicits. These kinds of features are mostly misused rather than used.

            • bunderbunder 3 hours ago

              From what I've seen, partial application tends to be used for utmost good in dialects of ML, and utmost evil most everywhere else. Chaotic neutral in R/tidyverse.

          • icen 4 hours ago

            In K the arguments are named x y z by default, so you just write:

                { foo[x, bar] }
          • Munksgaard 8 hours ago

            Elixir has this, which is close:

                &foo(&1, bar)
          • masijo 6 hours ago

            Clojure has this

                user=> (#(println %1 %2) "Hello " "Clojure")
                Hello Clojure
          • tcfhgj 7 hours ago

            Powershell has this, why do you like this?

  • blub 4 hours ago

    If the practitioners haven’t adopted what you’re offering for 50+ years, that thing can’t be good.

    Rust is also struggling with its “too theoretical” concepts by the way. The attempts of the community to gaslight the practitioners that the concepts are in fact easy to learn and straightforward are only enjoying mild success, if I may call it that.

    • fsloth 32 minutes ago

      ”If the practitioners haven’t adopted what you’re offering for 50+ years, that thing can’t be good.”

      I don’t think what features are popular in C++ is a good indication of anything. The language is good only due to the insane amounts of investment in the ecosystem, not because of the design of its language features.

      For an industrial language’s inventory of ”nice features to have”, F# and C# are mostly my personal gold standard.

      ”Too theoretical” is IMO not the correct lens to use. I would propose a better lens: a) which patterns you often use, and b) how to implement them in the language design itself.

      A case in point is the gang-of-four book. It mostly gives names to things in C++ that are language features in better languages.

    • db48x 3 hours ago

      I disagree. The advertising and hype pushing people to use C++ is insane. There are hundreds of magazines that exist solely to absorb the advertising budget of Microsoft (and to a lesser extent Intel). Hundreds of conferences every year. You could be writing code in ML at your startup with no complaints and demonstrable success but as soon as your company gets big enough to send your CEO to an industry conference you’ll be switching to C++. The in–flight magazine will extol the benefits of MSVC, speakers like Matt Godbolt will preach Correct by Construction in C++, etc, etc. By the time he gets back he’s been brainwashed into thinking that C++ is the next best thing.

socalgal2 15 hours ago

I already hated C++ (having written 100s of thousands of lines of it in games and at FAANG)

I'd be curious to know what if any true fixes are coming down the line.

This talk: "To Int or to Uint, This is the Question - Alex Dathskovsky - CppCon 2024" https://www.youtube.com/watch?v=pnaZ0x9Mmm0

Seems to make it clear C++ is just broken. That said, and I wish he'd covered this, he didn't mention if the flags he brings up would warn/fix these issues.

I don't want a C++ where I have to remember 1000 rules and if I get one wrong my code is exploitable. I want a C++ where I just can't break the rules except when I explicitly opt into breaking them.

Speaking of which, according to another C++ talk, something like 60% of Rust crates are dependent on unsafe Rust. The point isn't to diss Rust. The point is that a safe C++ with opt-in unsafe could be similar to Rust's opt-in unsafe.

  • aw1621107 12 hours ago

    > speaking of which, according to another C++ talk, something like 60% of rust crates are dependent on unsafe rust.

    It's probably not the source of the stats you had in mind since it's discussing something slightly different, but the Rust Foundation built a tool called Painter [0] for this kind of analysis. According to that [1]:

    > As of May 2024, there are about 145,000 crates; of which, approximately 127,000 contain significant code. Of those 127,000 crates, 24,362 make use of the unsafe keyword, which is 19.11% of all crates. And 34.35% make a direct function call into another crate that uses the unsafe keyword. Nearly 20% of all crates have at least one instance of the unsafe keyword, a non-trivial number.

    > Most of these Unsafe Rust uses are calls into existing third-party non-Rust language code or libraries, such as C or C++.

    To be honest, I would have expected that 60% number to be higher if it were counting unsafe anywhere due to unsafe in the stdlib for vocabulary types and for (presumably) common operations like iterator chains. There's also a whole other argument that the hardware is unsafe so all Rust code will depend on unsafe somewhere or another to run on actual hardware, but that's probably getting a bit into the weeds.

    [0]: https://github.com/rustfoundation/painter

    [1]: https://rustfoundation.org/media/unsafe-rust-in-the-wild-not...

    • eptcyka 10 hours ago

      Memory allocation is unsafe, so any container in the stdlib ends up using unsafe at some point. This does not mean that safe Rust is useless without unsafe - the utility of Rust is that it allows one to create safe interfaces around an unsafe construct.

      • aw1621107 9 hours ago

        Right, I totally agree. I suppose I was kind of trying to express a bit of confusion at the 60% number due to the lack of specifics of what it encompasses (e.g., how far down the dependency chain is that number looking?), unlike the stats I quoted.

    • Ygg2 11 hours ago

      > there's also a whole other argument that the hardware is unsafe so all Rust code will depend on unsafe somewhere or another to run on actual hardware, but that's probably getting a bit into the weeds.

      That's not going into the weeds, by that logic (Nirvana fallacy) no language is safe, you're going to die, so why bother about anything? Just lie down and wait for bugs to eat you.

      • aw1621107 9 hours ago

        Perhaps I got the quip wrong. I was basically trying to reference discussions I've seen elsewhere along the lines of "Rust is not actually memory-safe because it needs unsafe", sometimes followed by the argument you outlined. Those discussions can get a bit involved and I don't think this is a good time/place for them, so I more or less just wanted to reference it without spending much time/words on actually delving into it.

  • eslaught 13 hours ago

    There has been talk of new language frontends for C++:

    Cpp2 (Herb Sutter's brainchild): https://hsutter.github.io/cppfront/

    Carbon (from Google): https://github.com/carbon-language/carbon-lang

    In principle those could enable a safe subset by default, which would (except when explicitly opted-out) provide similar safety guarantees to Rust, at least at the language level. It's still up to the community to design safe APIs around those features, even if the languages exist. Rust has a massive advantage here that the community built the ecosystem with safety in mind from day 1, so it's not just the language that's safe, but the APIs of various libraries are often designed in an abuse-resistant way. C++ is too much of a zoo to ever do that in a coherent way. And even if you wanted to, the "safe" variants are still in their infancy, so the foundations aren't there yet to build upon.

    I don't know what chance Cpp2 or Carbon have, but I think you need something as radical as one of these options to ever stand a chance of meaningfully making C++ safer. Whether they'll take off (and before Rust eats the world) is anyone's guess.

    • aw1621107 12 hours ago

      I don't think Carbon is a C++ frontend like cppfront. My impression is that cppfront supports C++ interop by transpiling/compiling to C++, but Carbon compiles straight to LLVM and supports C++ interop through built-in language mechanisms.

thrwyexecbrain a day ago

The C++ code I write these days is actually pretty similar to Rust: everything is explicit, lots of strong types, very simple and clear lifetimes (arenas, pools), non-owning handles instead of pointers. The only difference in practice is that the build systems are different and that the Rust compiler is more helpful (both in catching bugs and reporting errors). Neither a huge deal if you have a proper build and testing setup and when everybody on your team is pretty experienced.

By the way, using "atoi" in a code snippet in 2025 and complaining that it is "not ideal" is, well, not ideal.
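
For reference, a small sketch of why atoi is problematic and what the modern alternative looks like (std::from_chars, C++17):

    ```cpp
    #include <cassert>
    #include <charconv>
    #include <cstdlib>
    #include <system_error>

    int main() {
        // atoi cannot report failure: "abc" parses as 0, indistinguishable
        // from a legitimate "0", and out-of-range input is undefined behaviour.
        assert(std::atoi("42") == 42);
        assert(std::atoi("abc") == 0);

        // std::from_chars reports errors explicitly instead.
        const char bad[] = "abc";
        int value = 0;
        auto res = std::from_chars(bad, bad + 3, value);
        assert(res.ec == std::errc::invalid_argument);
        return 0;
    }
    ```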

  • mountainriver 19 hours ago

    I still find it basically impossible to get started with a C++ project.

    I tried again recently for a proxy I was writing thinking surely things have evolved at this point. Every single package manager couldn’t handle my very basic and very popular dependencies. I mean I tried every single one. This is completely insane to me.

    Not to mention just figuring out how to build it after that which was a massive headache and an ongoing one.

    Compared to Rust it’s just night and day.

    Outside of embedded programming or some special use cases I have literally no idea why anyone would ever write C++. I’m convinced it’s a bunch of masochists

    • morsecodist 15 hours ago

      Agreed. I have had almost the same experience. The package management and building alone makes Rust worth it for me.

    • runevault 15 hours ago

      When I've dabbled in C++ recently, using CMake to fetch GitHub repos has felt like the least painful thing I've tried (I dabbled in vcpkg and conan a bit), since most libraries are CMake projects.

      I am no expert so take it with a grain of salt, but that was how it felt for me.

      • kylereeve 3 hours ago

        Do you have CMake actually run `git clone`, or do you clone separately and point CMake at the `FIND_X` files?

        • runevault 2 hours ago

          I was using FetchContent or the like; past a certain version of CMake there's a module where you can tell it this is a git repo and it handles all that for you. It has been a few months since I did this so I don't remember the details fully.

    • ValtteriL 12 hours ago

      Felt the same pain with vcpkg. Ended up using OS packages and occasionally simply downloading a pure header-based dependency.

      With Nix, the package selection is great and repackaging is fairly straightforward.

    • almostgotcaught 16 hours ago

      > Every single package manager couldn’t handle my very basic and very popular dependencies

      Well there's your problem - no serious project uses one.

      > I’m convinced it’s a bunch of masochists

      People use cpp because it's a mature language with mature tooling and an enormous number of mature libraries. Same exact reason anyone uses any language for serious work.

      • cratermoon 15 hours ago

        How can you simultaneously call cpp a mature language with mature tooling and acknowledge that there's no working package manager used by any "serious" project?

        • const_cast 14 hours ago

          Package managers per language are a (relatively) new endeavor. The oldest language I can think of that widely adopted one was Perl. Although Perl was quite ahead of its time in a lot of ways, and PHP undid some of the work of Perl and went back to popularizing include-type dependencies instead of formal modules with a package manager.

          C++ "gets away" with it because of templates. Many (most?) libraries are mostly templates, or at the very least contain templates. So you're forced into include-style dependencies and it's pretty painless. For a good library, it's often downloading a single file and just #include-ing it.

          C++ is getting modules now, and maybe that will spur a new interest in package managers. Or maybe not, it might be too late.

          • simonask 10 hours ago

            It's a relatively new endeavor, but it's also a requirement in 2025 if you want to be portable. The Linux ecosystem was focusing on installing dependencies system-wide for decades (that's how traditional `./configure.sh` expects things to work), and this approach is just inferior in so many ways.

            The shenanigans people get into with CMake, Conan, vcpkg, and so on is a patchwork of nightmares and a huge time sink compared to superior solutions that people have gotten used to in other languages, including Rust.

        • tdiff 14 hours ago

          Because cpp is not meant for "rapid prototyping" involving importing half of github with single command. And the reality is that it works.

          • simonask 10 hours ago

            Works for whom?

            C++ build systems are notoriously brittle. When porting a project to a new platform, you're never just porting the code, you are also porting your build system. Every single project is bespoke in some way, sometimes because of taste, but most of the time because of necessity.

            It works because people spend a huge amount of time to make it work.

            • affyboi 6 hours ago

              > Works for whom?

              FAANG, hedge funds/HFT, game studios

            • tdiff 8 hours ago

              Works for numerous projects which "run the world".

              Everyone knows the system is brittle, but somehow manages to handle it.

            • tubs 9 hours ago

              This seems hyperbolic. At work we cross-compile the same code for a decent number of different platforms - six different OSes (Linux, Mac, Windows and some embedded ones) over 20-odd CPU architectures.

              It’s the same build system for all of them.

        • guappa 11 hours ago

          apt install xxxxx-dev

        • almostgotcaught 14 hours ago

          Do you people really not realize how completely asinine you sound with these lowbrow comments? I'll give you a hint: did you know that C also has no package manager?

          • adgjlsfhk1 14 hours ago

            Yeah, and it's also much worse for it. There's a reason everyone in C uses their own linked list implementation and it's not because it's a platonic ideal of perfect software.

            • almostgotcaught 14 hours ago

              The question wasn't whether C/C++ are platonic ideals, the question was whether a language can be mature without a package manager.

              • josephg 13 hours ago

                If we take “mature” to mean “old” then yes - C and C++ are certainly old. If we take “mature” to mean “good”, then my answer changes.

                • imtringued 8 hours ago

                  Agreed. Getting started with a C or C++ project is such a pain in the ass that I won't even bother. Then there is the fact that unless you have special requirements that necessitate C/C++, those languages have nothing going for them.

          • hlpn 14 hours ago

            [flagged]

  • kanbankaren 20 hours ago

    The C++ code I wrote 20 years ago also had strong typing and clear lifetimes.

    Modern C++ has reduced a lot of typing through type inference, but otherwise the language is still strongly typed and essentially the same.

    • pjmlp 13 hours ago

      Unfortunately, thanks to the "code C in C++" crowd, there is this urban myth that goodies like proper encapsulation, stronger types, and RAII were not even available pre-C++98, aka C++ARM.

      Meanwhile, it was one of the reasons why, after Turbo Pascal, my next favourite programming language became C++.

      For me, mastering C after 1992 only mattered because, as a professional, it is something I occasionally have to delve into; better to know your tools even if the grip itself has sharp corners. Otherwise, whenever the option was constrained to either C or C++, I always picked C++.

    • simonask 10 hours ago

      The strong/weak distinction is a bit fuzzy, but reasonable people can have the opinion that C++ is, in fact, loosely/weakly typed. There are countless ways to bypass the type system, and there are implicit conversions everywhere.

      It _is_ statically typed, though, so it falls in a weird category of loosely _and_ statically typed languages.
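
      A few examples of the implicit conversions in question, all of which compile cleanly by default:

          ```cpp
          #include <cassert>
          #include <climits>

          int main() {
              int n = 3.9;     // double -> int silently truncates
              assert(n == 3);

              bool b = 42;     // int -> bool
              assert(b);

              unsigned u = -1; // signed -> unsigned wraps to UINT_MAX
              assert(u == UINT_MAX);
              return 0;
          }
          ```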

  • taylorallred 21 hours ago

    Cool that you're using arenas/pools for lifetimes. Are you also using custom data structures or STL (out of curiosity)?

    • thrwyexecbrain 3 hours ago

      Nothing fancy, I found that one can do almost anything with std::vector, a good enough hash map and a simple hand-rolled intrusive list.

  • yodsanklai a day ago

    > The C++ code I write these days

    Meaning you're in a context where you have control on the C++ code you get to write. In my company, lots of people get to update code without strict guidelines. As a result, the code is going to be complex. I'd rather have a simpler and more restrictive language and I'll always favor Rust projects to C++ ones.

    • bluGill 21 hours ago

      That is easy to say today, but I guarantee in 30 years Rust will have rough edges too. People always want some new feature, and eventually one comes in that cannot be accommodated nicely.

      Of course it will probably not be as bad as C++, but still it will be complex and people will be looking for a simpler language.

      • simonask 10 hours ago

        Rust has rough edges today. The field of programming is still only a few decades old, and there's no doubt that something even shinier will come along, we just don't know yet what that looks like.

        That's not a good reason to stick with inferior tools now, though.

        • bluGill 6 hours ago

          What does inferior mean?

          Rust is inferior to C++ for my needs. This is just a reflection of the fact that we started a large project in C++ before Rust existed, and now have millions of lines. Getting Rust to work with our existing C++ is hard enough as to not be worth it. Rewriting in Rust would cost a billion dollars. Thus, despite all the problems we have with C++ that Rust would solve, Rust is inferior.

          (Rust is working on their C++ interoperability story and we are making changes that will allow using Rust in the future so I reserve the right to change this story in a few years, but only time will tell)

      • timbit42 20 hours ago

        How many rough edges will C++ have in another 30 years?

        • bluGill 6 hours ago

          Who knows. It will likely have more than any other language. Though it will also continue to not get credit for things it got right.

          There will always remain two types of languages: those that nobody uses and those that everybody complains about.

  • andrepd 8 hours ago

    Lack of pattern matching and move-only types means you physically cannot code in C++ as you would in Rust, even ignoring all the memory safety stuff.

bunderbunder 3 hours ago

This is actually the point where Rust starts to frustrate me a little bit.

Not because Rust is doing anything wrong here, but because the first well-known language to really get some of these things right also happens to be a fairly low-level systems language with manual memory management.

A lot of my colleagues seem to primarily be falling in love with Rust because it's doing a good job at some basic things that have been well-known among us "academic" functional programming nerds for decades, and that's good. It arguably made inroads where functional programming languages could not because it's really more of a procedural language, and that's also good. Procedural programming is a criminally underrated and misunderstood paradigm. (As much as I love FP, that level of standoffishness about mutation and state isn't any more pragmatic than OOP being so hype about late binding that every type must support it regardless of whether it makes sense in that case.)

But they're also thoroughly nerdsniped by the borrow checker. I get it, you have to get cozy with the borrow checker if you want to use Rust. But it seems like the moral opposite of sour grapes to me. The honest truth is that, for most the software we're writing, a garbage collected heap is fine. Better, even. Shared-nothing multithreading is fine. Better, even.

So now we're doing more and more things in Rust. Which I understand. But I keep wishing that I could also have a Rust-like language that just lets me have a garbage collector for the 95% of my work where the occasional 50ms pause during run-time just isn't a big enough problem to justify a 50% increase in development and maintenance effort. And then save Rust for the things that actually do need to be unmanaged. Which is maybe 5% of my actual work, even if I have to admit that it often feels like 95% of the fun.

  • Starlevel004 2 hours ago

    > Not because Rust is doing anything wrong here, but because the first well-known language to really get some of these things right also happens to be a fairly low-level systems language with manual memory management.

    It also has half implementations of all the useful features (no distinct enum variant types, traits only half-exist) because you have to code to the second, hidden language that it actually compiles to.

  • legobmw99 3 hours ago

    Based on the rest of your comment I suspect you're already familiar, but a decent candidate for "Rust with a GC" is OCaml, the language the first Rust compiler was written in.

    • bunderbunder 3 hours ago

      It's close. Perhaps there's an interesting conversation to be had about why OCaml hasn't taken over the world the way Rust has.

      The toolchain might be a first candidate. Rust's toolchain feels so very modern, and OCaml's gives me flashbacks to late nights trying to get my homework done on the department's HP-UX server back in college.

jpc0 a day ago

Amazing example of how easy it is to get sucked into the rust love. Really sincerely these are really annoying parts of C++.

The conversion function is more of a language issue. I don’t think there is a simple way of creating a Rust-equivalent version because C++ has implicit conversions. You could probably create a C++-style turbofish though, parse<uint32_t>([your string]), and have it throw or return std::expected. But you would need to implement that yourself, unless there is some stdlib version I don’t know of.

Don’t conflate language features with library features.

And -Wconversion might be useful for this but I haven’t personally tried it since what Matt is describing with explicit types is the accepted best practice.

  • ujkiolp a day ago

    meh, rust is still better cos it’s friendlier

    • jpc0 a day ago

      I don’t disagree. Rust learnt a ton from C++.

      I have my gripes with Rust, though more with its ecosystem and community than the core language. I won’t ever say it’s a worse language than C++.

      • noelnh 20 hours ago

        Could you elaborate on those points, I'm genuinely curious? So far, I have found the Rust community to be immensely helpful, much more so than I experienced the C++ community. Granted, that's quite some time ago and might be at least partially caused by me asking fewer downright idiotic questions. But still, I'm interested in hearing about your experiences.

        • jpc0 16 hours ago

          Rust libraries tend to over-abstract and then need large refactors when those abstractions fall apart. When I’ve complained about it in the past I’ve been met with “You would need the abstraction eventually”. Maybe, but I’m also capable of building it myself if it gets to that.

          Maybe that’s more of a bias with rust media stuff, seems to be going deeper into that rabbit hole though.

          The community was at least, and may still be, very sensitive to Rust being criticised. I genuinely brought an example of a provably correct piece of code that the borrow checker wouldn’t accept, an interior mutability problem. I was told I should build a massive abstraction to avoid the problem and that I’m holding it wrong… It put me off the language for a few years. It shouldn’t have, I should have just ignored the people and continued on, but we all get older and learn things.

          • simonask 10 hours ago

            I think there's a bit of mismanaged expectations, combined with a community that, while generally helpful, suffers from a bit of fatigue from constantly dispelling myths and falsehoods about the language, often presented in bad faith.

            My favorite is when Rust gets dragged into weird American "culture wars" - somehow, it's a "woke" language? (And somehow, that's a problem?)

            But yeah, the language docs are pretty up front about the fact that the borrow checker sometimes rejects code that is provably fine, so it's a weird criticism. The nontrivial breakthrough was that Rust proved that a huge amount of nontrivial code can be written within the restrictions of the borrow checker, eliminating swaths of risk factors without a resource penalty.

        • BlueTemplar 17 hours ago

          For where I am concerned, I don't want to have anything to do with the kind of developers that still think that it's acceptable to use Github, VS Code, or Discord in 2025 in a professional setting, much worse teach a new generation of developers to use them : that's like being a doctor and giving out cigarettes to children.

        • bigstrat2003 18 hours ago

          The Rust community is helpful... but also quite political and extremely hostile to anyone who doesn't share those politics. Even something as anodyne as saying "let's keep politics out of technical discussion" is frequently met with hostility (because many community members believe that tech is inherently political and that trying to keep politics out is really just a bad faith attempt to frame things in terms of the requester's preferred politics). It's also full of drama in a way that other communities online simply aren't. For example, the drama that happened when the guy behind thephd.dev got invited to give the keynote at rustconf, then his talk was downgraded from the keynote - everyone involved in that mess (including other bloggers who weighed in) came off as immature and not someone you would ever want to work with.

          I like the Rust language quite a bit. I find the Rust community to be one of the most toxic places in the entire tech business. Your mileage may vary and that's fine of course - but plenty of people want to stay far away from a community that acts like the Rust community does.

          • kragen 14 hours ago
            • Ygg2 11 hours ago

              To be clear, everyone in the Rust community (on Reddit, Twitter, etc.) was shocked by this, and people started asking for explanations. This led to several people stepping down; it seems to have been a miscommunication between the Rust Foundation and the Rust Project.

              • kragen 5 hours ago

                I'd like to read the other sides of the story; do you have any recommendations?

                On the surface it sounds like a community with such deep pathology that it will take at least a generation following a complete change of leadership to have a chance at recovery. But there are three sides to every story.

                • Ygg2 4 hours ago

                  I think the best summary is this: https://fasterthanli.me/articles/the-rustconf-keynote-fiasco...

                  > On the surface it sounds like a community with such deep pathology

                  First what sort of pathology? You're confusing community with leadership.

                  The community didn't want this, and leadership was doing a restructuring due to change from Foundation and Project. Welcome to OSS projects.

                  Second as opposed to what?

                  A community at the beck and call of your CEO dictator? I'm a Java dev, so all it takes for Java to die is for One Rich Asshole Called Larry Ellison to decide that they (ORACLE) are inserting two mandatory ads to be watched during each Java compiler run. Or god forbid that they will monetize Java.

                  Plus if I had 24/7 insight into how Oracle worked, I'd probably also be much less inclined to join Java as a new dev.

                  To paraphrase Tolstoy: (All perfect languages are dead;) Each imperfect language is imperfect in its own way.

GardenLetter27 a day ago

It's a shame Rust doesn't have keyword arguments or named tuples to make handling some of these things easier without Args/Options structs boilerplate.

  • frankus a day ago

    I work all day in Swift (which makes you go out of your way to omit argument labels) and I'm surprised they aren't more common.

    • kevincox 7 hours ago

      Yeah, this is one of the few things that I love about Swift. I think it gets it exactly right that keyword arguments should be the default and you can opt out in cases where the keyword is really unnecessary.

  • jsat a day ago

    Had the same thought... It's backwards that any language isn't using named parameters at this point.

    • Ygg2 a day ago

      Named parameters do come with a large footgun. Renaming your parameters is a breaking change.

      Especially if you're coming from different langs.

      • const_cast 14 hours ago

        This only really applies to languages that don't check this at compile-time. I don't consider compile-time errors a foot gun. I mean, it should be impossible for that kind of bad code to ever get merged in most reasonable CI/CD processes.

        • Ygg2 11 hours ago

          No? This happens in any language that has keyword args.

          If I delete/rename a field of a class in any statically checked language, it's going to report a compile error, and it's still a breaking change. Same thing with named arguments.

      • Spivak 8 hours ago

        I guess but you're changing your user-visible API so it should be a breaking change. In languages that don't have this type/arity is all that matters and the name is just nice sugar for the implementor who doesn't have to bind them to useful names.

        Even if you don't use keyword args your parameter names are still part of your API surface in Python because callers can directly name positional args. Only recently have you been able to enforce unnamed, positional-only args as well as the opposite.

  • Gazoche 9 hours ago

    Agreed, coming from Python it's one of the main things I miss in Rust. You can achieve something similar with the builder pattern or with structs + the Default trait, but it takes much more effort.

  • shpongled a day ago

    Yep, I would love anonymous record types, ala StandardML/OCaml

grumbel a day ago

There is '-Wconversion' to catch things like this. It will however not trigger in this specific case since g++ assumes converting 1000.0 to 1000 is ok due to no loss in precision.

Quantity(100) is counterproductive here, as that doesn't narrow the type, it does the opposite, it casts whatever value is given to the type, so even Quantity(100.5) will still work, while just plain 100.5 would have given an error with '-Wconversion'.

  • Arnavion a day ago

    The reason to introduce the Quantity wrapper is to not be able to swap the quantity and price arguments.

  • b5n a day ago

    > -Wconversion ... assumes converting 1000.0 to 1000 is ok due to no loss in precision.

    Additionally, `clang-tidy` catches this via `bugprone-narrowing-conversions` and your linter will alert if properly configured.

    • kelnos a day ago

      My opinion is that if you need to run extra tools/linters in order to catch basic errors, the language & its compiler are not doing enough to protect me from correctness bugs.

      I do run clippy on my Rust projects, but that's a matter of style and readability, not correctness (for the most part!).

      • uecker 11 hours ago

        Whether certain warnings are on or off by default in compilers, or in certain warning modes, depends on whether enough people find them useful. Rust caters to people who want strictness, which makes it annoying to use for others, but if you want this you can also, to a large degree, have it in C and C++.

      • b5n a day ago

        There's a bit more nuance here than 'basic errors', and modern c compilers offer a lot of options _if you need to use them_.

        I appreciate that there are guardrails in a tool like rust, I also appreciate that sharp tools like c exist, they both have advantages.

        • Arnavion 19 hours ago

          To be clear, the only difference between Rust and C here is whether the conversion happens by default or not. Rust doesn't do the conversion by default but will let you do it if you want to, with `as`.

          There are also more type-safe conversion methods that perform a more focused conversion. Eg a widening conversion from i8 -> i16 can be done with .into(), a narrowing conversion from i16 -> i8 can be done with .try_into() (which returns a Result and forces you to handle the overflow case), a signed to unsigned reinterpretation like i64 -> u64 can be done with .cast_unsigned(), and so on. Unlike `as` these have the advantage that they stop compiling if the original value changes type; eg if you refactor something and the i8 in the first example becomes an i32, the i32 -> i16 conversion is no longer a widening conversion so the `.into()` will fail to compile.

          • renox 9 hours ago

            That's funny, because if memory serves, 'as' should be avoided in Rust and the other casts should be used. That's a Rust wart which cannot be fixed.

            • Arnavion 3 hours ago

              Yes that's correct, for exactly the reason that it is more likely to keep compiling and possibly not do what you intended if the original value's type changes due to refactoring. However there are still a few conversions that don't have alternatives to `as` - truncating conversions (eg i64 -> i32 that intentionally discards the upper half), int <-> float conversions (eg i64 -> f64, both truncating and checked conversions), unsized pointer casts (eg *const [MaybeUninit<u8>] -> *const [u8], `.cast()` only works for Sized target), and probably a few more.

      • jpc0 a day ago

        How much of what Rust the language checks is actually linter checks implemented in the compiler?

        Conversions may be fine and even useful in many cases, in this case it isn’t. Converting to std::variant or std::optional are some of those cases that are really nice.

      • throwaway76455 21 hours ago

        Setting up clang-tidy for your IDE isn't really any more trouble than setting up a LSP. If you want the compiler/linter/whatever to reject valid code to protect you from yourself, there are tools you can use for that. Dismissing them just because they aren't part of the language (what, do you expect ISO C++ to enforce clang-tidy usage?) is silly.

      • pjmlp 12 hours ago

        I beg to differ, the same reasoning applies to Rust, otherwise there would not be a clippy at all.

favorited a day ago

Side note, if anyone is interested in hearing more from Matt, he has a programming podcast with Ben Rady called Two's Complement.

https://www.twoscomplement.org

karel-3d 10 hours ago

Well, if you don't want to confuse parameters, you should use Objective-C.

You would do

[orderbook sendOrderWithSymbol:"foo" buy:true quantity:100 price:1000.00]

Cannot confuse that!

(I never used swift, I think it retains this?)

writebetterc a day ago

Yes, Rust is better. Implicit numeric conversion is terrible. However, don't use atoi if you're writing C++ :-). The STL has conversion functions that will throw, so separate problem.

  • roelschroeven 20 hours ago

    The numeric conversion functions in the STL are terrible. They will happily accept strings with non-numeric characters in them: they will convert "123abc" to 123 without giving an error. The std::sto* functions will also ignore leading whitespace.

    Yes, you can ask the std::sto* functions for the position where they stopped because of invalid characters and see if that position is the end of the string, but that is much more complex than should be needed for something like that.

    These functions don't convert a string to a number, they try to extract a number from a string. I would argue that most of the time, that's not what you want. Or at least, most of the time it's not what I need.

    atoi has the same problem of course, but even worse.

    • nerpaskteipntei 8 hours ago

      Now there is also the std::from_chars function

      • jeroenhd 6 hours ago

        std::from_chars will still accept "123abc". You have to manually check if all parts of the string have been consumed. On the other hand, " 123" is not accepted, because it starts with an invalid character, so the behaviour isn't "take the first acceptable number and parse that" either.

        To get the equivalent of Rust's

            if let Ok(x) = input.parse::<i32>() {
                 println!("You entered {x}");
            } else {
                eprintln!("You did not enter a number");
            }
        
        you need something like:

             int x{};
             auto [ptr, ec] = std::from_chars(input.data(), input.data() + input.size(), x);
             if (ec == std::errc() && ptr == input.data() + input.size()) { 
                 std::cout << "You entered " << x << std::endl;
             } else {
                 std::cerr << "You did not enter a valid number" << std::endl;
             }
        
        I find the choice to always require a start and an end position, and not to provide a method that simply passes or fails, to be quite baffling. In C++26, they also added an automatic boolean conversion for from_chars' return type to indicate success, which considers "only consumed half the input from the start" to be a success.

        Maybe I'm weird for mostly writing code that does straightforward input-to-number conversions and not partial string parsers, but I have yet to see a good alternative for Rust's parse().

        • roelschroeven 6 hours ago

          I guess there's a place for functions that extract or parse partially, but IMO there is a real need for an actual conversion function like Rust's parse() or Python's int() or float(). I think it's a real shame C++ (and C as well) only offers the first and not the second.

  • titzer a day ago

    > Implicit numeric conversion is terrible.

    It's bad if it alters values (e.g. rounding). Promotion from one number representation to another (as long as it preserves values) isn't bad. This is trickier than it might seem, but Virgil has a good take on this (https://github.com/titzer/virgil/blob/master/doc/tutorial/Nu...). Essentially, it only implicitly promotes values in ways that don't lose numeric information and thus are always reversible.

    In the example, Virgil won't let you pass "1000.00" to an integer argument, but will let you pass "100" to the double argument.

    • plus a day ago

      Aside from the obvious bit size changes (e.g. i8 -> i16 -> i32 -> i64, or f32 -> f64), there is no "hierarchy" of types. Not all ints are representable as floats. u64 can represent up to 2^64 - 1, but f64 can only represent up to 2^53 with integer-level precision. This issue may be subtle, but Rust is all about preventing subtle footguns, so it does not let you automatically "promote" integers to float - you must be explicit (though usually all you need is an `as f64` to convert).

      • titzer 20 hours ago

        Yep, Virgil only implicitly promotes integers to float when rounding won't change the value.

             // OK implicit promotions
             def x1: i20;
             def f1: float = x1;
             def x2: i21;
             def f2: float = x2;
             def x3: i22;
             def f3: float = x3;
             def x4: i23;
             def f4: float = x4;
        
             // compile error!
             def x5: i24;
             def f5: float = x5; // requires rounding
        
        
        This also applies to casts, which are dynamically checked.

             // runtime error if rounding alters value
             def x5: i24;
             def f5: float = float.!(x5);
      • mananaysiempre a day ago

        > Aside from the obvious bit size changes (e.g. i8 -> i16 -> i32 -> i64, or f32 -> f64), there is no "hierarchy" of types.

        Depends on what you want from such a hierarchy, of course, but there is for example an injection i32 -> f64 (and if you consider the i32 operations to be undefined on overflow, then it’s also a homomorphism wrt addition and multiplication). For a more general view, various Schemes’ takes on the “numeric tower” are informative.

        • titzer 20 hours ago

          Virgil allows the maximum amount of implicit int->float injections that don't change values and allows casts (in both directions) that check if rounding altered a value. It thus guarantees that promotions and (successful) casts can't alter program behavior. Given any number in representation R, promotion or casting to type N and then casting back to R will return the same value. Even for NaNs with payloads (which can happen with float <-> double).

    • dzaima 15 hours ago

      Even that is somewhat bad, e.g. it means you miss "some_u64 = some_u32 * 8" losing bits due to promoting after the arith op, not before.

    • renox 10 hours ago

      I disagree: when you use floats, you implicitly accept the precision loss/roundings that comes with using floats.. IMHO int to float implicit conversion is fine as long as you have explicit float to int conversion.

  • im3w1l 20 hours ago

    Forcing people to explicitly casts everything all the time means that dangerous casts don't stand out as much. That's an L for rust imo.

    • thesuperbigfrog 19 hours ago

      The idiomatic Rust way to do conversions is using the From and TryFrom traits:

      https://doc.rust-lang.org/stable/rust-by-example/conversion/...

      https://doc.rust-lang.org/stable/rust-by-example/conversion/...

      If the conversion will always succeed (for example an 8-bit unsigned integer to a 32-bit unsigned integer), the From trait would be used to allow the conversion to feel implicit.

      If the conversion could fail (for example a 32-bit unsigned integer to an 8-bit unsigned integer), the TryFrom trait would be used so that an appropriate error could be returned in the Result.

      These traits prevent errors when converting between types and clearly mark conversions that might fail since they return Result instead of the output type.

      • im3w1l 19 hours ago

        Thanks yeah I probably misremembered or misunderstood.

kasajian a day ago

This seems a bit silly. This is not a language issue. You can have a C++ library that does exactly all the things being shown here so that the application developer doesn't have to worry about them. There would be no C++ language features missing that would prevent you from accomplishing what you're able to do on the Rust side.

So is this really a language comparison, or what libraries are available for each language platform? If the latter, that's fine. But let's be clear about what the issue is. It's not the language, it's what libraries are included out of the box.

  • lytedev a day ago

    The core of this argument taken to its extreme kind of makes the whole discussion pointless, right? All the languages can do all the things, so why bother differentiating them?

    To entertain the argument, though, it may not be a language issue, but it certainly is a selling point for the language (which to me indicates a "language issue") to me if the language takes care of this "library" (or good defaults as I might call them) for you with no additional effort -- including tight compiler and tooling integration. That's not to say Rust always has good defaults, but I think the author's point is that if you compare them apples-to-oranges, it does highlight the different focuses and feature sets.

    I'm not a C++ expert by any stretch, so it's certainly a possibility that such a library exists that makes Rust's type system obsolete in this discussion around correctness, but I'm not aware of it. And I would be incredibly surprised if it held its ground in comparison to Rust in every respect!

  • sdenton4 a day ago

    If the default is a loaded gun pointed at your foot, you're going to end up with lots of people missing a foot. "just git gud" isn't a solution.

    • cbsmith a day ago

      That's an entirely different line of reasoning from the article though, and "just git gud" isn't really the solution here any more than it is to use Rust. There are facilities for avoiding these problems that you don't have to learn how to construct yourself in either language.

  • Etheryte a day ago

    Just like language shapes the way we think and talk about things, programming languages shape both what libraries are written and how. You could write anything in anything so long as it's Turing complete, but in real life we see clearly that certain design decisions at the language level either advantage or disadvantage certain types of solutions. Everyone could in theory write C without any memory issues, but we all know how that turns out in practice. The language matters.

    • Maxatar 18 hours ago

      The Sapir Whorf hypothesis has long been debunked:

      https://en.m.wikipedia.org/wiki/Linguistic_relativity

      • oasisaimlessly 15 hours ago

        The strong hypothesis has been debunked, yes, but nobody is asserting it.

        From your link:

        > Nevertheless, research has produced positive empirical evidence supporting a weaker version of linguistic relativity:[5][4] that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.

  • LinXitoW 21 hours ago

    Sure, you can emulate some of the features and hope that everyone using your library is doing it "right". Just like you could just use a dynamic language, tag every variable with a type, and hope everyone using your library does the MANUAL work of always doing it correct. Guess we don't need types either.

    And while we're at it, why not use assembly? It's all just "syntactic sugar" over bits, doesn't make any difference, right?

  • cbsmith a day ago

    Yeah, I kept thinking, "doesn't mp-units basically address this entirely"?

jsat a day ago

I see an article about how strict typing is better, but what would really be nice here is named parameters. I never want to go back to anonymous parameters.

  • kelnos a day ago

    Yes, this is one of the few things that I think was a big mistake in Rust's language design. I used to do a lot of Scala, and really liked named parameters there.

    I suppose it could still be added in the future; there are probably several syntax options that would be fully backward-compatible, without even needing a new Rust edition.

    • quietbritishjim 9 hours ago

      I suppose the sense in which it is backwards-incompatible is that library authors have named their parameters without intending to make them part of the public interface that they commit to maintaining. Perhaps it could be made backwards-compatible by being opt-in at the function declaration, but that would seem like a bit of a pain.

  • sophacles 21 hours ago

    Why? In 2025 we have tooling available for almost every editor that will annotate that information into the display without needing it present in the file. When I autocomplete a function name, all the parameters are there for me to fill in, and annotated into the display afterwards. It seems like an unnecessary step to reify it and force the bytes to be present in the saved file.

    • skywhopper 6 hours ago

      So those editors could just insert the names for you. The bytes in the source file are not a serious concern, are they?

  • codedokode a day ago

    When there are 3-4 parameters it is too much trouble to write the names.

    • bsder 20 hours ago

      > When there are 3-4 parameters it is too much trouble to write the names.

      Sorry, I don't agree.

      First, code is read far more often than written. The few seconds it takes to type out the arguments are paid again and again each time you have to read it.

      Second, this is one of the few things that autocomplete is really good at.

      Third, almost everybody configures their IDE to display the names anyway. So, you might as well put them into the source code so people reading the code without an IDE gain the benefit, too.

      Finally, yes, they are redundant. That's the point. If the upstream changes something and renames the argument without changing the type I probably want to review it anyway.

      • monkeyelite 9 hours ago

        Making something longer doesn’t make it easier to read, especially in repetition.

    • noitpmeder 21 hours ago

      Not OP, but I imagine he's arguing for something like python's optional named arguments.

brundolf 19 hours ago

Wait, Godbolt is someone's name?

  • frankwiles 19 hours ago

    Yes and he’s a really cool nice guy to boot!

    • brundolf 18 hours ago

      I almost want to call this nominative determinism

      I always thought it was called godbolt because it's like... Zeus blowing away the layers of compilation with his cosmic power, or something. Like it's a herculean task

      • tialaramex 8 hours ago

        Another example: eBay is so named because its founder's solo consulting business, "Echo Bay Technology Group", owned ebay.com, and so when he built his auction web site on that domain everybody just called the auction site "eBay" anyway.

mattgodbolt a day ago

Wow that guy, eh? He seems to turn up everywhere :D

  • oconnor663 10 hours ago

    This account's been pretending to be Matt Godbolt since 2014. Knows him better than he knows himself. Absolute commitment to the bit.

mrsofty 3 hours ago

Godbolt is a legend and I don't care what the others say. I believe him when he said he was "just helping" that sheep through the hedge. Well deserved praise Matty-poo and well written article.

morning-coffee a day ago

Reading "The Rust Book" sold me on Rust (after programming in C++ for over 20 years)

  • mixmastamyk a day ago

    Am about finished, but several chapters near the end seriously put me to sleep. Will try again some other day I suppose.

DrBazza 7 hours ago

The implicit problem here (pun intended) in the given examples are implicitness vs. explicitness.

Rust chose (intentionally or otherwise) to do the opposite of the many things that C++ does, because C++ does it wrong. And C++ does it wrong because we didn't know any better at the time, and the world, pre-internet, was much less connected. Someone had to do it first (or first-ish).

The main thing I like about Rust is the tooling. C++ is death by a thousand build systems and sanitizers.

TinkersW 9 hours ago

Dunno about this, the example C++ code is so obviously bad that I had no desire to watch the video.

Creating strong types for currency seems like common sense, and isn't hard to do. Even the Rust code shouldn't be using basic types.

  • jk3000 9 hours ago

    Somehow this is the pattern when comparing C++ to Rust: write outrageously bad C++ in the first place, then complain about it.

markus_zhang a day ago

What if we have a C that removes the quirks without adding too much brain drain?

So no implicit type conversions, safer strings, etc.

  • cogman10 a day ago

    I've seen this concept tried a few times (For example, MS tried it with Managed C++). The inevitable problem you run into is any such language isn't C++. Because of that, you end up needing to ask, "why pick this unpopular half C/C++ implementation and not Rust/go/D/Java/python/common lisp/haskell."

    A big, hard-to-solve problem is that you are likely using a C because of the ecosystem and/or the performance characteristics. The C header/macro situation makes that a huge headache: all of a sudden you can't bring in, say, Boost, because the header uses the quirks excluded from your smaller C language.

  • o11c a day ago

    I too have been thinking a lot about a minimum viable improvement over C. This requires actually being able to incrementally port your code across:

    * "No implicit type conversions" is trivial, and hardly worth mentioning. Trapping on both signed and unsigned overflow is viable but for hash-like code opting in to wrapping is important.

    * "Safer strings" means completely different things to different people. Unfortunately, the need to support porting to the new language means there is little we can do by default, given the huge amount of existing code. We can however, add new string types that act relatively uniformly so that the code can be ported incrementally.

    * For the particular case of arrays, remember that there are at least 3 different ways to compute an array's length (sentinel, size, end-pointer). All of these will need proper typing support. Particularly remember functions that take things like `(begin, middle, end)`, or `(len, arr1[len], arr2[len])`.

    * Support for nontrivial trailing array-or-other datums, and also other kinds of "multiple objects packed within a single allocation", is essential. Again, most attempted replacements fail badly.

    * Unions, unfortunately, will require much fixing. Most only need a tag logic (or else replacement with bitcasting), but `sigval` and others like it are fundamentally global in nature.

    * `va_list` is also essential to support since it is very widely used.

    * The lack of proper C99 floating-point support, even in $CURRENTYEAR, means that compile-to-C implementations will not be able to support it properly either, even if the relevant operations are all properly defined in the new frontend to take an extra "rounding mode" argument. Note that the platform ABI matters here.

    * There are quite a few things that macros are used for, but ultimately this probably is a finite set so should be possible to automatically convert with a SMOC.

    Failure to provide a good porting story is the #1 mistake most new languages make.

    • trealira 3 hours ago

      > The lack of proper C99 floating-point support, even in $CURRENTYEAR

      What do you mean? What's wrong with floating point numbers in C99?

      • o11c an hour ago

        I mean things like: compilers don't support the pragmas, and if the compiler can "see" constants they are often evaluated with the wrong rounding mode.

        I'm far from an expert but I've seen enough to know it's wrong.

    • uecker 12 hours ago

      I have a plan for a safe C and also type-safe generic and bounds-checked containers. Here is some experimental (!) example: https://godbolt.org/z/G4ncoYjfW

      Except for some missing pieces, this is safe, and I have a prototype based on GCC that would warn about any unsafe features. va_list can be safely used at least with format strings, and for unions I need annotations. Lifetimes are the bigger outstanding issue.

  • wffurr a day ago

    This seems like such an obvious thing to have - where is it? Zig, Odin, etc. all seem much more ambitious.

    • steveklabnik a day ago

      There have been attempts over the years. See here, a decade ago: https://blog.regehr.org/archives/1287

      > eventually I came to the depressing conclusion that there’s no way to get a group of C experts — even if they are knowledgable, intelligent, and otherwise reasonable — to agree on the Friendly C dialect. There are just too many variations, each with its own set of performance tradeoffs, for consensus to be possible.

      • wffurr 6 hours ago

        That was fascinating reading and a graveyard of abandoned "better C" dialects: SaferC, Friendly C, Checked C, etc.

    • IshKebab a day ago

      I think if you are going to fix C's footguns you'll have to change so much you end up with a totally new language anyway, and then why not be ambitious? It costs a lot to learn a new language and people aren't going to bother if the only benefit it brings is things that can sort of mostly be caught with compiler warnings and static analysis.

    • zyedidia a day ago

      I think the only "C replacement" that is comparable in complexity to C is [Hare](https://harelang.org/), but several shortcomings make it unsuitable as an actual C replacement in many cases (little/no multithreading, no support for macOS/Windows, no LLVM or GCC support, etc.).

      • Zambyte 21 hours ago

        And why do you think Zig (and Odin, but I'm not really familiar with that one) is not comparable in complexity to C? If you start with C, replace the preprocessor language with the host language, replace undefined behavior with illegal behavior (panics in debug builds), add different pointer types for different types of pointers (single object pointers, many object pointers, fat many object pointers (slices), nullable pointers), and make a few syntactic changes (types go after the names of values in declarations, pointer dereference is a postfix operator, add defer to move expressions like deallocation to the end of the scope) and write a new standard library, you pretty much have Zig.

  • mamcx a day ago

    If you can live without much of the ecosystem (especially anything async), there is a way to write Rust that stays very simple.

    The core of Rust is actually very simple: Struct, Enum, Functions, Traits.

    • monkeyelite 9 hours ago

      But unfortunately you will encounter async in libraries you want to use, so this approach is difficult

  • uecker 11 hours ago

    I have a plan for a safe subset of C which would just require a compiler to warn about certain constructs. I also have a proposal for a safe string type. I am not so sure about type conversions though, you get useful warnings already with existing compiler flags and you can solve the problem in the article already just by wrapping the types in structs.

  • nitwit005 a day ago

    Because it's easier to add a warning or error. Don't like implicit conversions? Add a compiler flag, and the issue is basically gone.

    Safer strings is harder, as it gets into the general memory safety problem, but people have tried adding safer variants of all the classic functions, and warnings around them.

  • dlachausse a day ago

    Swift is really great these days and supports Windows and Linux. It almost feels like a scripting language other than the compile time of course.

    • kelnos a day ago

      I still have a hard time adopting a language/ecosystem that was originally tied to a particular platform, and is still "owned" by the owners of that platform.

      Sun actually did it right with Java, recognizing that if they mainly targeted SunOS/Solaris, no one would use it. And even though Oracle owns it now, it's not really feasible for them to make it proprietary.

      Apple didn't care about other platforms (as usual) for quite a long time in Swift's history. Microsoft was for years actively hostile toward attempts to run .NET programs on platforms other than Windows. Regardless of Apple's or MS's current stance, I can't see myself ever bothering with Swift or C#/F#/etc. There are too many other great choices with broad platform and community support, that aren't closely tied to a corporation.

      • hmry 21 hours ago

        .NET recently had a (very) minor controversy for inserting what amounts to a GitHub Copilot ad into their docs. So yeah, it sure feels like "once a corporate language, always a corporate language", even if it's transferred to a nominally independent org. It might not be entirely rational, but I certainly feel uncomfortable using Swift or .NET.

      • neonsunset 19 hours ago

        > Microsoft was for years actively hostile toward attempts to run .NET programs on platforms other than Windows

        It's been 10 years. Even before that, no action was ever taken against Mono, and no restrictions were ever imposed. FWIW Swift shares a similar story, except Apple started to care only quite recently about it working anywhere beyond their platforms.

        Oh, and by the way, you need to look at these metrics: https://dotnet.microsoft.com/en-us/platform/telemetry

        Maybe take off the conspiracy hat?

        > There are too many other great choices with broad platform and community support

        :) No, thanks, I'm good. You know why I stayed in .NET land and didn't switch to, say, Go? It's not that it's so good, it's because most alternatives are so bad in one or another area (often many at the same time).

    • smt88 a day ago

      There is no universe where I'm going to use Apple tooling on a day-to-day basis. Their DX is the worst among big tech companies by far.

      • dlachausse a day ago

        They have quite robust command line tooling and a good VS Code plugin now. You don’t need to use Xcode anymore for Swift.

  • Certhas 21 hours ago

    Maybe just unsafe rust?

  • alexchamberlain a day ago

    I'm inferring that you think Rust adds too much brain drain? If so, what?

    • GardenLetter27 a day ago

      The borrow checker rejects loads of sound programs - just read https://rust-unofficial.github.io/too-many-lists/

      Aliasing rules can also be problematic in some circumstances (but also beneficial for compiler optimisations).

      And the orphan rule is also quite restrictive for adapting imported types, if you're coming from an interpreted language.

      https://loglog.games/blog/leaving-rust-gamedev/ sums up the main issues nicely tbh.

      • IshKebab a day ago

        > The borrow checker rejects loads of sound programs

        I bet assembly programmers said the same about C!

        Every language has relatively minor issues like these. Seriously pick a language and I can make a similar list. For C it will be a very long list!

      • oconnor663 11 hours ago

        > The borrow checker rejects loads of sound programs - just read https://rust-unofficial.github.io/too-many-lists/

        It's important to be careful here: a lot (most? all?) of these rejections are programs that could be sound in a hypothetical Rust variant that didn't assert the unique/"noalias" nature of &mut reference, but are in fact unsound in actual Rust.

    • leonheld a day ago

      I love Rust, but I after doing it for a little while, I completely understand the "brain drain" aspect... yes, I get significantly better programs, but it is tiring to fight the borrow-checker sometimes. Heck, I currently am procrastinating instead of going into the ring.

      Anyhow, I won't go back to C++ land. Better this than whatever arcane, 1000-line, template-hell error message that kept me fed when I was there.

simpaticoder 19 hours ago

I don't get it. Isn't this a runtime problem and not a compile-time problem? buy() or sell() is going to be called with dynamic parameters at runtime, in general. That is, calls with concrete values are NOT going to be hard-coded into your program. I would write the function to assert() invariants within the function, and avoid chasing compile-time safety entirely. If parameter order was a concern, then I'd modify the function to take a struct, or similar.
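For what it's worth, the struct-plus-assert approach described above might look like this in Rust (all names are invented for illustration):

```rust
// Struct parameters plus runtime checks, as the comment suggests.
struct OrderParams {
    symbol: &'static str,
    buy: bool,
    quantity: i64,
    price: f64,
}

fn send_order(p: &OrderParams) -> Result<String, String> {
    // Invariants checked inside the function, at runtime:
    if p.quantity <= 0 {
        return Err(format!("bad quantity: {}", p.quantity));
    }
    if p.price <= 0.0 {
        return Err(format!("bad price: {}", p.price));
    }
    Ok(format!("{} buy={} qty={} px={}", p.symbol, p.buy, p.quantity, p.price))
}

fn main() {
    let ok = send_order(&OrderParams { symbol: "GOOG", buy: true, quantity: 100, price: 1000.0 });
    assert!(ok.is_ok());
    let bad = send_order(&OrderParams { symbol: "GOOG", buy: true, quantity: -100, price: 1000.0 });
    assert!(bad.is_err());
}
```

Field names at the call site make the argument mapping explicit, which is roughly what keyword arguments buy you in the Python comments elsewhere in this thread.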

  • brundolf 19 hours ago

    > Isn't this a runtime problem and not a compile-time problem? buy() or sell() is going to be called with dynamic parameters at runtime, in general.

    Yes, but the strength of Rust's type system means you're forced to handle those bad dynamic values up front (or get a crash, if you don't). That means the rest of your code can rest safe, knowing exactly what it's working with. You can see this in OP's parsing example, but it also applies to database clients and such

    • simpaticoder 18 hours ago

      What if the valid input for quantity must be greater than 0? A reasonable constraint, I think. The OP's example is contrived to line up with Rust's built-in types, and ignores the general problem.

      • brundolf 13 hours ago

        It's a common fallacy to equate "there's a limit to how much we can guarantee" with "guaranteeing anything is a waste of time". Each guarantee we can make eliminates a whole class of possible bugs

        That said, Rust also makes it very easy to define your own types that can only be constructed/unpacked in limited ways, which can enforce special constraints on their contents. And it has a cultural norm of doing this in the standard library and elsewhere

        Eg: a sibling poster noted the NonZero<T> type. Another example is that Rust's string types are guaranteed to always contain valid UTF-8, because whenever you try to convert a byte array into a string, it gets checked and possibly rejected.
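        Both guarantees are easy to demonstrate with the stdlib:

```rust
use std::num::NonZeroU64;

fn main() {
    // NonZeroU64::new returns None for 0, so "quantity > 0" can live in the type.
    assert!(NonZeroU64::new(0).is_none());
    assert_eq!(NonZeroU64::new(100).unwrap().get(), 100);

    // String::from_utf8 rejects invalid bytes instead of storing them.
    assert!(String::from_utf8(vec![0xff, 0xfe]).is_err());
    assert_eq!(String::from_utf8(b"ok".to_vec()).unwrap(), "ok");
}
```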

      • siev 16 hours ago

        Rust also has the standard NonZero<T> type for that use case.

tumdum_ a day ago

The one thing that sold me on Rust was that I no longer had to chase down heisenbugs caused by memory corruption.

adamc a day ago

Coming from python (or Common Lisp, or...), I wasn't too impressed. In Python I normally make the arguments of any function that takes more than a couple be keyword arguments, which guarantees that you are aware of how the arguments are being mapped to inputs.

Even Rust's types aren't going to help you if two arguments simply have the same types.

  • rq1 a day ago

    Just create dummy wrappers to make a type-level distinction. A Height and a Width can be two separate types even if each is basically just a float.

    Or another (dummy) example transfer(accountA, accountB). Make two types that wrap the same type but one being a TargetAccount and the other SourceAccount.

    Use the type system to help you, don’t fight it.
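    A sketch of the account example (the wrapper names come from the comment above; everything else is invented):

```rust
// Same underlying u64, but distinct types: the compiler rejects swapped arguments.
struct SourceAccount(u64);
struct TargetAccount(u64);

fn transfer(from: SourceAccount, to: TargetAccount) -> String {
    format!("transfer {} -> {}", from.0, to.0)
}

fn main() {
    assert_eq!(transfer(SourceAccount(1), TargetAccount(2)), "transfer 1 -> 2");
    // transfer(TargetAccount(2), SourceAccount(1)); // does not compile
}
```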

    • jpc0 a day ago

      Do you really want width and height or do you actually want dimensions or size? Same with transfer, maybe you wanted a transaction that gets executed. Worst case here use a builder with explicit function names.

      • rq1 a day ago

        I don’t really understand your point there.

        Sound type systems are equivalent to proof systems.

        You can use them to design data structures whose mere existence guarantees the coherence and validity of your program's state.

        The basic example is “Fin n” that carries at compile time the proof that you made the necessary bounds checks at runtime or by construction that you never exceeded some bound.

        Some languages allow you to build entire type level state machines! (eg. to represent these transactions and transitions)
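        In Rust this is usually done with the typestate pattern; a minimal sketch (all names invented for illustration):

```rust
// Typestate: a transaction whose legal transitions are encoded in types.
use std::marker::PhantomData;

struct Draft;
struct Signed;

struct Transaction<State> {
    amount: u64,
    _state: PhantomData<State>,
}

impl Transaction<Draft> {
    fn new(amount: u64) -> Self {
        Transaction { amount, _state: PhantomData }
    }
    // Consumes the draft; only a Signed transaction comes back.
    fn sign(self) -> Transaction<Signed> {
        Transaction { amount: self.amount, _state: PhantomData }
    }
}

impl Transaction<Signed> {
    // execute exists only for signed transactions; a Draft cannot be executed.
    fn execute(self) -> u64 {
        self.amount
    }
}

fn main() {
    let amount = Transaction::new(100).sign().execute();
    assert_eq!(amount, 100);
    // Transaction::new(100).execute(); // does not compile: Draft has no execute
}
```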

        • jpc0 16 hours ago

          My point is that a Width type is usually not the type you are looking for. What you probably wanted was a Size or Dimensions type that holds both width and height. The problem was maybe not two arguments being confused, but in reality a single thing with two elements…

          • rq1 8 hours ago

            Ah I see, it’s a solution too!

nmeofthestate 7 hours ago

If I tried to repro this problem in C++ code, I believe it would fail clang-tidy checks because of the implicit numeric casts.

xarope 15 hours ago

This reminds me of SQL's constraints, or pydantic's custom types and validators, which can validate that a value is an int between 0 and 999, rejecting out-of-range values like -1 or 1000.

pydantic is a library for python, but I'm not aware of anything similar in rust or golang that can do this yet? (i.e. not just schema validation, but value range validation too)
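In Rust, the stdlib-only version of this is a newtype whose `TryFrom` impl does the range check; a sketch (the `Bounded` type is invented here, and there are crates that generate this kind of code from annotations):

```rust
use std::convert::TryFrom;

// Hypothetical bounded type: an int that must be in 0..=999.
#[derive(Debug, PartialEq)]
struct Bounded(i32);

impl TryFrom<i32> for Bounded {
    type Error = String;
    fn try_from(v: i32) -> Result<Self, Self::Error> {
        if (0..=999).contains(&v) {
            Ok(Bounded(v))
        } else {
            Err(format!("{v} out of range 0..=999"))
        }
    }
}

fn main() {
    assert!(Bounded::try_from(500).is_ok());
    assert!(Bounded::try_from(-1).is_err());
    assert!(Bounded::try_from(1000).is_err());
}
```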

zombot 9 hours ago

Types are a catastrophe in C++ that cannot be fixed, no matter how much cruft you bolt on after the fact. It's time to leave this dinosaur and use a sane language.

nyanpasu64 20 hours ago

The problem I've always had with unit type wrappers is you can't convert between a &[f32] and a &[Amplitude<f32>] like you can convert a single scalar value.
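With a `#[repr(transparent)]` wrapper, the slice-level conversion can at least be written once behind a safe function; this is essentially what crates like bytemuck automate. A hedged sketch (the wrapper is simplified from the comment's `Amplitude<f32>`):

```rust
// repr(transparent) guarantees Amplitude<T> has the same layout as T,
// which is what makes the slice cast below sound.
#[repr(transparent)]
#[derive(Debug, PartialEq)]
struct Amplitude<T>(T);

fn as_amplitudes<T>(raw: &[T]) -> &[Amplitude<T>] {
    // SAFETY: Amplitude<T> is repr(transparent) over T, so layouts match.
    unsafe { std::slice::from_raw_parts(raw.as_ptr() as *const Amplitude<T>, raw.len()) }
}

fn main() {
    let raw = [0.5f32, 1.0];
    let amps = as_amplitudes(&raw);
    assert_eq!(amps.len(), 2);
    assert_eq!(amps[0], Amplitude(0.5f32));
}
```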

ModernMech a day ago

What sold me on Rust is that I'm a very bad programmer and I make a lot of mistakes. Given C++, I can't help but hold things wrong and shoot myself in the foot. My media C++ coding session is me writing code, getting a segfault immediately, and then spending time chasing down the reason for that happening, rinse and repeat.

My median Rust coding session isn't much different, I also write code that doesn't work, but it's caught by the compiler. Now, most people call this "fighting with the borrow checker" but I call it "avoiding segfaults before they happen" because when I finally get through the compiler my code usually "just works". It's that magical property Haskell has, Rust also has it to a large extent.

So then what's different about Rust vs. C++? Well Rust actually provides me a path to get to a working program whereas C++ just leaves me with an error message and a treasure map.

What this means is that although I'm a bad programmer, Rust gives me the support I need to build quite large programs on my own. And that extends to the crate ecosystem as well, where they make it very easy to build and link third party libraries, whereas with C++ ld just tells you that it had a problem and you're left on your own to figure out exactly what.

  • jpc0 a day ago

    Using your media example, since I have a decent amount of experience there: did you just use off-the-shelf libraries? Effectively all the media libraries are written in, or expose, a C API. So now you not only need to deal with Rust, you also need to deal with Rust FFI.

    There are some places I won’t be excited to use rust, and media heavy code is one of those places…

    • sophacles 21 hours ago

      Given that the second paragraph starts with "my median rust..." i assume the "media C++" is actually a typo for "median C++".

  • kanbankaren 20 hours ago

    > Given C++, I can't help but hold things wrong and shoot myself

    Give an example. I have been programming in C/C++ for close to 30 years and the places where I worked had very strict guidelines on C++ usage. We could count the number of times we shot ourselves due to the language.

    • mb7733 20 hours ago

      Isn't that their point though? They don't have 30 years of C/C++ experience and a workplace with very strict guidelines. They are just trying to write some code, and they run into trouble on C++'s sharper edges.

      • kanbankaren 19 hours ago

        > with very strict guidelines.

        Low-level languages always came with a set of usage guidelines. Either you make the language so safe that no one can shoot themselves in the foot, sacrificing performance, or you provide guidelines on how to use it while retaining the ability to extract maximum performance from the hardware.

        C/C++ shouldn't be approached like programming in Javascript/Python/Perl.

        • zaphar 18 hours ago

          And yet, Rust doesn't sacrifice performance and it has all kinds of those guardrails.

          • uecker 12 hours ago

            I think it is a bit of a myth that Rust does not sacrifice performance. If you stick to the safe part, Rust usually does not seem to achieve the performance of C/C++, according to what I have seen and read. I agree that the cleaner separation of unsafe and safe parts is an advantage of Rust.

feverzsj 10 hours ago

The C++ code is just pure nonsense. If you really want fool-proofing, C++ offers concepts, which are far superior to what Rust offers.

antirez 21 hours ago

You can have two arguments that are semantically as distinct and important as quantity and price yet are both integers, and swapping them is a big issue anyway. And you would be forced, if you like this kind of programming, to create distinct types anyway. But I never trust this kind of "toy" defensive programming. The value comes from testing the code very well, from a rigorous quality focus.

TnS-hun 13 hours ago

Examples miss angle brackets. For example:

  template  explicit Quantity(T quantity) : m_quantity(quantity) {

  sendOrder("GOOG", false, Quantity(static_cast(atoi("-100"))),

monkeyelite 9 hours ago

This article compares only one specific feature of C++ with Rust: integral type conversions.

  • tialaramex 8 hours ago

    All of C++ has implicit type conversions. The language even has a keyword (explicit) to try to rein this in, because it's so obviously a bad idea.

    In Rust what they'd do if they realised there's a problem like this is make explicit conversion the default, with a language Edition, and so within a few years just everybody is used to the improved language. In C++ instead you have to learn to write all the appropriate bugfix keywords all over your software, forever.

    • monkeyelite 4 hours ago

      > because it's so obviously a bad idea.

      Agreed. The history here is compatibility with C type conversion.

      I just expected a more compelling Rust/C++ comparison, but we got an emphasis on a poorly designed feature which the standard has already taken steps to improve.

      • tialaramex 3 hours ago

        No, implicit conversion is a deliberate C++ feature and no analog existed in C. Like a lot of awful things about C++ this is their own choice and it's frustrating that they try to blame C for their choices.

        In C++ when we define a class Foo (a thing which doesn't exist in C) and we write a constructor Foo(Bar x) (which doesn't exist in C) which takes a single parameter [in this case a Bar named x], that is implicitly adopted as a conversion for your new user defined type and by default without any action on your part the compiler will just invoke that constructor to make a Bar into a Foo whenever it thinks that would compile.

        This is a bad choice, and it's not a C choice, it's not about "compatibility".

        • monkeyelite 3 hours ago

          > No, implicit conversion is a deliberate C++ feature and no analog existed in C

          No.

          > it's not a C choice, it's not about "compatibility".

          One of the design goals of C++ classes is that you can create a class as powerful as int; you can't do that without implicit conversion.

          • tialaramex 3 hours ago

            It would have been perfectly possible, not to mention obviously better, to make people actually write what they meant when defining the new type. That has no impact on the dubious goal of being able to make your own int-like type: you could still gift your type implicit conversions in use if that's what you want, which it often is not.

            This is just another thing on the deep pile of wrong defaults in C++.

gbin a day ago

Interestingly in Rust I would immediately use an Enum for the Order! Way more powerful semantically.
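Something like this, presumably (field names invented):

```rust
// An enum makes buy/sell a variant rather than a bool flag,
// and match forces every variant to be handled.
#[derive(Debug)]
enum Order {
    Buy { symbol: String, quantity: u64, limit_cents: u64 },
    Sell { symbol: String, quantity: u64, limit_cents: u64 },
}

fn describe(order: &Order) -> String {
    match order {
        Order::Buy { symbol, quantity, .. } => format!("buy {quantity} {symbol}"),
        Order::Sell { symbol, quantity, .. } => format!("sell {quantity} {symbol}"),
    }
}

fn main() {
    let order = Order::Buy { symbol: "GOOG".into(), quantity: 100, limit_cents: 100_000 };
    assert_eq!(describe(&order), "buy 100 GOOG");
}
```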

time4tea 12 hours ago

aka use Tiny Types, they will help you.

Been true in all statically typed languages for decades!

It's good advice.

mansoor_ 10 hours ago

FYI newer builds of GCC have this functionality.

nailer 14 hours ago

> Before we go any further, let me just say he acknowledges floating point is not right for price and later talks about how he usually deals with it. But it makes for a nice example, bear with us.

OK but "this makes for a nice example" is silly, given that the only reason the example throws an error is that you used a float here, when both `quantity` and `price` would have been ints.

    error[E0308]: arguments to this function are incorrect
    --> order/order-1.rs:7:5
      |
    7 |     send_order("GOOG", false, 1000.00, 100); // Wrong
      |     ^^^^^^^^^^                -------  --- expected `f64`, found `{integer}`
      |                               |
      |                               expected `i64`, found `{float}`

I love Rust, but this is artificial.

  • tialaramex 4 hours ago

    Why are they "ints"? One of the first things we realise, in both C++ and Rust, is that we mostly don't want these primitives like "int", we want our own user-defined types, and Rust is better at that in practice.

    In C and C++, people tend to actually write that the file descriptor will be an int, and the timeout will be an int, and the user account number will be an int, and the error code will be an int... because the language doesn't help much when you don't want that.

    In Rust, people actually write that the file descriptor will be an OwnedFd (from the stdlib), the timeout will be a Duration (from the stdlib), the user account number might be their own AcctNo, and that error code is maybe MyCustomError.

    This is a language ethos thing, C++ got string slices after Rust despite the language being much older. String slices which are a really basic central idea, but eh, C++ programmers a decade ago just had char * pointers instead and tried not to think about it too much. Still today plenty of C++ APIs don't use string slices, don't work with a real duration type, and so on. It's technically possible but the language doesn't encourage this.

    What C++ does encourage is magic implicit conversion, as with this f64 versus i64 case.

spyrja 20 hours ago

To be fair, this sort of thing doesn't have to be so much worse in C++ (yes, it would have been nice if it had been built into the language itself to begin with). You just need a function that does a back-and-forth conversion and then double-checks the result, i.e.:

  #include <exception>
  #include <sstream>
  #include <stdexcept>  // std::logic_error lives here, not in <exception>

  template <typename From, typename To>
  void convert_safely_helper_(From const& value, To& result) {
    std::stringstream sst;
    sst << value;
    sst >> result;
  }

  // Doesn't throw, just fails
  template <typename From, typename To>
  bool convert_safely(From const& value, To* result) {
    From check{};  // value-initialise so check never starts out indeterminate
    convert_safely_helper_(value, *result);
    convert_safely_helper_(*result, check);
    if (check != value) {
      *result = To();
      return false;
    }
    return true;
  }

  // Throws on error
  template <typename To, typename From>
  To convert_safely(From const& value) {
    To result;
    if (!convert_safely(value, &result))
      throw std::logic_error("invalid conversion");
    return result;
  }

  #include <iostream>

  template <typename Buy, typename Quantity, typename Price>
  void sendOrder(const char* symbol, Buy buy, Quantity quantity, Price price) {
    std::cout << symbol << " " << convert_safely<bool>(buy) << " "
            << convert_safely<unsigned>(quantity) << " " << convert_safely<double>(price)
            << std::endl;
  }

  #define DISPLAY(expression)         \
    std::cout << #expression << ": "; \
    expression

  template <typename Function>
  void test(Function attempt) {
    try {
      attempt();
    } catch (const std::exception& error) {
      std::cout << "[Error: " << error.what() << "]" << std::endl;
    }
  }

  int main(void) {
    test([&] { DISPLAY(sendOrder("GOOG", true, 100, 1000.0)); });
    test([&] { DISPLAY(sendOrder("GOOG", true, 100.0, 1000)); });
    test([&] { DISPLAY(sendOrder("GOOG", true, -100, 1000)); });
    test([&] { DISPLAY(sendOrder("GOOG", true, 100.5, 1000)); });
    test([&] { DISPLAY(sendOrder("GOOG", 2, 100, 1000)); });
  }

Output:

  sendOrder("GOOG", true, 100, 1000.0): GOOG 1 100 1000
  sendOrder("GOOG", true, 100.0, 1000): GOOG 1 100 1000
  sendOrder("GOOG", true, -100, 1000): GOOG 1 [Error: invalid conversion]
  sendOrder("GOOG", true, 100.5, 1000): GOOG 1 [Error: invalid conversion]
  sendOrder("GOOG", 2, 100, 1000): GOOG [Error: invalid conversion]

Rust of course leaves "less footguns laying around", but I still prefer to use C++ if I have my druthers.
  • pjmlp 12 hours ago

    Yes, from a safety point of view Rust is much the better option. However, for the ecosystems I care about (language runtimes and GPU coding), both professionally and as a hobby, C++ is the systems language to go with; using Rust in such contexts would require me to introduce extra layers and do yak shaving instead of working on the actual problem I want to code for.

mempko a day ago

Rust does seem to have a lot of nice features. My biggest blocker for me going to Rust from C++ is that C++ has much better support for generic programming. And now that Concepts have landed, I'm not aware of any language that can compete in this area.

atemerev a day ago

Right. I attempted using Rust for trading-related code as well. However, I failed to write a dynamically linked always sorted order book where you can splice orders in the middle. It is just too dynamic for Rust. Borrow checker killed me.

And don't get me started on dynamic graphs.

I would happily use Rust over C++ if it had all other improvements but similar memory management. I am completely unproductive with Rust model.

  • kelnos a day ago

    The nice thing is that you can always drop down to unsafe and use raw pointers if your data structure is truly not suited to Rust's ownership rules.

    And while unsafe Rust does have some gotchas that vanilla modern C++ does not, I would much rather have a 99% memory-safe code base in Rust than a 100% "who knows" code base in C++.

    • atemerev 20 hours ago

      I have read the "too many linked lists" story and I think the other commenters here are right; the less pointers the better. Even with unsafe, there's just too much ceremony.

  • 0x1ceb00da a day ago

    > Borrow checker killed me.

    You gotta get your timing right. Right hook followed by kidney shot works every time.

  • sunshowers a day ago

    I apologize for the naive question, but that sounds like a heap?

    • jpc0 a day ago

      In my experience you need to approach this with vec or arrays of some sort and pass indices around… “We have pointers at home” behaviour. This is fine but coming from C++ it definitely feels weird…

      • sunshowers 21 hours ago

        I agree in general Rust makes you use arrays and indexes, but heaps are traditionally implemented that way in any language.

      • bigstrat2003 a day ago

        Why not just use pointers? Rust has them, they aren't evil or anything. If you need to make a data structure that isn't feasible with references due to the borrow checker (such as a linked list), there's absolutely nothing wrong with using pointers.
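        A minimal raw-pointer structure really isn't that much ceremony; a sketch of a singly linked stack:

```rust
use std::ptr;

// Singly linked stack built on raw pointers; unsafe is confined to pop.
struct Node<T> {
    elem: T,
    next: *mut Node<T>,
}

pub struct Stack<T> {
    head: *mut Node<T>,
}

impl<T> Stack<T> {
    pub fn new() -> Self {
        Stack { head: ptr::null_mut() }
    }

    pub fn push(&mut self, elem: T) {
        // Leak the box into a raw pointer; ownership now lives in `head`.
        self.head = Box::into_raw(Box::new(Node { elem, next: self.head }));
    }

    pub fn pop(&mut self) -> Option<T> {
        if self.head.is_null() {
            return None;
        }
        // SAFETY: `head` was produced by Box::into_raw and is not aliased.
        let node = *unsafe { Box::from_raw(self.head) };
        self.head = node.next;
        Some(node.elem)
    }
}

impl<T> Drop for Stack<T> {
    fn drop(&mut self) {
        while self.pop().is_some() {}
    }
}

fn main() {
    let mut s = Stack::new();
    s.push(1);
    s.push(2);
    assert_eq!(s.pop(), Some(2));
    assert_eq!(s.pop(), Some(1));
    assert_eq!(s.pop(), None);
}
```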

        • atemerev 20 hours ago

          And it will look like this: https://rust-unofficial.github.io/too-many-lists/sixth-final...

          (filled with boilerplate, strange Rust idioms, borrow_unchecked, PhantomData, and you still have to manage lifetime annotations).

          • bigstrat2003 18 hours ago

            And? I don't really see the issue. It works, it is sound, and it has a nice clean interface for safe code to use. That's all I really ask for. Lots of useful things in programming are quite gnarly under the hood, but that doesn't mean those things aren't worth using.

            • uecker 11 hours ago

              It is fine, there is just not much Rust safety advantage left then. Also, in C/C++ the errors do not usually occur when using a nicely defined API, but when doing the low-level gnarly stuff and getting it wrong. As said before, I think there is some advantage to Rust having safe and unsafe subsets, but it is nowhere near as big as people claim.

              • sunshowers 41 minutes ago

                Is there a safety advantage to using Java given that the HotSpot JVM is written in C++?

                All safe code is built on a foundation of unsafe code.

              • bigstrat2003 an hour ago

                > It is fine, there is just not much Rust safety advantage left then.

                There's exactly as much as there was before though. The entire point of the Rust safety paradigm is that you can guarantee that unsafe code is confined to only where it is needed. Nobody ever promised "you will never have to write unsafe code", because that would be clearly unfeasible for the systems programming domain Rust is trying to work in.

                I frankly cannot understand why people are so willing to throw the baby out with the bathwater when it comes to Rust safety. It makes no sense to me to say "my code needs to have some % unsafe, so I'll just make it 100% unsafe then" (which is effectively what one does when they use C or C++ instead). Why insist on not taking any safety gains at all when one can't have 100% gain?

    • atemerev 20 hours ago

      We have to do arbitrary insertions/deletions from the middle, many of them. I think it is more like BTreeMap, but we need either sorting direction or rev(), and there were some problems with both approaches I tried to solve, but eventually gave up.

  • hacker_homie a day ago

    I have run into similar issues trying to build real applications. You end up spending more time arguing with the borrow checker than writing code.

    • lytedev a day ago

      I think this is true initially and Rust didn't "click" for me for a long time.

      But once you are _maintaining_ applications, man it really does feel like absolute magic. It's amazing how worry-free it feels in many respects.

      Plus, once you do embrace it, become familiar, and start forward-thinking about these things, especially in areas that aren't every-nanosecond-counts performance-wise and can simply `Arc<>` and `.clone()` where you need to, it is really quite lovely and you do dramatically less fighting.
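      As a minimal sketch of that approach (hypothetical names, not from any particular codebase): share read-only data across threads with `Arc`, where `clone` is a cheap reference-count bump rather than a deep copy.

      ```rust
      use std::sync::Arc;
      use std::thread;

      fn main() {
          // One allocation, shared by reference count across threads.
          let config = Arc::new(String::from("shared settings"));
          let handles: Vec<_> = (0..3)
              .map(|i| {
                  // Arc::clone bumps the count; it does not copy the String.
                  let cfg = Arc::clone(&config);
                  thread::spawn(move || format!("worker {i}: {cfg}"))
              })
              .collect();
          for h in handles {
              println!("{}", h.join().unwrap());
          }
      }
      ```

      No lifetime annotations needed anywhere, at the cost of a couple of atomic increments.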

      Rust is still missing a lot of features that other more-modern languages have, no doubt, but it's been a great ride in my experience.

      • skippyboxedhero a day ago

        Using reference counts is a real issue.

        The idea with Rust is that you get safety...not that you get safety at the cost of performance. The language forces you into paying a performance cost for using patterns when it is relatively easy for a human to reason about safety (imo).

        You can use `unsafe` but you naturally ask yourself why I am using Rust (not rational, but true). You can use lifetimes but, personally, every time I have tried to use them I haven't been able to indicate to the compiler that my code is actually safe.

        In particular, the protections against double-free and use-after-free are extremely limiting, and it is possible to reason about these particular bugs in other ways (e.g. `defer` in Go and Zig) that don't force you to change the way you code.

        Rust is good in many ways, but the specific problem mentioned at the top of this chain is a big issue. Just saying "don't use this type of data structure unless you pay a performance cost" isn't an actual solution to the problem. The problem with Rust is that it tries to force safety but doesn't have good ways for devs to tell the compiler that code is safe...that is a fundamental weakness.

        I use Rust quite a bit, it isn't a terrible language and is worth learning but these are big issues. I would have reservations using the language in my own company, rather than someone else's, and if I need to manage memory then I would look elsewhere atm. Due to the size of the community, it is very hard not to use Rust too (for example, Zig is great...but no-one uses it).

        • lytedev a day ago

          The idea with rust is that you _can_ have safety with no performance cost if you need it, but depending on what you're building, of course, that may imply extra work.

          The pragmatism of Rust means that you can use reference counting if it suits your use case.

          Unsafe also doesn't mean throwing out the Rustiness of Rust, but others have written more extensively about that and I have no personal experience with it.

          > The problem with Rust is that it tries to force safety but doesn't have good ways for devs to tell the compiler code is safe...that is a fundamental weakness.

          My understanding is that this is the purpose of unsafe, but again, I can't argue against these points from a standpoint of experience, having stuck pretty strictly to safe Rust.

          Definitely agree that there are issues with the language, no argument there! So do the maintainers!

          > if I need to manage memory then I would look elsewhere atm

          Haha I have the exact opposite feeling! I wouldn't try to manage memory any other way, and I'm guessing it's because memory management is more intuitive and well understood by you than by me. I'm lazy and very much like having the compiler do the bulk of the thinking for me. I'm also happy that Rust allows for folks like me to pay a little performance cost and do things a little bit easier while maintaining correctness. For the turbo-coders out there that want the speed and the correctness, Rust has the capability, but depending on your use case (like linked lists) it can definitely be more difficult to express correctness to the compiler.

          • skippyboxedhero 21 hours ago

            Agree, that is the purpose of unsafe, but there is a degree of irrationality, which I am guilty of, about using unsafe in Rust. I also worry about unsafety leaking if I am using a raw pointer on a struct...but the stdlib uses a lot of unsafe code, so I should be comfortable using it too.

            I think the issue that people have is that they come into Rust with the expectation that these problems are actually solved. As I said, it would be nice if lifetimes weren't so impossible to use.

            The compiler isn't doing the thinking if you have to change your code to make the compiler happy. The problem with Rust is too much thinking: you try something, the compiler complains, what is the issue here, can I try this, still complains, what about this, etc. There are specific categories of bugs that Rust is trying to fix that don't require the changes Rust demands in order to ensure correctness...and if you fall back to a reference counter, you can have more bugs.

jovial_cavalier 5 hours ago

Hey, check this out:

#include <iostream>

struct Price { double x; };
struct Quantity { int x; };

void sendOrder(const char *symbol, bool buy, Quantity quantity, Price price) {
    std::cout << symbol << " " << buy << " " << quantity.x << " " << price.x
              << std::endl;
}

int main(void) {
    sendOrder("GOOG", false, Quantity{100}, Price{1000.00}); // Correct
    sendOrder("GOOG", false, Price{1000.00}, Quantity{100}); // compiler error
}

If you're trying to get it to type check, you have to make a type first.

I don't appreciate these arguments, and view them as disingenuous.

codedokode a day ago

What about catching integer overflow? Free open-source languages still cannot do it, unlike their commercial competitors like Swift?

  • ultimaweapon 11 hours ago

    Rust is the only language I can easily control how integer overflow should behave. I can use `var1.wrapping_add(var2)` if I want the result to be wrapped or `var1.checked_add(var2)` if I don't want it to overflow.
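    A small sketch of the difference on `u8`:

    ```rust
    fn main() {
        let a: u8 = 250;
        // Wrapping: 250 + 10 = 260, which wraps modulo 256 to 4.
        assert_eq!(a.wrapping_add(10), 4);
        // Checked: the overflow is reported instead of silently wrapping.
        assert_eq!(a.checked_add(10), None);
        assert_eq!(a.checked_add(5), Some(255));
        println!("ok");
    }
    ```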

    • codedokode 9 hours ago

      The functions are so verbose and inconvenient that even Rust developers themselves do not use them. For example, in this code [1] they used a wrapping addition instead of "checked_add" because it is faster to write.

      For comparison, Swift uses "+" for checked addition and, as a result, the majority of developers use checked addition by default. In Rust, due to its poor design choices, most developers use wrapping addition even where checked addition should be used.

      [1] https://doc.rust-lang.org/src/alloc/vec/mod.rs.html#2010

      • genrilz 2 hours ago

        The reason '+ 1' is fine in the example you gave is that length is always less than or equal to capacity. If you follow 'grow_one' which was earlier in the function to grow the capacity by one if needed, you will find that it leads to the checked addition in [0], which returns an error that [1] catches and turns into a panic. So using '+1' prevents a redundant check in release mode while still adding the check in debug mode in case future code changes break the 'len <= capacity' invariant.

        Of course, if you don't trust the standard library, you can turn on overflow checks in release mode too. However, the standard library is well tested and I think most people would appreciate the speed from eliding redundant checks.

          [0]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#651
          [1]: https://doc.rust-lang.org/src/alloc/raw_vec.rs.html#567
      • ultimaweapon 8 hours ago

        Checked addition by default would have too much overhead and would hurt performance, which is unacceptable in Rust since it was designed as a systems language. Swift can use checked add by default since it was designed for application software.

        Your example code is not written that way because it is faster to write; it is because it is impossible for it to overflow on that line.

        • codedokode 38 minutes ago

          Why should checked addition have any overhead? You should just use checked addition instruction (on architectures that support it) instead of wrapping addition.

          Or just because on Intel CPUs it has overhead, we must forget about writing safer code?

  • kelnos a day ago

    Rust does have checked arithmetic operations (that return Result), but you have to explicitly opt in to them, of course, and they're not as ergonomic to use as regular arithmetic.

    • trealira 21 hours ago

      But, by default, normal arithmetic operations trap on overflow in debug mode, although they wrap with optimizations on.

    • codedokode 9 hours ago

      As a result, Rust developers themselves use wrapping addition where a checked addition should be used: https://doc.rust-lang.org/src/alloc/vec/mod.rs.html#2010

      • tialaramex 8 hours ago

        That's not a wrapping addition, that addition will never overflow so it has no overflow behaviour.

        This line could only overflow after we need to grow the container, which immediately means the type T isn't a ZST, since the Vec for ZSTs doesn't need storage and so never grows.

        Because it's not a ZST, the maximum capacity in Rust is never bigger than isize::MAX, which is an entire binary order of magnitude smaller than usize::MAX; as a result, len + 1 can't overflow the unsigned type, so this code is correct as written.

  • lytedev a day ago

    I'm not sure if this is what you mean, exactly, but Rust indeed catches this at compile time.

    https://play.rust-lang.org/?version=stable&mode=debug&editio... https://play.rust-lang.org/?version=stable&mode=debug&editio...

    • codedokode a day ago

      I meant panic if during any addition (including in runtime) an overflow occurs.

      • genrilz 21 hours ago

        If you obscure the implementation a bit, you can change GP's example to a runtime overflow [0]. Note that by default the checks will only occur when using the unoptimized development profile. If you want your optimized release build to also have checks, you can put 'overflow-checks = true' in the '[profile.release]' section of your cargo.toml file [1].

          [0]: https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=847dc401e16fdff14ecf3724a3b15a93
          [1]: https://doc.rust-lang.org/cargo/reference/profiles.html
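        Assuming a standard Cargo project, that opt-in is a two-line addition to Cargo.toml:

        ```toml
        # Keep overflow checks on even in optimized release builds.
        [profile.release]
        overflow-checks = true
        ```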
        • codedokode 9 hours ago

          This is bad because the program behaves differently depending on build flags. What's the point of having a "use unsafe addition" flag, and having it enabled by default?

          The Rust developers made a poor choice. They should have made a special function for unchecked addition and had the "+" operator always panic on overflow.

          • genrilz 4 hours ago

            The point I'm sure was to prevent the checks from incurring runtime overhead in production. Even in release mode, the overflow will only wrap rather than trigger undefined behavior, so this won't cause memory corruption unless you are writing unsafe code that ignores the possibility of overflow.

            The checks being on in the debug config means your tests and replications of bug reports will catch overflows if they occur. If you are working on some sensitive application where you can't afford logic bugs from overflows but can afford panics/crashes, you can just turn on checks in release mode.

            If you are working on a library which is meant to do something sensible on overflow, you can use the wide variety of member functions such as 'wrapping_add' or 'checked_add' to control what happens on overflow regardless of build configuration.
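            A quick sketch of those variants on `u8`:

            ```rust
            fn main() {
                let a: u8 = 200;
                assert_eq!(a.wrapping_add(100), 44);            // 300 mod 256
                assert_eq!(a.saturating_add(100), 255);         // clamps at u8::MAX
                assert_eq!(a.checked_add(100), None);           // overflow reported
                assert_eq!(a.overflowing_add(100), (44, true)); // value plus carry flag
                println!("ok");
            }
            ```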

            Finally, if your application can't afford to have logic bugs from overflows and also can't panic, you can use kani [0] to prove that overflow never happens.

            All in all, it seems to me like Rust supports a wide variety of use cases pretty nicely.

            [0]: https://github.com/model-checking/kani

      • trealira 21 hours ago

        You can set a flag for that: https://doc.rust-lang.org/rustc/codegen-options/index.html#o...

        By default, they're on during debug mode and off in release mode.

        • codedokode 9 hours ago

          The choice doesn't make sense because you want the program to always behave correctly and not only during development.

          • lytedev 3 hours ago

            Eh, maybe. There's a performance tradeoff here and maintainers opted for performance. I'm sure many folks would agree with you that it was the wrong choice, and I'm sure many folks would disagree with you that it was the wrong choice.

            There are also specific methods for doing overflow-checked arithmetic if you like.

            • codedokode 29 minutes ago

              Why should there be a performance tradeoff? Because Intel CPUs don't have an add-with-overflow-check instruction, should we make our languages less safe?

      • tialaramex 8 hours ago

        Why do you want a panic? Shift left. Overflow can be rejected at compile time for a price that you might be able to afford - generality.

        Just insist that the programmer prove that overflow can't occur, and reject programs where the programmer couldn't or wouldn't do this.

        • codedokode 26 minutes ago

          The programmer has other things to do. The computer or the CPU should do the checks.

  • z_open a day ago

    Assembly catches integer overflow. You just need to check the flag.

codr7 a day ago

[flagged]

Calliope1 7 hours ago

[flagged]

  • kccqzy 6 hours ago

    Undefined behavior can appear with or without templates. But other than that, yes, it is viscerally shocking to see on Compiler Explorer that your entire function has been compiled to a single instruction, ud2.

globalnode 10 hours ago

My admittedly uninformed impression of Rust is that it's a lot like Go (in spirit?), a language invented to shepherd novice programmers into not making mistakes with resource usage.

I imagine faceless shameless mega-corps with thousands of Rust/Go peons coding away on the latest soulless business apps. Designed to funnel the ignorant masses down corridors of dark pattern click bait and confusing UX.

Having exposed my biases, happy to be proven wrong. Why are game studios still using C++? Because that's the language game programmers know and feel comfortable with? Or some other reason?

Embedded is still C, games are C++, scientific and data are Python and R (I'm talking in general here). What is the niche for Rust?

  • simonask 10 hours ago

    Novice programmers will take longer to be productive in Rust compared to Go. Rust primarily improves the productivity of people who know what they are doing, because it gives them much better tools to manage complexity.

    Games are written in C++ because game engines and tooling have person-centuries of work poured into them. Reimplementing Unreal Engine in Rust would require another few person-centuries of work, which is an investment that doesn't really make sense. Economically, dealing with the shortcomings of C++ is much, much cheaper.

    But Rust is definitely encroaching in all of these areas. Embedded Rust is doing great, scientific Rust is getting there (check pola.rs). Rust is an obvious candidate for the next big game engine, and it is already quite viable for indie undertakings, though it is still early days.

    • amai 8 hours ago

      Though in the future people might simply ask an AI to convert a codebase from C++ to Rust.

  • gwd 10 hours ago

    > novice programmers

    I think Rust has too high a learning curve, and too many features, for novice programmers in general.

    > Embedded is still C, games are C++, scientific and data are Python and R (I'm talking in general here). What is the niche for Rust?

    Rust has already made huge inroads in CLIs and TUIs, as far as I can tell. Embedded is a slow-moving beast by design, but it seems to me (as someone in an adjacent area) that it could be a big win there, particularly in places that need safety certification.

    All the stories of people using Rust for game development are about people who tried it and found that it doesn't fit: it makes experimentation and exploration slow enough that the reduction in minor bugs in game logic isn't really worth it.

  • Havoc 8 hours ago

    Go is seeing more traction in the web space - API backends and other stuff that needs lots of concurrency. It's seen as easier to learn than Rust, but without quite as fine-grained low-level control.

    Rust is a bit more systems-focused for low-level stuff. See its inclusion in the Linux kernel. It's also seeing some traction in the WASM space, given that it isn't garbage-collected.

    They’re both quite versatile though so above are pretty gnarly generalisations.

    Zig is in a similar space as these

  • chickenbuckcar 10 hours ago

    Economic inertia alone can already be enough.

    NumPy uses C/C++ because BLAS uses C/C++. Torch originally used Lua, then switched to Python because of its popularity.

  • notimetorelax 10 hours ago

    Those mega corps that you talk about use C++ too. It’s just a false dichotomy argument you’re making.

    • globalnode 4 hours ago

      True enough, I was going down a path there and got a little excited :S

  • imtringued 8 hours ago

    The quality of AMD's software stack speaks for itself. It's all C++ and the quality is exactly as poor as you'd expect.