I wonder if their test cases are available.
Fil-C is specifically engineered to catch everything, so it would be interesting to check it against their tests.
Wait, hold up: the same authors did MESH: https://arxiv.org/pdf/2108.08683
So why isn't MESH part of the evaluation? And why isn't it mentioned even once in the paper?
Weird indeed.
But I also miss Valgrind, which is precise, just very slow. I wonder if Valgrind detects intra-object OOBAs.
No, it's not that precise.
Valgrind is based on tagging whether a memory location is live or not. So, it would not just fail the intra-object OOB test, it would also fail the "out of bounds but in bounds" case: https://github.com/pizlonator/llvm-project-deluge/blob/delug...
Valgrind's Memcheck is included. It does not detect intra-object OOBAs.
Ah thanks, missed that
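For anyone curious what the intra-object case above looks like, here is a tiny illustrative example (my own sketch, not from the paper's test suite): the write overruns one struct field but stays inside the allocation, so allocation-granularity tools like Memcheck and default ASan see nothing wrong.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch only: an intra-object overflow. The write runs past
     * 'name' but stays inside 'struct record', so no redzone or validity bit
     * is touched; the bug just corrupts the neighbouring field. */
    struct record {
        char name[8];
        int  is_admin;
    };

    int main(void) {
        struct record r = { "", 0 };
        memcpy(r.name, "AAAAAAAAAAA", 12);   /* 12 bytes into an 8-byte field */
        printf("is_admin = %d\n", r.is_admin);
        return 0;
    }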
I wonder how true the assertion "This performance is partly achieved by sacrificing memory safety" is today. I suspect a sufficiently advanced compiler can remove bounds checks where they are provably unnecessary, and a sufficiently advanced CPU can run the remaining checks in parallel with the array accesses. But it'd be interesting to know whether there's been any research on that.
(A sufficiently advanced programming language can avoid the entire issue by letting you write loops as map, fold, etc., but we're talking about C here.)
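To make that first point concrete, here is a minimal C sketch (the checked_get helper is made up for illustration) of a bounds check a compiler can prove redundant and remove after inlining, since the loop condition already guarantees the index is in range:

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical checked accessor: aborts on an out-of-bounds index. */
    static inline int checked_get(const int *a, size_t len, size_t i) {
        if (i >= len)                      /* the bounds check in question */
            abort();
        return a[i];
    }

    int sum(const int *a, size_t len) {
        int s = 0;
        for (size_t i = 0; i < len; i++)   /* i < len holds on every iteration, */
            s += checked_get(a, len, i);   /* so the inlined check is provably dead */
        return s;
    }

The interesting cases are of course the less regular ones, where the proof isn't this easy.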
> A sufficiently advanced compiler
Much like how Intel Itanium would've taken over the world with the VLIW revolution if only we had one of these 'sufficiently advanced compilers'. It's a very vague statement that doesn't really mesh with reality. Decades of research in compilers have yet to yield one.
I think you're right, and it's up to the programming language to provide optimization-friendly primitives that fit into the mold of what we can prove to be in bounds.
Speculatively executing those bounds checks will never be free, because they take up space in the execution pipeline that could've been used for other instructions.
It would have, had AMD not come up with the AMD64 architecture.
With Intel considering x86 done, the WinTel world would have had no other alternative.
VLIW isn't bottlenecked on sufficiently advanced compilers. That's actually nonsense. VLIW is non-viable because it exposes micro-architectural aspects of the CPU, which means every CPU generation needs a new version of the ISA to be taken advantage of, and all applications have to be recompiled for the specific CPU you have.
In Arm and x86 land you get CPUs that run even old code 10% faster with every CPU generation. Meanwhile, with AMD's NPUs, say, XDNA1 land is not the same as XDNA2 land: you have to rewrite your algorithms if you want things to get faster. The promised VLIW benefits do work as intended; you can run loads, stores and vector operations all in the same cycle without any issue.
Then there is the crazy world of TTAs (transport triggered architectures), which take the VLIW stuff and crank it up to extreme levels. Instead of having an ISA, you directly control the buses connecting the function units. If you have multiple buses, then you can perform multiple transfers in parallel. You can build a custom set of function units specifically for your application and the compiler will automatically turn your C code into transfer instructions for any design you can come up with.
Now the first idea you get is obviously to just crank up the number of buses from 1 to 2 to 4 to 8, but then you notice that the number of cycles to run your algorithm doesn't go down as quickly as you'd hoped. There are a number of reasons for this, but if I had to choose a reason that favors dynamic scheduling, it would be that most sequential programming languages, especially C, don't expose enough parallelism to the compiler to be able to take advantage of it at compile time. If you could prove to the compiler that two functions f and g do not mutate the same data (local mutable state is allowed), then the TTA compiler could produce a mixed instruction stream that blends both functions to be executed in parallel rather than sequentially. This is similar to SMT, with the exception that the overhead is zero and that you are allowed to run nano-threads that run for a few nanoseconds.
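For the "prove f and g don't touch the same data" part, C's closest existing tool is restrict; a rough sketch of the kind of guarantee meant (function names made up):

    #include <stddef.h>

    /* 'restrict' promises the compiler that a and b never alias. That is the
     * sort of guarantee that would let a static (VLIW/TTA) scheduler blend the
     * bodies of f and g into one interleaved instruction stream. */
    static void f(float *restrict a, size_t n) {
        for (size_t i = 0; i < n; i++)
            a[i] *= 2.0f;
    }

    static void g(float *restrict b, size_t n) {
        for (size_t i = 0; i < n; i++)
            b[i] += 1.0f;
    }

    void run_both(float *restrict a, float *restrict b, size_t n) {
        f(a, n);   /* no shared mutable state with g(b, n), */
        g(b, n);   /* so in principle the two could be scheduled in parallel */
    }

In practice restrict is a promise the programmer makes rather than something the compiler proves, which is exactly the gap being pointed at here.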
> I suspect a sufficiently advanced compiler can remove bounds checks where they are provably unnecessary,
That’s true by definition, isn’t it?
> and a sufficiently advanced CPU can run the remaining checks in parallel with the array accesses.
But it still would slow down the program, as the CPU would have to commit resources to that bounds checking that it then cannot use for doing other things.
There are a few things that cannot be done as fast in Rust, but those are rarely needed in the real world. Most of the things Rust cannot do are around sharing memory between threads with locks: humans have a very hard time getting code that does this to work correctly and usually end up with race conditions because they analysed the problem wrong.
That's actually easy to do if you care about concurrency safety from the start when designing a language and its stdlib: just forbid blocking IO, and no locks are needed.
Download possible without paywall from https://publica.fraunhofer.de/entities/publication/9d7783f8-...
Unfortunately that gives a 500 error when attempting to download the PDF (maybe the server is overloaded now?)
https://publica-rest.fraunhofer.de/server/api/core/bitstream...
Conclusion is scathing:
> The findings highlight significant variations in the theoretical detection capabilities of these techniques and reveal that, in practice, the implementations of most available sanitizers fall short of their conceptual potential. Furthermore, the evaluation demonstrates the complexities and diversity of memory bugs in C/C++, as well as the challenges associated with detecting them. For instance, our results show that SoftBound+CETS, a conceptually complete sanitizer, misses nearly a quarter of spatial memory bugs in its original implementation, while ASan, likely the most widely used memory sanitizer, cannot detect 50% of use-after-* bugs and any non-linear overflows and underflows. Ultimately, our evaluation concludes that no sanitizer currently provides complete temporal or spatial memory safety.
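To illustrate the "non-linear overflows" point (my sketch, not a case from the paper): ASan detects an access when it touches a poisoned redzone or freed memory, so an out-of-bounds index that jumps straight into another live allocation touches only addressable memory and goes unreported.

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative sketch of a non-linear overflow that ASan misses: the index
     * leaps over a's redzone and lands inside another live allocation, so every
     * shadow byte touched looks addressable. (Computing the offset by
     * subtracting pointers to unrelated objects is itself UB; it is only done
     * here to make the jump deterministic for the demo.) */
    int main(void) {
        char *a = malloc(16);
        char *b = malloc(16);
        if (!a || !b) return 1;

        ptrdiff_t jump = b - a;   /* distance from a to the unrelated buffer b */
        a[jump] = 'X';            /* out of bounds for a, but inside b: no redzone hit */

        printf("b[0] = %c\n", b[0]);
        free(a);
        free(b);
        return 0;
    }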
If SoftBound+CETS has the best results, why does Fraunhofer not sponsor the creation of a Debian package?
It is unmaintained:
https://github.com/Fraunhofer-AISEC/softboundcets
This commit is the actual LLVM patch for SoftBound+CETS: https://github.com/Fraunhofer-AISEC/softboundcets-llvm-proje...
x86, x64 only. LTO only.
Seems like it should have "C/C++" in the title. Or maybe that's understood?
Sanitizers aren't just for C/C++. Rust, Go, and D all have at least ASan support.
Rust supports all the same sanitizers as C/C++, last I checked (at least UBSAN, TSAN, MSAN and ASAN, which are the only ones I've ever seen used in practice on C/C++).
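For reference, turning it on looks essentially the same across those toolchains (clang/gcc take -fsanitize=address; Rust nightly exposes the same runtime via -Zsanitizer=address). A minimal example of a bug ASan does reliably catch, for contrast with the misses discussed above:

    /* Build with: clang -g -fsanitize=address uaf.c  (or gcc with the same flag).
     * ASan terminates with a heap-use-after-free report showing the allocation
     * and free stacks. The file name is just for the example. */
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        if (!p) return 1;
        *p = 42;
        free(p);
        return *p;   /* use after free: ASan aborts with a report here */
    }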
Weird that Infer [1] was not included in the evaluation. It supports C/C++ and its underlying reasoning framework (Separation Logic [2]) is exactly geared towards checking memory safety.
[1] https://fbinfer.com/
[2] https://en.wikipedia.org/wiki/Separation_logic
Sanitizers are runtime tools, not static analysis tools.
Static analyzers are also virtually never sound, as sound tools produce an outrageous number of false positives, especially in languages that so easily permit nonlocal mutation.
Looks interesting, but unfortunately the research paper is behind a paywall.
Here is the author's version: https://publica-rest.fraunhofer.de/server/api/core/bitstream...
Really sorry about that gaffe! I had access, and the content was too interesting not to share.
As @osivertsson has kindly pointed out, could you please access it there and share your insights here? Thanks!