> [M]any developers use C++ as if it was still the previous millennium. [...] C++ now offers modules that deliver proper modularity.
C++ may offer modules (in fact, it has offered them since 2020). However, when it comes to their implementation in mainstream C++ compilers, things are only now becoming sort of usable, and modules remain a challenge in more complex projects due to compiler bugs in corner cases.
I think we need to be honest and upfront about this. I've talked to quite a few people who have tried to use modules but were unpleasantly surprised by how rough the experience was.
TinkersW
Ya that is rather disingenuous, modules aren't ready, and likely won't be for another 5 years.
Also they are difficult to switch to, so I would expect very few established projects to bother.
vr46
Last weekend, I took an old cross-platform app written by somebody else between 1994-2006 in C++ and faffed around with it until it compiled and ran on my modern Mac running 14.x. I upped the CMAKE_CXX_STANDARD to 20, used Clang, and all was good. Actually, the biggest challenge was the shoddy code in the first place, which had nothing to do with its age. After I had it running, Sonar gave me 7,763 issues to fix.
The moral of the story? Backwards compatibility means never leaving your baggage behind.
coffeeaddict1
The C++ Core Guidelines have existed for nearly 10 years now. Despite this, none of the three major compilers has an implementation that can enforce them. Profiles, which Bjarne et al have had years to work on, will not provide memory safety[0].
The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes. However, it's already too late. Even if somehow they manage to make changes to the language that enforce memory safety, it will take a decade before the efforts propagate at the compiler level (a case in point is modules being standardised in 2020 but still not ready for use in production in any of the three major compilers).
> The C++ committee, including Bjarne Stroustrup, needs to accept that the language cannot be improved without breaking changes.
The example in the article starts with "Wow, we have unordered maps now!"
Just adding things modern languages have is nice, but doesn't fix the big problems.
The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
IshKebab
You absolutely can throw things out, and they have! Checked exceptions, `auto`'s old meaning, and breaking changes to operator== are the ones I know of. There were also some minor breaking changes to comparison operators in C++20.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
imtringued
"Just get good" implies development processes that catch memory safety bugs. What they are really saying between the lines is that the minimum cost of C++ development is really high.
Any C++ code without at least unit tests with 100% test coverage, run with the UB sanitizer etc., must be considered inherently defective, and the developer should be flogged for his absurd levels of incompetence.
Then there is also the need for UB aware formal verification. You must define predicates/conditions under which your code is safe and all code paths that call this code must verifiably satisfy the predicates for all calls.
This means you're down to the statically verifiable subset of C++, which includes C++ that performs asserts at runtime, in case the condition cannot be verified at compile time.
How many C++ developers are trained in formal verification? As far as I am aware, they don't exist.
Any C++ developers reading this who haven't at least written unit tests with UB sanitizer for all of their production code should be ashamed of themselves. If this sounds harsh, remember that this is merely the logical conclusion of "just get good".
williamcotton
Add ASan and friends as well as a sanitizer-less build for Valgrind!
fooker
>Despite this, not a single implementation in any of the three major compilers exists that can enforce them
Because no one wants it enough to implement it.
richard_todd
I feel like a few decades ago, standards intended to standardize best practices and popular features from compilers in the field. Dreaming up standards that nobody has implemented, like what seems to happen these days, just seems crazy to me.
immibis
It's bottom-up vs top-down design.
lifthrasiir
Or it's better to have other languages besides C++ for that.
htfy96
While I sort of agree on the complaint, personally I think the best spot of C++ in this ecosystem is still on great backward-compatibility and marginal safety improvements.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ has allowed us to maintain it for 15 years with partial refactors and minimal upgrade pain.
IshKebab
I think at least Go and Java have as good backwards compatibility as C++.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
thbb123
I don't know about Go, but Java is pathetic. I have 30-year-old C++ programs that work just fine.
However, an application that I had written to be backward compatible with java 1.4, 15 years ago, cannot be compiled today. And I had to make major changes to have it run on anything past java 8, ~10 years ago, I believe.
simoncion
Compared to C++ (or even Erlang), Go is pretty bad.
$DAYJOB got burned badly twice on breaking Go behavioral changes delivered in non-major versions, so management created a group to carefully review Go releases and approve them for use.
All too often, Google's justification for breaking things is "Well, we checked the code in Google, and publicly available on Github, and this change wouldn't affect TOO many people, so we're doing it because it's convenient for us.".
bigstrat2003
Java has had shit backwards compatibility for as long as I have had to deal with it. Maybe it's better now, but I have not forgotten the days of "you have to use exactly Java 1.4.15 or this app won't work"... with four different apps that each need their own different version of the JRE or they break. The only thing that finally made Java apps tolerable to support was the rise of app virtualization solutions. Before that, it was a nightmare and Java was justly known as "the devil's software" to everyone who had to support it.
ivan_gammel
That was probably 1.4.2_15, because 1.4.15 did not exist. What you describe wasn’t a Java source or binary compatibility problem, it was a shipping problem and it did exist in C++ world too (and still exists - sharing runtime dependencies is hard). I remember those days too. Java 5 was released 20 years ago, so you describe some really ancient stuff.
Today we don’t have those limits on HDD space and can simply ship an embedded copy of JRE with the desktop app. In server environments I doubt anyone is reusing JRE between apps at all.
simoncion
While "Well, just bundle in a copy of the whole-ass JRE" makes packaging Java software easier, it's still true that Java's backwards-compatibility is often really bad.
> ...sharing runtime dependencies [in C or C++] is hard...
Is it? The "foo.so foo.1.so foo.1.2.3.so" mechanism works really well, for libraries whose devs manage not to ship backwards-incompatible changes in patch versions, or ABI-breaking changes in minor versions.
ivan_gammel
> Java's backwards-compatibility is often really bad.
“Often” is a huge exaggeration. I always hear about it, but never encountered it myself in 25 years of commercial Java development. It almost feels like some people are doing weird stuff and then blame the technology.
> Is it? The "foo.so foo.1.so foo.1.2.3.so"
Is it “sharing” or having every version of runtime used by at least one app?
simoncion
> I always hear about it, but never encountered it myself in 25 years of commercial Java development.
Lucky you, I guess?
> Is it “sharing” or having every version of runtime used by at least one app?
I'm not sure what you're asking here? As I'm sure you're aware, software that links against dependent libraries can choose to not care which version it links against, or link against a major, minor, or patch version, depending on how much it does care, and how careful the maintainers of the dependent software are.
So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
menaerus
The language is improving (?), although IME that's somewhat beside the point: I'm finding the new features less useful for everyday code. I'm perfectly happy with C++17/20 for 99% of the code I write. And keeping backwards compatibility for most real-world software is a feature, not a bug, ok? Breaking it would actually drive me away from the language.
pjmlp
Clion, clang tidy and Visual C++ analysers do have partial support for the Core Guidelines, and they can be enforced.
Granted, it is only those that can be machine verified.
Office is using C++20 modules in production, Vulkan also has a modules version.
skywal_l
I hoped Sean would open source Circle. It seemed promising, but it's been years and I don't see any tangible progress. Maybe I'm not looking hard enough?
alexeiz
He's looking to sell Circle. That must be the reason he's not open sourcing it.
fooker
Huh, I guess that was the motivation all along.
janice1999
I think Carbon is more promising to be honest. They are aiming for something production-ready in 2027.
mempko
What are you talking about, the language gets better with each release. Using C++ today is a hell of a lot better than even 10 years ago. It seems like people hold "memory safety" as the most important thing a language can have. I completely disagree. It turns out you can build awesome and useful software without memory safety. And it's not clear if memory safety is the largest source of problems building software today.
In my opinion, having good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
WalterBright
The top memory safety bugs in shipped code for C and C++ are out of bounds array indexing.
saagarjha
Are you sure? I generally see more use-after-free and other lifetime issues.
nindalf
> And it's not clear if memory safety is the largest source of problems building software today.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you, do you still think it is possible to write secure software in C++, but just trying a little harder. Through “good design and architecture”, as your previous comment implied.
logicchains
Two of the biggest use cases for modern C++ are video games and HFT, where memory safety is of absolutely minimal importance (unless you're writing some shitty DRM/anticheat). I work in HFT using modern C++ and bugs related to memory safety are vanishingly rare compared to logic and performance bugs.
imtringued
The importance of memory safety depends on whether your code must accept untrusted inputs or not.
Basically 99% of networked applications that don't talk to a trusted server and all OS level libraries fall under that category.
Your HFT code is most likely not connecting to an exchange that is interested in exploiting your trading code so the exploit surface is quite small. The only potential exploit involves other HFT algorithms trying to craft the order books into a malicious untrusted input to exploit your software.
Meanwhile if you are Google and write an android library, essentially all apps from the play store are out to get you.
Basically C++ code is like an infant that needs to be protected from strangers.
otabdeveloper4
A million times more systems were infiltrated due to PHP SQL injection bugs than were infiltrated via Chromium use-after-free bugs.
Let's keep some sanity and perspective here, please. C++ has many long-standing problems, but banging on the "security" drum will only drive people away from alternative languages. (Everyone knows that "security" is just a fig leaf they use to strong-arm you into doing stuff you hate.)
jpc0
> Around 70% of our high severity security bugs are memory unsafety problems
> ~70% of the vulnerabilities Microsoft assigns a CVE
> 76% of vulnerabilities
What is the difference between the first two (emphasis added) and what you said? Just as a thought experiment...
If I measure a single factor to the exclusion of all others, I can also find whatever I want in any set of data. Now your point may be valid, but it is not what they published, and without the full dataset we cannot validate your claim. I can, however, validate that what you claim is not what they claim.
To answer your question in the final paragraph: yes it is, but it requires the same cultural shift as it would take to write the same code in Rust or Swift or Go or whatever other memory safe language you want to pick.
If rust was in fact viable for such a large project, how's the servo project going? That still the resounding success it was expected to be? Rust in the kernel? That going well?
The jury is still out on whether rust will be mass adopted and is able to usurp C/C++ in the domains where C/C++ dominate. It may get there, but I would much much rather start a new project using C++20 than in rust and I would still be able to make it memory safe and yes it is a "skill issue", but purely because of legacy C++ being taught and accepted in new code in a codebase.
Rules for writing memory safe C++ have not just been around for decades but have been statically checkable for over a decade; for a large project, though, there are too many errors to universally apply them to existing code without years of work. However, if you submit new code using old practices, you should be held financially and legally responsible just like an actual engineer in another field would be.
It's because we are lax about standards that it's even an issue.
As a note, if you see an Arc<Mutex<>> in rust outside of some very specific Library code whoever wrote that code probably wouldn't be able to write the same code in a memory and thread safe manner, also that is an architectural issue.
Arc and Mutex are synchronisation primitives that are meant to be used to build data structures, not in "userspace" code. It's a strong code smell that is generally accepted in Rust. Arc probably shouldn't even need to exist at all, because it is a clear indication nobody thought about the ownership semantics of the data in question; maybe for some data structures it is required, but you should very likely not be typing it into general code.
If Arc<Mutex<>> is littered throughout your rust codebase you probably should have written that code in C#/Java/Go/pick your poison...
tsimionescu
This whole concept that code should be architected as "libraries" and "userspace" is such a C++ism.
It's a really weird concept that probably comes only from having this extremely complex language where even the designers expect some parts of it are too weird for "normal programmers". But then they imagine some advanced class of programmer, the "library programmers", who can deal with such complexity.
The more modern way of designing software is to stick to the YAGNI principle: design your code to be simple and straightforward, and only extract out datastructures into separate libraries if and when they prove to be needed.
Not to mention, the position that shared ownership should just not exist at all is self-evidently absurd. The lifetime of an object can very well be a dynamic property of your program, and a concurrent one. A language that lacks std::shared_ptr / Arc is simply not a complete language, there will be algorithms that you just can't express.
bluGill
Profiles will not provide perfect memory safety, but they go a long way to making things better. I have 10 million lines of C++. A breaking change (doesn't matter if you call it new C++ or Rust) would cost over a billion dollars - that is not happening. Which is to say I cannot use your perfect solution, I have to deal with what I have today and if profiles can make my code better without costing a full rewrite then I want them.
tialaramex
Changes which re-define the language to have less UB will help you if you want safety/ correctness and are willing to do some work to bring that code to the newer language. An example would be the initialization rules in (draft) C++ 26. Historically C++ was OK with you just forgetting to initialize a primitive before using it, that's Undefined Behaviour in the language so... if that happens too bad all bets are off. In C++ 26 that will be Erroneous Behaviour and there's some value in the variable, it's not always guaranteed to be valid (which can be a problem for say, booleans or pointers) but just looking at the value is no longer UB and if you forgot to initialize say an int, or a char, that's fine since any possible bit sequence is valid, what you did was an error, but it's not necessarily fatal.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
saagarjha
This seems bad actually.
bluGill
The first part is why I'm excited for future C++ - they are making things better.
The reason I like profiles is they are not all or nothing. I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor. Or at least so I hope; it remains to be seen if that is how they work out. I've been trying to figure out how to make Rust fit in, but std::vector<SomeVirtualInterface> is a real pain to wrap into Rust and so far I haven't managed to get anything done there.
The $1 billion is realistic - this project was a rewrite of a previous product that became unmaintainable, and inflation adjusted the cost was $1 billion. You can maybe adjust that down a little if we are more productive, but not much. You can adjust it down a lot if you can come up with a way to keep our existing C++, extend it with new features, and fix the old code only where it really is a problem. The code we wrote in C++98 (because that was all we had in 2010) still compiles with the latest C++23 compiler, and since there are no known bugs it isn't worth updating that code to the latest standards, even though it would be a lot easier to maintain (which we never do) if we did.
zozbot234
> I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor.
It's also expected that you'll be able to do this with Safe C++. Of course the interop with older C++ code will then still involve unsafety. But incremental improvement should be possible.
wakawaka28
Enforcing style guidelines seems like an issue that should be tackled by non-compiler tools. It is hard enough to make a compiler without rolling in a ton of subjective standards (yes, the core guidelines are subjective!). There are lots of other tools that have partial support for detecting and even fixing code according to various guidelines.
zozbot234
> Profiles, which Bjarne et al have had years to work on, will not provide memory safety
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
bluGill
I have seen 3 different safe C++ proposals (most are not papers yet, but they are serious efforts to show what safe C++ could look like). However, there is a tradeoff here. The full borrow checker in C++ approach is incompatible with all current C++, and so adopting it is about as difficult as rewriting all your code in some other language. The other proposals are not as safe, but have different levels of "you can use this with your existing code". None are ready to get added to C++, but they all provide something better, and I'm hopeful that something gets into C++ (though probably not before C++32).
Animats
I've seen maybe twice that many. Did one myself once. It's possible to make forward progress, but to get any real safety you have to prohibit some things.
Maxatar
>the full borrow checker in C++ approach is incompatible with all current C++
Circle is an implementation of C++ that includes a borrow checker and is 100% backwards compatible with C++:
That is one of the three. It isn't really backward compatible because to take advantage of it you need to write/change a lot of code.
A nice attempt, but I have millions of lines of C++ that isn't going away.
Maxatar
Circle is 100% backward compatible with C++. That is a technical property of the language.
You are welcome to take your millions of lines of C++ code and it will compile without change using Circle as any valid C++ code is valid Circle code, which is the technical definition of being backward compatible.
You don't need to change existing code to use Circle or the new features Circle introduces, you can just write new classes and functions with those features and your existing code will continue to compile as-is.
bluGill
You don't get the advantages of circle if you are constantly dealing with code that is returning raw pointers you have to deallocate. Or APIs where you need to pass in an index which the called function then uses vectors operator []. Safe C++ (from the same guy from what I can tell) only is safe if you used std2 containers, and otherwise rewrite your C++ entirely. Sure the world would be better if we did, but that would cost billions of dollars so it isn't happening. What we need is a way to introduce some safety into code that already exists without spending billions and a lot of time to rewrite it.
Maxatar
By your standard C++11 isn't backward compatible with C++98.
zozbot234
"C++ isn't really backward compatible with C because to take advantage of its classes and templates you need to change so much code..."
bluGill
That is not backward compatibility. In the real world people mix C and C++ all the time without a lot of complex rewriting. Most of the time they don't even write a wrapper around the C, or if they do it is an easy/thin wrapper (generally you take a function returning a pointer you have to delete and make it a smart pointer), not a deep rewrite of the C code.
All my efforts to do the above so I can mix C++ and Rust have quickly failed when I realized that my wrappers would not be thin, and thus they would cost large performance penalties.
zozbot234
The cxx crate offers partial interop between C++ and Rust - for example, it wraps the C++ unique_ptr (the "take a pointer you have to delete and make it a smart pointer" abstraction) so Rust can make use of it appropriately. It's nowhere near complete, but they do welcome patches and issue reports. Anyway, this isn't even all that relevant to Circle and Safe C++, that can potentially share more with C++ than Rust does, such as avoiding a separate heap abstraction so that Safe C++ might be able to free objects that were allocated in legacy C++ code, etc.
cylemons
But these are not safety features
DidYaWipe
Let us know when C++ gets rid of the mess that is header files.
Until then... YAWN.
osmsucks
The article does mention modules.
xigoi
But it doesn’t mention that you can’t actually use modules without passing a bunch of random compiler flags and hoping that they work.
mindcrime
I was an extreme C++ bigot back in the late 90's, early 2000's. My license plate back then was CPPHACKR[1]. But industry trends and other things took my career in the direction of favoring Java, and I've spent most of the last 20+ years thinking of myself as mainly a "Java guy". But I keep buying new C++ books and I always install the C++ tooling on any new box I build. I tell myself that "one day" I'm going to invest the time to bone up on all the new goodies in C++ since I last touched it, and have another go.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
ttul
The programmers on the sound team at the video game company I worked for as an intern in 1998 would always stash a couple of extra void pointers in their classes just in case they needed to add something in later. Programmers should never lose sight of pragmatism. Seeking perfection doesn’t help you ship on time. And often, time to completion matters far more than robustness.
ninkendo
Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”
what is the job market like now for C++ programmers? I'm looking for a job.
nialv7
How does enforcing profiles per-translation unit make any sense? Some of these guarantees can only be enforced if assumptions are made about data/references coming from other translation units.
Maxatar
This is the one major stumbling block for profiles right now that people are trying to fix.
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
There is currently no resolution to this issue.
juliangmp
I guess modules are supposed to be the magic solution for that, Bjarne has shown them in this article, even using import std.
It's a bit optimistic because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
jpc0
I've been playing with building out an OpenGL app using C++23 on bleeding edge CMake and Clang and it really is a breath of fresh air... I do run into bugs in both but it is really nice. Most of the bugs are related to import std though which is expected... Oh and clangd(LSP) still having very spotty support for modules.
The tooling is way better than it was 6 months ago though, as in I can actually compile code in a non Visual Studio project using import std.
I will be extremely happy the day I no longer need to see a preprocessor directive outside of library code.
humanrebar
Modules alone do not guarantee one definition per entity per linked program. On the contrary, build systems are needing to add design complexity to support, for instance, multiple built module interfaces for the std module because different translation units are consuming the std module with different settings -- different standards versions for instance.
hoc [3 hidden]5 mins ago
I definitely wouldn't have used "<<" in an "ad" for C++ :)
(I must say that I was happy to see/read that article, though)
imron [3 hidden]5 mins ago
I want to love C++.
Over my career I’ve written hundreds of thousands of lines of it.
But keeping up with it is time consuming and more and more I find myself reaching for other languages.
okanat [3 hidden]5 mins ago
Same. Luckily my team switched to Rust almost 100%, so I don't need to learn the godforsaken coroutine syntax, the pitfalls laid for you when you use char wrong with it, or in which subset of calls std::ranges does something stupid and causes a horrible performance regression.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee kept up that behavior. Moreover, they have this pattern that, given the options, they always choose the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly-bracket initialization, and auto is like choosing between stepping on Legos and putting your arm into a bag full of spiders.
The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, and C++11 was alright. Anything after C++14 is a Minesweeper game with increasing difficulty.
araes [3 hidden]5 mins ago
> Bjarne has been criticized for accepting too many (questionable) things
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
xyproto [3 hidden]5 mins ago
I went from being curious about C++, to hating C++, to wanting to love it, to being fine with it, to using it for work for 5+ years, to abandoning it and finally to want to use it for game development, maybe. It's the circle of life.
DrBazza [3 hidden]5 mins ago
The masochist in me keeps coming back to c++. My analogy of it to other languages is that it’s like painting a house with a fine brush versus painting the Mona Lisa with a roller. Right tool for the job I suppose.
01100011 [3 hidden]5 mins ago
It's my job and career(well, C and C++) but I often try to avoid C++. Whenever I use it(usually writing tests) I go through this cycle of re-learning some cool tricks, trying to apply them, realizing they won't do what I want or the syntax to do it is awkward and more work than the dumb way, and I end up hating C++ and feeling burned yet again.
codr7 [3 hidden]5 mins ago
I've been writing C++ since 1996-ish.
Less and less, for sure.
Nothing the past few years.
They killed it.
mr_00ff00 [3 hidden]5 mins ago
If you only read HN, you would think C++ died years ago.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer amount of experts in it. (For better or for worse)
codr7 [3 hidden]5 mins ago
The fact that we don't have a viable alternative yet doesn't exactly mean that the language is in good shape.
chikere232 [3 hidden]5 mins ago
It just means it's in the best shape of any of the languages in its domain
bboygravity [3 hidden]5 mins ago
Can confirm pretty much the entire embedded systems world uses either C or C++.
That's probably most devices in the world.
markus_zhang [3 hidden]5 mins ago
I have listened to a few podcasts by HFT people. It sounds like you try to maximize performance and use a lot of C++ skills. Very interesting to listen to, but I wonder how anyone picks up those skills?
3vidence [3 hidden]5 mins ago
Can also confirm c++ is alive and well at FAANG. Might still be the most popular language for most new projects.
bobnamob [3 hidden]5 mins ago
* for some values of FAANG
C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger
musicale [3 hidden]5 mins ago
Took me a moment to realize that "killed it" was being used in the negative sense.
bogeholm [3 hidden]5 mins ago
Almost a haiku :)
erwincoumans [3 hidden]5 mins ago
Same here.
>>contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
midnightclubbed [3 hidden]5 mins ago
I have used auto liberally for 8+ years; maybe I'm accustomed to reading code containing it, but I really can't think of it being a problem. I feel like auto increases readability; the only thing I dislike is that they didn't make it a reference by default.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
William_BB [3 hidden]5 mins ago
E.g. `std::ranges::for_each`, where the lambda captures a bunch of variables by reference. I would hope the compiler optimizes this to be the same as a regular loop, but can I be certain it matches a good old for loop?
jpc0 [3 hidden]5 mins ago
To be fair std::ranges seems like the biggest mistake the committee allowed into the language recently.
Effectively, other than rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about compared to the older iterator-based algorithms.
for_each takes a lambda and calls it for each element in the iterator pair; if the compiler can optimise it, it becomes a loop, and if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
Would it really be much harder to take that lambda, move it to a templated function that takes an iterator and call it the old fashioned way?
jandrewrogers [3 hidden]5 mins ago
Yeah, the std::ranges implementation is a bit of a mess. The inability to start clean without regard for backward compatibility reasons limits what is possible. I think most people see how you could implement comparable functionality with nicer properties from a clean sheet of paper. It is the curse of being an old language.
TinkersW [3 hidden]5 mins ago
Just ban ranges lib, it is hot garbage anyway. The compilers are able to optimize lambdas fairly well nowadays(when inlined), I wouldn't be that concerned.
throwaway2037 [3 hidden]5 mins ago
Did you try the two version in Godbolt?
musicale [3 hidden]5 mins ago
I just wish they hadn't repurposed the old "auto" keyword from C and had used a new keyword like "var" or "let".
#define var auto
#define let auto
maleldil [3 hidden]5 mins ago
Given how important backwards compatibility is for C++, it's either take over a basically unused keyword or come up with something so weird that would never appear in existing code.
Java solved this by making var a reserved type, not a keyword, but I don't know if that's feasible for C++.
galkk [3 hidden]5 mins ago
If we're going that route, how about
#define var auto
#define let const auto
?
astrobe_ [3 hidden]5 mins ago
You don't have to "keep up with it", if by this you mean what I think you mean.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
01100011 [3 hidden]5 mins ago
Wrong. Most programmers spend tremendous amounts of time reading and maintaining someone else's code. You absolutely have to keep up with it.
midnightclubbed [3 hidden]5 mins ago
You don't 'have' to keep up with the language and I don't know that many people try to keep up with every single new feature - but it is worse to be one of those programmers for whom C++ stopped at C++03 and fight any feature introduced since then (the same people generally have strong opinions about templates too).
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
imron [3 hidden]5 mins ago
I’ve been using c++ since the late 90’s but am not stuck there.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
layer8 [3 hidden]5 mins ago
It’s okay to be a few years behind the standard, the compilers tend to be as well.
imron [3 hidden]5 mins ago
Yeah, the issue is more that the perceived complexity means I’m less interested in investing time to catch it all back up
TinkersW [3 hidden]5 mins ago
If you already used C++20 you aren't meaningfully behind, very little of interest has been introduced since then, and much of it isn't usable yet because of implementation issues.
01100011 [3 hidden]5 mins ago
I mean, right from Bjarne's mouth:
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025 people.
markus_zhang [3 hidden]5 mins ago
I think it's good enough or side projects. More powerful than C so I don't need to hand roll strings and some algos but I tend to keep a minimum number of features because I'm such an amateur.
gosub100 [3 hidden]5 mins ago
Since C++14 or C++17 I feel no need to keep up with it. That's cool if they add a bunch more stuff, but what I'm using works great now. I only feel some "peer pressure" to signal to other people that I know C++20, but as of now, I've put nothing into it. I think it's best to lag behind a few years (for this language, specifically).
midnightclubbed [3 hidden]5 mins ago
The compilers tend to lag a few years behind the language spec too, especially if you have to support platforms where the toolchains lag latest gcc/clang (Apple / Android / game consoles).
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantees to do what you usually want constexpr to do. Have personally found it great for making lookup tables and reducing the numbers of constants in code (and c++23 expands what can be done in consteval).
Designated initializer is a game-changer for filling structures. No more accidentally populating the wrong value into a structure initializer or writing individual assignments for each value you want to initialize.
rvz [3 hidden]5 mins ago
On the other hand, the decline of robust and high-quality software started with the introduction of very immature languages, such as those of the JavaScript and TypeScript ecosystems.
It's really any other language other than those two.
tialaramex [3 hidden]5 mins ago
Here's how Bjarne describes that first C++ program:
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
Maxatar [3 hidden]5 mins ago
The collect_lines example won't even compile, it's not valid C++, but there's undefined behavior in one of the examples? I'm very surprised and would like to know what it is, that would be truly shocking.
tialaramex [3 hidden]5 mins ago
Really? If you've worked with C++ it shouldn't be shocking.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
otabdeveloper4 [3 hidden]5 mins ago
"Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
notfed [3 hidden]5 mins ago
> "Undefined behavior" is not a bug. It's something that isn't specified by an ISO standard.
An ISO standard? According to who, ISO?
zie1ony [3 hidden]5 mins ago
Seeing badly formatted code snippets without color highlighting in an article called "21st Century C++" somehow resonates with my opinion of how hard C++ still is to write and to read after working with other languages.
AtlasBarfed [3 hidden]5 mins ago
This honestly looks like C++ being jury-rigged with features to the degree that it doesn't even look like what C++ is: a C-derived low-level language.
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
DidYaWipe [3 hidden]5 mins ago
Yeah, I didn't have a problem keeping my shit straight in C++ in the '90s. The kitchen-sink approach since then hasn't been worth keeping up with. The fact that we're still dealing with header files means that the language stewards' priorities are not in line with practical concerns.
wiseowise [3 hidden]5 mins ago
I always hear about “import std” but still don’t see out-of-the-box support for it. Is it still experimental?
DidYaWipe [3 hidden]5 mins ago
Just reading the first 1/5 of this made me bored. I started my career with C++, being heavy into it for 10 years. But I've been doing Swift for the last 10 at least. I had a job interview last week for a job that was heavy C++, with major reliance on templates and post-C++ 11... and it didn't go well. You know what? I don't give a shit.
bboygravity [3 hidden]5 mins ago
It's crazy that with that amount of experience you wouldn't get the job, just because you lack some modern C++ info in your brain's memory. Stuff you could search for or ask an LLM in 5 seconds (or even look up in a freaking physical book). You'd probably be fully up to date within a few weeks.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
jpc0 [3 hidden]5 mins ago
If you last worked on pre-templates C++ and now need to work on a template-heavy codebase, you are effectively writing in a different language. I don't think it will be a few weeks of catching up.
modernerd [3 hidden]5 mins ago
I haven't read much from Bjarne but this is refreshingly self-aware and paints a hopeful path to standardize around "the good parts" of C++.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
bb88 [3 hidden]5 mins ago
The problem with 45 years of C++ is that different eras used different features. If you have 3 million lines of C++ code written in the 1990's that still compiles and works today, should you use new 202x C++ features?
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
fuzztester [3 hidden]5 mins ago
>footguns
I was expecting that someone would have posted this by now:
I'm curious about that now, too. Is there the equivalent of Python's ruff or Rust's cargo clippy that can call out code that is legal and well-formed but could be better expressed another way?
bluGill [3 hidden]5 mins ago
Clang-tidy can rewrite some old code into a better style. However, there is a lot of working code from the 1990s that cannot be automatically rewritten to a new style. Which is what makes adding tooling hard: somehow you need to figure out which code should follow the new style and which is old style where updating to modern would be too expensive.
lenkite [3 hidden]5 mins ago
> As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though
Did you even read the article? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: “A tour of C++” for experienced programmers and “Programming: Principles and Practice using C++” for novices. Two more books explore aspects of the C++ Core Guidelines
J. Davidson and K. Gregory Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. 2021. ISBN 978-0137647842
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
einpoklum [3 hidden]5 mins ago
> And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax, and transpile it into C++ :
Bjarne Stroustrup (the creator of C++) is the best language designer. Many language designers will create a language, work on it for a couple years, and then go and make another language. Stroustrup on the other hand has been methodically working on C++ and each year the language becomes better.
mskcc [3 hidden]5 mins ago
Prof. Bjarne's commitment to C++ is beyond comparison!
sixthDot [3 hidden]5 mins ago
So now even HN is being polluted with AI.
jjmarr [3 hidden]5 mins ago
Modules sound cool for compile time, but do they prevent duplicative template instantiations? Because that's the real performance killer in my experience.
Maxatar [3 hidden]5 mins ago
Modules don't treat templates any differently than non-modules so no, they don't prevent duplicate template instantiations.
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
senkora [3 hidden]5 mins ago
> You lose expressiveness
Or, more correctly, the following happens:
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
AnonC [3 hidden]5 mins ago
Tangential question: is there a Rust equivalent for the book “The Design and Evolution of C++”?
steveklabnik [3 hidden]5 mins ago
There is not.
I have often thought about writing something vaguely similar. We’ll see if I ever do. It wouldn’t be the same because I don’t hold the same position Bjarne did in the early days, but I am very interested in Rust history, and want to preserve it. It would be from my perspective rather than from the creator’s perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
AnonC [3 hidden]5 mins ago
Thank you. It would be interesting to read the history, including the design decisions, the influences (and distractions), the trade offs, etc.
When I read “The Design and Evolution of C++”, it gave me a better understanding of the language.
munificent [3 hidden]5 mins ago
I would 100% buy a hardback gold embossed version of this book.
steveklabnik [3 hidden]5 mins ago
Well if I could make it half as good looking as Crafting Interpreters, maybe I’d manage to make it happen, hahah.
I’m mostly focused on jj with my writing right now, but we’ll see…
justanotheratom [3 hidden]5 mins ago
C++ should be known for the amount of collective brain cycles wasted on arguing what subset of C++ is the right one to use.
mempko [3 hidden]5 mins ago
Professionals know what tool to use for a job. Does it take time to become good? Of course, like anything.
justanotheratom [3 hidden]5 mins ago
Not a question of difficulty or skill. I am saying professionals can't agree what subset to use!
mempko [3 hidden]5 mins ago
They don't have to. The subset depends on the job! That's the beauty and power of C++. That's why we have projects written in it in all domains. From websites to spaceships and Mars rovers.
justanotheratom [3 hidden]5 mins ago
Yes, and you will tell me exactly what subset and coding convention "makes sense" for this domain, and you will give your reasoning too. And I will give my arguments, and on and on it goes. Teams have broken up over this.
A well-designed language is one in which there are very few different ways of doing the same thing. And C++ is definitely not that.
mempko [3 hidden]5 mins ago
Why would a well designed language have only one or few ways to do the same thing? Seems rather arbitrary. I like when I have many ways to do the same thing.
Imagine if you told a writer or poet that English is bad because there is more than one way to say the same thing...
Programming languages are for people more than machines. Machines are happy with microcode.
ninetyninenine [3 hidden]5 mins ago
21st century C++? AKA Rust?
jandrewrogers [3 hidden]5 mins ago
Unfortunately, Rust is significantly less expressive than C++ and therefore is unlikely to replace it for high-performance systems code. As much as I don’t like C++, it is very powerful as a tool. The ability to express difficult low-level systems constructs and optimizations concisely and safely in the language are its killer feature. Once you know how to use it, other languages feel hobbled.
AlotOfReading [3 hidden]5 mins ago
C++ doesn't allow you to express low level systems constructs concisely and safely though. You usually get neither.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
jandrewrogers [3 hidden]5 mins ago
This assumes you are writing C++ in the most naive way possible. I’m sure some people do that but nothing requires it. The capabilities of a language are not defined by its worst programmers.
Modern C++ allows you to swap out most features and behaviors of the language with your own implementations that make different guarantees. C++ is commonly used in high-assurance environments with extremely high performance requirements, and it remains the most effective language for these purposes because you can completely replace most of the language with something that makes the safety guarantees you require. This is rather important. For example, userspace DMA is idiomatic in e.g. high-performance database kernels; handling this is much safer in C++ than Rust. In C++, you can trivially write elegant primitives that completely hide the unusual safety model. In Rust, you have to write a lot of ugly unsafe code to make this work at all because userspace DMA isn’t compatible with a borrow checker. There can always be multiple mutable references to memory, but this is not knowable at compile time; the safety of an operation can only be arbitrated at runtime.
Of course, it is still incumbent on the developer to use the language competently in all cases.
AlotOfReading [3 hidden]5 mins ago
The capabilities of a language are not defined by its worst programmers.
Is the implication here that Bjarne is a bad C++ developer? If the person in charge of the EWG fails "to use the language competently in all cases", what hope is there for the rest of us mere mortals?
For what it's worth, unsafe Rust is safer than C++. There's very little UB to explode your carefully crafted implementations. Safe rust of course has no UB except for what you write in unsafe blocks, so it's safer still and there's no real difference in the abstractions you can write with concepts vs traits.
I'm not actually arguing for rust here though, because this isn't a great showing for it. Trying to write the related add_wrap(T, T) function in rust is stupidly verbose compared to add_sat(T, T) thanks to bad decisions the num_traits authors made. What I am saying is C++ isn't a form of high level assembly like your original comment suggested. Understanding the relationship between the language and the hardware takes a lot of experience that most people don't use when writing code.
jandrewrogers [3 hidden]5 mins ago
UB is a feature of the standard, not the implementation. Many of those behaviors can be defined. Modern C++ conveniently allows you to replace many of the bits that have UB, per standard, with your own bits with defined behavior with zero overhead. This was not always the case. You aren’t dependent on the compiler implementor. The ability to consistently do this transparently became practical around C++17 IMO. The C++ standard library is in many regards obsolete and many orgs treat it that way.
I never suggested that C++ was “a form of high level assembly”. I’ve written enough assembly and C to know better; you lose a bit of precision with C++. But now I can define (or not) the behavior I want in a way that is largely transparent. This has been a brilliant change to the language.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation. Everyone doing high-performance and/or high-assurance systems is dragging in few if any dependencies, so this is practical. The kinds of things that C++ is really good at for new code are the kinds of things where this is what you would do regardless.
Developers don’t even have to be hardware experts, they just have to not use std for most things. That is a pretty low barrier. And std is a mess with the albatross of legacy support. Reimagined C++20 native “standard” libraries are much, much cleaner and safer (and faster).
Legacy C++ code bases aren’t going to be rewritten in a new language. New C++ code bases can take advantage of alternative foundations that ignore std and many do. Most things should not be written in C++, but for some things C++ is unmatched currently and safer in practice than is often suggested with basic hygiene.
imtringued [3 hidden]5 mins ago
DMA being a problem appears to be mostly a problem of lacking identification of the data. If the shape of the data could be verified by the language runtime, instead of being an arbitrary stream of bytes whose meaning must be known by the recipient without any negotiation, this form of unsafety would disappear: the receiving code simply needs to assert the schema, which could be as simple as checking a 32-bit integer.
Then all you need to do is also verify that the sending code adheres to the schema it specified.
This has very little to do with borrow checking. From the perspective of the borrow checker, a DMA call is no different from RPC or writing to a very wide pointer.
imtringued [3 hidden]5 mins ago
Most high performance code is vectorized and Rust is better at autovectorization and aliasing analysis than C++, so I'm not really seeing your point.
Having to drop down to intrinsics early is not a strength.
pro14 [3 hidden]5 mins ago
is the job market for C++ developers still good?
imron [3 hidden]5 mins ago
Depends. For certain fields the pay is great and there’s a dearth of candidates.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
crims0n [3 hidden]5 mins ago
For someone who wants to get into systems programming professionally, is C++ going to be a hard requirement or can one mostly get away with C/Rust?
pjmlp [3 hidden]5 mins ago
The only places where C++ failed to take C's crown have been UNIX clones (naturally, due to the symbiotic relationship) and embedded, where even modern C couldn't replace C89 + compiler extensions from the chip vendor. Many shops are stuck in the past, even though most toolchains are already up to C++20 and C17 nowadays.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it will also have the same issues as C++ regarding taking over C in UNIX/POSIX and embedded. C++ had the advantage of having been a kind of TypeScript for C in terms of adoption effort: a UNIX language from AT&T, designed to fit into the C ecosystem.
IshKebab [3 hidden]5 mins ago
Depends exactly what you want to do. C is not very popular at all in professional settings - C++ is far more popular. I would say if you know Rust then C++ isn't very hard though. You'll write better C++ code too because you'll naturally keep the good habits that the Rust compiler enforces and the C++ compiler doesn't.
DonHopkins [3 hidden]5 mins ago
Generalizing Overloading for C++2000
Bjarne Stroustrup,
AT&T Labs, Florham Park, NJ, USA
Abstract
This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected
to become part of the next revision of the standard. The focus is on general ideas rather than technical
details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
That's why C++ is still around today, it was built on some solid principles. Bjarne is such a good language designer because he never abandoned it. Lesser designers make a language and start another in 5 or 10 years. Bjarne saw the value in what he created and had a sense of responsibility to those using it to keep making it better and take their projects seriously.
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
mskcc [3 hidden]5 mins ago
Excellent article. Thanks for sharing.
biohcacker84 [3 hidden]5 mins ago
After decades of C++ development, I prefer C, modern Fortran and Rust.
Something about the formatting of the code blocks used is all messed up for me. Seems to be independent of the browser; it happens in both Firefox and Chrome.
npalli [3 hidden]5 mins ago
This is a Bjarne issue. For personal reasons he uses proportional fonts in his code blocks (in his texts) instead of monospaced and the code snippets always look bad. I guess he is stuck in his ways, just have to work around this ugly look.
breppp [3 hidden]5 mins ago
Looking at how aesthetically charming the C++ syntax is, I wouldn't expect anything less than Comic Sans code blocks
James_K [3 hidden]5 mins ago
> This is a Bjarne issue.
I have come to find this category of error to be distressingly large.
adrian_b [3 hidden]5 mins ago
Bjarne has nothing to do with the HTML/CSS pages of the ACM site, which select for displaying the code the default monospace font that is configured in the browser of the user.
If a proportional font is used for rendering, the most likely cause is that the user has not configured the default monospace font in the settings of the browser.
edflsafoiewq [3 hidden]5 mins ago
No, the formatting was definitely botched. It should look much better than it does even in a proportional font.
kstrauser [3 hidden]5 mins ago
Agreed. I wouldn't mind if, say, end of line comments weren't perfectly aligned. There's zero indentation so things like
for (string line; getline(is,line); )
s.insert(line);
are hard to visually parse.
adrian_b [3 hidden]5 mins ago
This must depend on some settings of the browser and perhaps also on the locally installed typefaces.
On my Firefox on Linux, this HTML page is not rendered with any custom typefaces, but it uses those specified by me as defaults for serif/sans serif/monospace.
The C++ code is rendered in my browser with my default, i.e. with JetBrains Mono and there is nothing weird.
The code quoted by you is indented as expected, not as in your posting.
On my computer, I have mostly typefaces that I have bought myself and which are seldom encountered in most computers. I do not have any of the typefaces that are typically specified in CSS rules, i.e. none of the typefaces that can be found in default installations of Windows, Linux or MacOS.
So perhaps there is a bug in their CSS at the definition of "wp-block-code", which on other computers selects a bad typeface that is proportional, so that the narrow spaces make the indentation disappear. (Their wp-block-code says "font-family:inherit" and I have not searched further to see from where the wrong font-family may be inherited.)
Here, perhaps because that bad typeface cannot be found, the browser uses my default monospace font and the code is displayed fine.
Or else, perhaps you have not set in your browser a proper default for monospace fonts and it just takes Arial or other such inappropriate system font even for monospace.
edflsafoiewq [3 hidden]5 mins ago
The formatting has been (partially) fixed since it was posted.
adrian_b [3 hidden]5 mins ago
This is not a Bjarne issue.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, a monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font. Whoever does not see a monospace font must have a proportional font set in their browser as the default monospace font, and should correct that.
jcelerier [3 hidden]5 mins ago
this is definitely an issue for the editors of the ACM journal
hkwerf [3 hidden]5 mins ago
It's typical Stroustrup style to write code in a variable width font. I'd wager they didn't have an option to use a variable-width font in their code blocks in their CMS and normal paragraphs are trimmed automatically.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
tialaramex [3 hidden]5 mins ago
The other give away is that he wants to use his awful "I/O streams" feature even though he also wants very modern features like modules.
Normal people who have a modern environment would use std::println, but Bjarne insists on using the I/O streams from last century instead.
adrian_b [3 hidden]5 mins ago
While you are right about the books of Stroustrup, here your inference is wrong, because Stroustrup cannot have anything to do with the CSS style sheets of the ACM Web site, which, in conjunction with the browser settings, determine the font used for rendering the text.
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
hkwerf [3 hidden]5 mins ago
They just fixed it by now. It was different when the story was new.
The code blocks aren't in a preformatted tag like <pre>, so the whitespace gets collapsed. It seems the intention was to turn spaces into &nbsp;, but however it was done got messed up, because lots of spaces didn't get converted.
adrian_b [3 hidden]5 mins ago
The code blocks are formatted as "wp-block-code", which seems to select the default monospace font of the browser.
My browser has an appropriate default monospace font (JetBrains Mono), so the code is formatted and indented correctly, as expected.
Where this does not happen, the setting for the default monospace font must be wrong, so it should be corrected.
edflsafoiewq [3 hidden]5 mins ago
It has been changed since it was posted. You can check the Wayback Machine for the original.
adrian_b [3 hidden]5 mins ago
Have you verified that your browsers have correct settings for their default fonts, i.e. a real monospace font as the default for "monospace"?
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only two possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers, or a bad configuration of your own browser, where you have not selected appropriate default fonts.
Cieric [3 hidden]5 mins ago
Firefox reader view seems to be a slight improvement since it removes the random right alignments in the article.
speerer [3 hidden]5 mins ago
This doesn't seem to be a code blog, but a general science communication blog. The editors may not be familiar with code syntax, and may simply be using a content management system and copy-pasting from source material.
mmoskal [3 hidden]5 mins ago
> ACM, the Association for Computing Machinery, is the world's largest educational and scientific society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges.
Most of programming language conferences are organized by ACM.
speerer [3 hidden]5 mins ago
I know, but the blog itself doesn't seem oriented to post code snippets. I clicked a few articles which were much more general.
kanbankaren [3 hidden]5 mins ago
Yeah. Looks nasty. Don't the editors of the ACM have a say on how the article is presented?
zygentoma [3 hidden]5 mins ago
Who the hell typeset this?
wrs [3 hidden]5 mins ago
Communications of the ACM has had unbelievably bad typography for code samples for decades (predating the web). No idea how this is allowed to continue.
rbanffy [3 hidden]5 mins ago
I'm guessing someone pasted from what went into the print edition. Or Bjarne himself.
It's just the first code snippet that's messed up. The rest is merely wonky.
rgovostes [3 hidden]5 mins ago
You don't use 10 spaces of indentation? It's the 21st century.
layer8 [3 hidden]5 mins ago
It’s a wchar_tab.
James_K [3 hidden]5 mins ago
[flagged]
dang [3 hidden]5 mins ago
Ok, but please don't fulminate on Hacker News. We're trying for something different here.
> Between Rust and Zig, the problems of C++ have been solved much more elegantly
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
James_K [3 hidden]5 mins ago
> it is not difficult to write very nice, readable, efficient, and safe C++ code
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way. That's because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
SirHumphrey [3 hidden]5 mins ago
Even Cobol code hasn't been ported in its entirety, and the whole codebase at its peak was probably orders of magnitude smaller than C++'s. It's also far easier to port Cobol, with it being used mostly for data processing and business logic, than C++ that was used for all manners of strange, esoteric and complicated pieces of software requiring thousands to millions of man-hours to port (for example most of Gecko and Blink).
C++ will be here forever, at least in some manner.
edit: spelling
James_K [3 hidden]5 mins ago
We can all at least appreciate that COBOL is something you try to get rid of where possible. If we took the same attitude to C++ as we do COBOL, then I think the issue would be much less severe.
spacechild1 [3 hidden]5 mins ago
> It is so objectively horrible in every capacity,
Total hyperbole and simply not true.
> but it still somehow managed to limp on for all these years
Before Rust became somewhat popular, there was simply no serious alternative to C++ in many domains.
James_K [3 hidden]5 mins ago
That in and of itself is a failure. The decision to continually bolt more stuff onto this mess instead of developing a viable alternative is honestly painful. When you look at something like Zig, it gets you much of what C++ offers and in a way that doesn't cause you pain. Is the argument that Zig simply wasn't possible 30 years ago? I doubt it. As best I can tell, Zig comes as the result of a relatively experienced C programmer making the observation that you could improve C in a lot of easy ways. Were it not for the existing mess, he might have called his language C++. Instead a Scandinavian nut-job decided to heap some mess on top of C and everyone just went along with it.
spacechild1 [3 hidden]5 mins ago
I guess someone had to make all these mistakes so that others can now learn from them :)
bandika [3 hidden]5 mins ago
Honestly, I am a happier and more productive developer since I left C++ behind for other languages. And it's not just the language, but the lack of ecosystem too. Things like the build system, managing dependencies, etc. are all such a pain compared to modern languages with a good ecosystem (Rust, Flutter, Kotlin, etc.)
pjmlp [3 hidden]5 mins ago
Start by removing Rust's dependency in GCC and LLVM, both written in C++.
tialaramex [3 hidden]5 mins ago
Rust doesn't "depend" on LLVM in the sense you seem to imagine, you can instead lower Rust's MIR into Cranelift (which is written in Rust) if you want for example.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
pjmlp [3 hidden]5 mins ago
It can, but until that becomes the reference implementation backend, it hardly matters.
Similar to how much Python folks disregard PyPy's existence.
I doubt LLVM project would start accepting polyglot contributions, beyond what they already do for language specific frontends.
Also, the ongoing GCC support is dependent on C++ as well.
otteromkram [3 hidden]5 mins ago
I dislike the style of code used to write this. I understand that, given who wrote the article, this is blasphemy.
Opening braces should be inline with the expression or definition.
Comments can be above what they're referred to.
Combined, this makes any code snippet look like crap on mobile and almost impossible to follow as a result.
999900000999 [3 hidden]5 mins ago
C++ and C still force the use of header files.
This is probably the biggest reason I've struggled with it (aside from tooling... it makes me miss npm).
[0] https://www.circle-lang.org/draft-profiles.html
The example in the article starts with "Wow, we have unordered maps now!" Just adding things modern languages have is nice, but doesn't fix the big problems. The basic problem is that you can't throw anything out. The mix of old and new stuff leads to obscure bugs. The new abstractions tend to leak raw pointers, so that old stuff can be called.
C++ is almost unique in having hiding ("abstraction") without safety. That's the big problem.
They absolutely could say "in C++26 vector::operator[] will be checked" and add an `.at_unsafe()` method.
They won't though because the whole standards committee still thinks that This Is Fine. In fact the number of "just get good" people in the committee has probably increased - everyone with any brains has run away to Rust (and maybe Zig).
Any C++ code without at least unit tests with 100% coverage, run under the UB sanitizer etc., must be considered inherently defective, and the developer should be flogged for his absurd levels of incompetence.
Then there is also the need for UB aware formal verification. You must define predicates/conditions under which your code is safe and all code paths that call this code must verifiably satisfy the predicates for all calls.
This means you're down to the statically verifiable subset of C++, which includes C++ that performs asserts at runtime, in case the condition cannot be verified at compile time.
How many C++ developers are trained in formal verification? As far as I am aware, they don't exist.
Any C++ developers reading this who haven't at least written unit tests with UB sanitizer for all of their production code should be ashamed of themselves. If this sounds harsh, remember that this is merely the logical conclusion of "just get good".
Because no one wants it enough to implement it.
I would never expect our 10M+ LOC performance-sensitive C++ code base to be formally memory safe, but so far only C++ allowed us to maintain it for 15 years with partial refactor and minimal upgrade pain.
Most languages take backwards compatibility very seriously. It was quite a surprise to me when Python broke so much code with the 3.12 release. I think it's the exception.
However, an application that I had written to be backward compatible with java 1.4, 15 years ago, cannot be compiled today. And I had to make major changes to have it run on anything past java 8, ~10 years ago, I believe.
$DAYJOB got burned badly twice on breaking Go behavioral changes delivered in non-major versions, so management created a group to carefully review Go releases and approve them for use.
All too often, Google's justification for breaking things is "Well, we checked the code in Google, and publicly available on Github, and this change wouldn't affect TOO many people, so we're doing it because it's convenient for us.".
Today we don’t have those limits on HDD space and can simply ship an embedded copy of JRE with the desktop app. In server environments I doubt anyone is reusing JRE between apps at all.
> ...sharing runtime dependencies [in C or C++] is hard...
Is it? The "foo.so foo.1.so foo.1.2.3.so" mechanism works really well, for libraries whose devs manage not to ship backwards-incompatible changes in patch versions, or ABI-breaking changes in minor versions.
“Often” is a huge exaggeration. I always hear about it, but never encountered it myself in 25 years of commercial Java development. It almost feels like some people are doing weird stuff and then blame the technology.
> Is it? The "foo.so foo.1.so foo.1.2.3.so"
Is it “sharing” or having every version of runtime used by at least one app?
Lucky you, I guess?
> Is it “sharing” or having every version of runtime used by at least one app?
I'm not sure what you're asking here? As I'm sure you're aware, software that links against dependent libraries can choose to not care which version it links against, or link against a major, minor, or patch version, depending on how much it does care, and how careful the maintainers of the dependent software are.
So, the number of SOs you end up with depends on how picky your installed software is, and how reasonable the maintainers of the libraries they use are.
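The symlink chain behind that mechanism can be sketched as follows (`libfoo` is a made-up library; real installs are laid out by the package manager, and the SONAME link is normally created by `ldconfig`):

```shell
# Each symlink level encodes a compatibility promise an app can link against.
dir=$(mktemp -d)
cd "$dir"
touch libfoo.so.1.2.3                 # real object: exact patch release
ln -s libfoo.so.1.2.3 libfoo.so.1     # SONAME link: "ABI-compatible major 1"
ln -s libfoo.so.1 libfoo.so           # dev link, used at link time
readlink libfoo.so                    # -> libfoo.so.1
readlink libfoo.so.1                  # -> libfoo.so.1.2.3
```

An app built against `libfoo.so` records the SONAME `libfoo.so.1`, so any 1.x.y drop-in works, while 2.x requires a rebuild.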
Granted, it is only those that can be machine verified.
Office is using C++20 modules in production, Vulkan also has a modules version.
In my opinion, good design and architecture are much higher on my list than memory safety. Being able to express my mental model as directly as possible is more important to me.
The Chromium team found that
> Around 70% of our high severity security bugs are memory unsafety problems (that is, mistakes with C/C++ pointers). Half of those are use-after-free bugs.
Chromium Security: Memory Safety (https://www.chromium.org/Home/chromium-security/memory-safet...)
Microsoft found that
> ~70% of the vulnerabilities Microsoft assigns a CVE each year continue to be memory safety issues
A proactive approach to more secure code (https://msrc.microsoft.com/blog/2019/07/a-proactive-approach...)
It’s possible you hadn’t come across these studies before. But if you have, and you didn’t find them convincing, what did they lack?
- Were the codebases not old enough? They’re anywhere between 15 and 30 years old, so probably not.
- Did the codebases not have enough users? I think both have billions of active users, so I don’t think so.
- Was it a “skill issue”? Are the developers at Google and Microsoft just not that good? Maybe they didn’t consider good design and architecture at any point while writing software over the last couple of decades. Possible!
There’s just one problem with the “skill issue” theory though. Android, presumably staffed with the same calibre of engineers as Chrome, also written in C++ also found that 76% of vulnerabilities were related to memory safety. We’ve got consistency, if nothing else. And then, in recent years, something remarkable happened.
> the percentage of memory safety vulnerabilities in Android dropped from 76% to 24% over 6 years as development shifted to memory safe languages.
Eliminating Memory Safety Vulnerabilities at the Source (https://security.googleblog.com/2024/09/eliminating-memory-s...)
They stopped writing new C++ code and the memory safety vulnerabilities dropped dramatically. Billions of Android users are already benefiting from much more secure devices, today!
You originally said
> And it's not clear if memory safety is the largest source of problems building software today.
It is possible to defend this by saying “what matters in software is product market fit” or something similar. That would be technically correct, while side stepping the issue.
Instead I’ll ask you: do you still think it is possible to write secure software in C++ by just trying a little harder, through “good design and architecture”, as your previous comment implied?
Basically 99% of networked applications that don't talk to a trusted server and all OS level libraries fall under that category.
Your HFT code is most likely not connecting to an exchange that is interested in exploiting your trading code so the exploit surface is quite small. The only potential exploit involves other HFT algorithms trying to craft the order books into a malicious untrusted input to exploit your software.
Meanwhile if you are Google and write an android library, essentially all apps from the play store are out to get you.
Basically C++ code is like an infant that needs to be protected from strangers.
Let's keep some sanity and perspective here, please. C++ has many long-standing problems, but banging on the "security" drum will only drive people away from alternative languages. (Everyone knows that "security" is just a fig leaf they use to strong-arm you into doing stuff you hate.)
> ~70% of the vulnerabilities Microsoft assigns a CVE
> 76% of vulnerabilities
What is the difference between the first two (emphasis added) and what you said? Just as a thought experiment...
If I measure a single factor to the exclusion of all others, I can also find whatever I want in any set of data. Now, your point may be valid, but it is not what they published, and without the full dataset we cannot validate your claim; I can, however, validate that what you claim is not what they claim.
To answer your question in the final paragraph: yes it is, but it requires the same cultural shift as what it would take to write the same code in Rust or Swift or Go or whatever other memory safe language you want to pick.
If rust was in fact viable for such a large project, how's the servo project going? That still the resounding success it was expected to be? Rust in the kernel? That going well?
The jury is still out on whether rust will be mass adopted and is able to usurp C/C++ in the domains where C/C++ dominate. It may get there, but I would much much rather start a new project using C++20 than in rust and I would still be able to make it memory safe and yes it is a "skill issue", but purely because of legacy C++ being taught and accepted in new code in a codebase.
Rules for writing memory safe C++ have not just been around for decades but have been statically checkable for over a decade. For a large project, however, there are too many errors to universally apply them to existing code without years of work. But if you submit new code using old practices, you should be held financially and legally responsible, just like an actual engineer in another field would be.
It's because we are lax about standards that it's even an issue.
As a note, if you see an Arc<Mutex<>> in rust outside of some very specific Library code whoever wrote that code probably wouldn't be able to write the same code in a memory and thread safe manner, also that is an architectural issue.
Arc and Mutex are synchronisation primitives that are meant to be used to build datastructures, not in "userspace" code. It's a strong code smell that is generally accepted in Rust. Arc probably shouldn't even need to exist at all, because it is a clear indication nobody thought about the ownership semantics of the data in question; maybe for some datastructures it is required, but you should very likely not be typing it into general code.
If Arc<Mutex<>> is littered throughout your rust codebase you probably should have written that code in C#/Java/Go/pick your poison...
It's a really weird concept that probably comes only from having this extremely complex language where even the designers expect some parts of it are too weird for "normal programmers". But then they imagine some advanced class of programmer, the "library programmers", who can deal with such complexity.
The more modern way of designing software is to stick to the YAGNI principle: design your code to be simple and straightforward, and only extract out datastructures into separate libraries if and when they prove to be needed.
Not to mention, the position that shared ownership should just not exist at all is self-evidently absurd. The lifetime of an object can very well be a dynamic property of your program, and a concurrent one. A language that lacks std::shared_ptr / Arc is simply not a complete language, there will be algorithms that you just can't express.
If you're not willing to do any work then you're just stuck, nobody can help you, magic "profiles" don't help either.
But, if you're willing to do work, why stop at profiles? Now we're talking about a price and I don't believe that somehow the minimum assignable budget is > $1Bn
The reason I like profiles is that they are not all or nothing. I can put them in new code only, or maybe a single file that I'm willing to take the time to refactor. Or at least so I hope; it remains to be seen if that is how they work out. I've been trying to figure out how to make Rust fit in, but std::vector<SomeVirtualInterface> is a real pain to wrap into Rust and so far I haven't managed to get anything done there.
The $1 billion is realistic - this project was a rewrite of a previous product that became unmaintainable, and inflation-adjusted the cost was $1 billion. You can maybe adjust that down a little if we are more productive, but not much. You can adjust it down a lot if you can come up with a way to keep our existing C++ and just extend new features and fix the old code only where it really is a problem. The code we have written in C++98 (because that was all we had in 2010) still compiles with the latest C++23 compiler, and since there are no known bugs it isn't worth updating that code to the latest standards, even though it would be a lot easier to maintain (which we never do) if we did.
It's also expected that you'll be able to do this with Safe C++. Of course the interop with older C++ code will then still involve unsafety. But incremental improvement should be possible.
While I agree with this in a general sense, I think it ought to be quite possible to come up with a "profile" spec that's simply meant to enforce the language restriction/subsetting part of Safe C++ - meaning only the essentials of the safety checking mechanism, including the use of the borrow checker. Of course, this would not be very useful on its own without the language and library extensions that the broader Safe C++ proposal is also concerned with. It's not clear as of yet if these can be listed as part of the same "profile" specifications or would require separate proposals of their own. But this may well be a viable approach.
Circle is an implementation of C++ that includes a borrow checker and is 100% backwards compatible with C++:
https://www.circle-lang.org/site/index.html
a nice attempt, but I have millions of lines of C++ that isn't going away.
You are welcome to take your millions of lines of C++ code and it will compile without change using Circle as any valid C++ code is valid Circle code, which is the technical definition of being backward compatible.
You don't need to change existing code to use Circle or the new features Circle introduces, you can just write new classes and functions with those features and your existing code will continue to compile as-is.
All my efforts to do the above so I can mix C++ and Rust have quickly failed when I realized that my wrappers would not be thin, and thus they would incur large performance penalties.
Until then... YAWN.
When the heck that day will actually arrive, FSM only knows. The will is sort-of there, but there are just SO many other things competing for my time and attention. :-(
[1]: funny side story about that. For anybody too young to remember just how hot the job market was back then... one day I was sitting stopped at a traffic light in Durham (NC). I'm just minding my own business, waiting for the light to change, when I catch a glimpse out of my side mirror, of somebody on foot, running towards my car. The guy gets right up to my car, and I think I had my window down already anyway. Anyway, the guy gets up to me, panting and out of breath from the run and he's like "Hey, I noticed your license plate and was wondering if you were looking for a new job." About then the light turned green in my direction, and I'm sitting there for a second in just stunned disbelief. This guy got out of his car, ran a few car lengths, to approach a stranger in traffic, to try to recruit him. I wasn't going to sit there and have a conversation with horns honking all around me, so I just yelled "sorry man" and drove off. One of the weirder experiences of my life.
https://youtube.com/watch?v=yDbvVFffWV4
C++ code involves numerous templates, and the definition of those templates is almost always in a header file that gets included into a translation unit. If a safety profile is enabled in one translation unit that includes a template, but is omitted from another translation unit that includes that same template... well what exactly gets compiled?
The rule in C++ is that it's okay to have multiple definitions of a declaration if each definition is identical. But if safety profiles exist, this can result in two identical definitions having different semantics.
There is currently no resolution to this issue.
It's a bit optimistic, because modules are still not really a viable option in my eyes: you need proper support from the build systems, and notably CMake only has limited support for them right now.
The tooling is way better than it was 6 months ago though, as in I can actually compile code in a non-Visual-Studio project using import std.
I will be extremely happy the day I no longer need to see a preprocessor directive outside of library code.
(I must say that I was happy to see/read that article, though)
Over my career I’ve written hundreds of thousands of lines of it.
But keeping up with it is time consuming and more and more I find myself reaching for other languages.
Bjarne has been criticized for accepting too many (questionable) things into the language even at the dawn of C++, and the committee kept up that behavior. Moreover, they have this pattern that, given the options, they always choose the easiest-to-misuse and most unsafe implementation of anything that goes into the standard. std::optional is a mess, so is curly bracket initialization, and auto is like choosing between stepping on Legos or putting your arm into a spider-full bag.
The committee is the worst combination of "move fast and break things" and "not on my watch". C++98 was an okay language, C++11 was alright. Anything after C++14 is a minesweeper game with increasing difficulty.
He even writes that way in his own article... The quote from the last section of the introduction was hilarious, and actually made me laugh a little bit for almost those exact reasons.
BS, Comm ACM > "I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many."
Less and less, for sure.
Nothing the past few years.
They killed it.
As someone who worked in HFT, C++ is very much alive, and new projects continue to be created in it simply because of the sheer amount of experts in it. (For better or for worse.)
That's probably most devices in the world.
C++ has been dead and effectively banned at amzn for years. Only very specific (robotics and ML generally) projects get exemptions. Rust is big and only getting bigger
>>contemporary C++30 can express the ideas embodied in such old-style code far simpler
IMO, newer C++ versions are becoming more complex (too many ways to do the same thing), less readable (prefer explicit types over 'auto', unless unavoidable) and harder to analyse performance and memory implications (hard to even track down what is happening under the hood).
I wish the C++ language and standard library would have been left alone, and efforts went into another language, say improving Rust instead.
Where do you see difficult to track down performance/memory implications? Lambda comes to mind and maybe coroutines (yet to use them but guessing there may be some memory allocations under the hood). I like that I can breakpoint my C++ code and look at the disassembly if I am concerned that the compiler did something other than expected.
Effectively, other than rewriting older iterator-based algorithms to use the new ranges iterators, I just don't use std::ranges... Likely the compiler cannot optimise it as well (yet), and all the edge cases are not worked out yet. I also find it quite difficult to reason about vs the older iterator-based algorithms.
for_each takes an iterator pair and a lambda, and calls the lambda for each element: if the compiler can optimise it, it becomes a loop; if it can't, it becomes a function call in a loop, which probably isn't much worse... If for some reason the lambda needs to allocate per iteration, it's going to be a performance nightmare.
Would it really be much harder to take that lambda, move it to a templated function that takes an iterator and call it the old fashioned way?
Java solved this by making var a reserved type, not a keyword, but I don't know if that's feasible for C++.
You don't have to use features. Instead, when you have a (language) problem to solve or something you'd like to have, you look into the features of the language.
Knowing they exist beforehand is better but is the hard part, because "deep" C++ is so hermetic that it is difficult to understand a feature when you have no idea which problem it is trying to solve.
There are certainly better tools for many jobs and it is important to have languages to reach for depending on the task at hand. I don't know that anything is better than C++ for performance sensitive code.
I was using c++11 when it was still called c++0x (and even before that when many of the features were developing in boost).
I took a break for a few years over c++14, but caught up again for c++17 and parts of c++20...
Which puts me 5-6 years behind the current state of things and there’s even more new features (and complexity) on the horizon.
I’m supportive of efforts to improve and modernize c++, but it feels like change didn’t happen at all for far too long and now change is happening too fast.
The ‘design by committee’ with everyone wanting their pet feature plus the kitchen sink thrown in doesn’t help reduce complexity.
Neither does implementing half-baked features from other ‘currently trendy’ languages.
It’s an enormous amount of complexity - and maybe for most code there’s not that much extra actual complexity involved but it feels overwhelming.
> I used the from_range argument to tell the compiler and a human reader that a range is used, rather than other possible ways of initializing the vector. I would have preferred to use the logically minimal vector{m} but the standards committee decided that requiring from_range would be a help to many.
Oh so I have to remember from_range and can't do the obvious thing? Great. One more thing to distract me from solving the actual problem I'm working on.
What exactly is wrong with the C++ community that blinds them to this sort of thing? I should be able to write performant, low-level code leveraging batteries-included algorithms effortlessly. This is 2025 people.
Respectfully, you might want to add at least a few C++20 features into your daily usage?
consteval/constinit guarantees to do what you usually want constexpr to do. Have personally found it great for making lookup tables and reducing the numbers of constants in code (and c++23 expands what can be done in consteval).
Designated initializer is a game-changer for filling structures. No more accidentally populating the wrong value into a structure initializer or writing individual assignments for each value you want to initialize.
It's really any other language other than those two.
"a simple program that writes every unique line from input to output"
Bjarne does thank more than half a dozen people, including other WG21 members, for reviewing this paper, maybe none of them read this program?
More likely, like Bjarne they didn't notice that this program has Undefined Behaviour for some inputs and that in the real world it doesn't quite do what's advertised.
The first example uses the int type. This is a signed integer type and in practice today it will usually be the 32-bit signed integer Rust calls i32 because that's cheap on almost any hardware you'd actually use for general purpose software.
In C++ this type has Undefined Behaviour if allowed to overflow. For the 32-bit signed integer that will happen once we see 2^31 identical lines.
In practice the observed behaviour will probably be that it treats 2^32 identical lines as equivalent to zero prior occurrences and I've verified that behaviour in a toy system.
Rust code is 100 percent undefined behavior because Rust doesn't have an ISO standard. So, theoretically some alternative Rust compiler implementation could blow up your computer or steal your bitcoins. There's no ISO standard to forbid them from doing so.
(You see where I'm going with this? Standards are good, but they're a legal construct, not an algorithm.)
An ISO standard? According to who, ISO?
Everything is unobvious magic. Sure, you stick to a very restricted set of API usages and patterns, and all the magic allocation/deallocation happens out of sight.
But does that make it easier to debug? Better to code it?
This simply looks like C++ trying not to look like C++: like a completely different language, but one that was not built from the ground up to be that language, rather a bunch of shell games to make it look like another language as an illusion.
Says a lot about the people hiring imo. Good luck to them finding someone who can recite C++ spec from memory.
As a C++ newbie I just don't understand the recommended path I'm supposed to follow, though. It seems to be a mix of "a book of guidelines" and "a package that shows you how you should be using those guidelines via implementation of their principles".
After some digging it looks like the guidebook is the "C++ Core Guidelines":
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
And I'm supposed to read that and then:
> use parts of the standard library and add a tiny library to make use of the guidelines convenient and efficient (the Guidelines Support Library, GSL).
Which seems to be this (at least Microsoft's implementation):
https://github.com/microsoft/GSL
And I'm left wondering, is this just how C++ is? Can't the language provide tooling for me to better adhere to its guidelines, bake in "blessed" features and deprecate what Bjarne calls, "the use of low-level, inefficient, and error-prone features"? I feel like these are tooling-level issues that compilers and linters and updated language versions could do more to solve.
I still feel the sting of being bit by C++ features from the 1990s that turned out to be footguns.
Honestly, I kinda like the idea of "wrapper" languages. Typescript/Kotlin/Carbon.
I was expecting that someone would have posted this by now:
How to Shoot Yourself In the Foot:
https://www-users.york.ac.uk/~ss44/joke/foot.htm
Did you even read the article ? He has given the recommended path in the article itself.
Two books describe C++ following these guidelines except when illustrating errors: “A tour of C++” for experienced programmers and “Programming: Principles and Practice using C++” for novices. Two more books explore aspects of the C++ Core Guidelines
J. Davidson and K. Gregory Beautiful C++: 30 Core Guidelines for Writing Clean, Safe, and Fast Code. 2021. ISBN 978-0137647842
R. Grimm: C++ Core Guidelines Explained. Addison-Wesley. 2022. ISBN 978-0136875673.
Well, first, the language can't provide tooling: C++ is defined formally, not through tools; and tools are not part of the standard. This is unlike, say, Rust, where IIANM - so far, Rust has been what the Rust compiler accepts.
But it's not just that. C++ design principles/goals include:
* multi-paradigmatism;
* good backwards compatibility;
* "don't pay for what you don't use"
and all of these in combination prevent baking in almost anything: It will either break existing code; or force you to program a certain way, while legitimate alternatives exist; or have some overhead, which you may not want to pay necessarily.
And yet - there are attempts to "square the circle". An example is Herb Sutter's initiative, cppfront, whose approach is to take in an arguably nicer/better/easier/safer syntax, and transpile it into C++ :
https://github.com/hsutter/cppfront/
(It's a great post in general. N.B. that it's also quite old and export templates have been removed from the standard for quite some time after compiler writers refused to implement them.)
TL;DR: Declare your templates in a header, implement them in a source file, and explicitly instantiate them inside that same source file for every type that you want to be able to use them with. You lose expressiveness but gain compilation speed because the template is guaranteed to be compiled exactly once for each instantiation.
Which is to say, "extern template" is a thing that exists, that works, and can be used to do what you want to do in many cases.
The "export template" feature was removed from the language because only one implementer (EDG) managed to implement them, and in the process discovered that a) this one feature was responsible for all of their schedule misses, b) the feature was far too annoying to actually implement, and c) when actually implemented, it didn't actually solve any of the problems. In short, when they were asked for advice on implementing export, all the engineers unanimously replied: "don't". (See https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14... for more details).
Or, more correctly, the following happens:
1. You gain the ability to use the compilation unit's anonymous namespace instead of a detail namespace, so there is better encapsulation of implementation details. The post author stresses this as the actual benefit of export templates, rather than compile times.
2. You lose the ability to instantiate the template for arbitrary types, so this is probably a no-go for libraries.
3. Your template is guaranteed to be compiled exactly once for each explicit instantiation. (Which was never actually guaranteed for real export templates).
I have often thought about writing something vaguely similar. We’ll see if I ever do. It wouldn’t be the same because I don’t hold the same position Bjarne did in the early days, but I am very interested in Rust history, and want to preserve it. It would be from my perspective rather than from the creator’s perspective.
I did give a talk one time on Rust’s history. It was originally at FOSDEM, but there was an issue with the recording. The ACM graciously asked me to do it again to get it down on video https://dl.acm.org/doi/10.1145/2959689.2960081
When I read “The Design and Evolution of C++”, it gave me a better understanding of the language.
I’m mostly focused on jj with my writing right now, but we’ll see…
A well-designed language is one in which there are very few different ways of doing the same thing. And C++ is definitely not that.
Imagine if you told a writer or poet that English is bad because there is more than one way to say the same thing...
Programming languages are for people more than machines. Machines are happy with microcode.
Look at the first example in the article, where the increment can overflow and cause UB despite that overflow having completely defined semantics at the hardware level. Fixing it requires either a custom addition function or C++26, another include, and add_sat(). I wouldn't consider either concise in a program that doesn't include all of std.
Modern C++ allows you to swap out most features and behaviors of the language with your own implementations that make different guarantees. C++ is commonly used in high-assurance environments with extremely high performance requirements, and it remains the most effective language for these purposes because you can completely replace most of the language with something that makes the safety guarantees you require. This is rather important. For example, userspace DMA is idiomatic in e.g. high-performance database kernels; handling this is much safer in C++ than Rust. In C++, you can trivially write elegant primitives that completely hide the unusual safety model. In Rust, you have to write a lot of ugly unsafe code to make this work at all, because userspace DMA isn't compatible with a borrow checker: there can always be multiple mutable references to memory, but this is not knowable at compile time, so the safety of an operation can only be arbitrated at runtime.
Of course, it is still incumbent on the developer to use the language competently in all cases.
For what it's worth, unsafe Rust is safer than C++. There's very little UB to explode your carefully crafted implementations. Safe Rust, of course, has no UB except for what you write in unsafe blocks, so it's safer still, and there's no real difference in the abstractions you can write with concepts vs traits.
I'm not actually arguing for rust here though, because this isn't a great showing for it. Trying to write the related add_wrap(T, T) function in rust is stupidly verbose compared to add_sat(T, T) thanks to bad decisions the num_traits authors made. What I am saying is C++ isn't a form of high level assembly like your original comment suggested. Understanding the relationship between the language and the hardware takes a lot of experience that most people don't use when writing code.
I never suggested that C++ was “a form of high level assembly”. I’ve written enough assembly and C to know better; you lose a bit of precision with C++. But now I can define (or not) the behavior I want in a way that is largely transparent. This has been a brilliant change to the language.
If you have a foundational library that makes different and/or explicit guarantees than std, it is pretty easy to police that in a code base with automation. Everyone doing high-performance and/or high-assurance systems is dragging in few if any dependencies, so this is practical. The kinds of things that C++ is really good at for new code are the kinds of things where this is what you would do regardless.
Developers don’t even have to be hardware experts, they just have to not use std for most things. That is a pretty low barrier. And std is a mess with the albatross of legacy support. Reimagined C++20 native “standard” libraries are much, much cleaner and safer (and faster).
Legacy C++ code bases aren’t going to be rewritten in a new language. New C++ code bases can take advantage of alternative foundations that ignore std and many do. Most things should not be written in C++, but for some things C++ is unmatched currently and safer in practice than is often suggested with basic hygiene.
Then all you need to do is also verify that the sending code adheres to the schema it specified.
This has very little to do with borrow checking. From the perspective of the borrow checker, a DMA call is no different from RPC or writing to a very wide pointer.
Having to drop down to intrinsics early is not a strength.
For other fields there is also a dearth of candidates but the pay falls short and you’ll be leaving tens of thousands of dollars on the table compared to what you could get with other languages.
Rust is still too new for many folks to adopt, it depends on how much you would be willing to help grow the ecosystem, versus doing the actual application.
It will eventually get there, but it will also face the same issues as C++ regarding taking over from C in UNIX/POSIX and embedded, and C++ had the advantage of being a kind of TypeScript for C in terms of adoption effort: a UNIX language from AT&T, designed to fit into the C ecosystem.
Bjarne Stroustrup, AT&T Labs, Florham Park, NJ, USA
Abstract
This paper outlines the proposal for generalizing the overloading rules for Standard C++ that is expected to become part of the next revision of the standard. The focus is on general ideas rather than technical details (which can be found in AT&T Labs Technical Report no. 42, April 1, 1998).
https://www.stroustrup.com/whitespace98.pdf
Whenever I have an idea and I start a project, I start with C++ because I know if the idea works out, the project can grow and work 10 years later.
(How is that possible, someone may ask? It's the SCP! - see https://news.ycombinator.com/item?id=26998308)
I have come to find this category of error to be distressingly large.
If a proportional font is used for rendering, the most likely cause is that the user has not configured the default monospace font in the settings of the browser.
On my Firefox on Linux, this HTML page is not rendered with any custom typefaces, but it uses those specified by me as defaults for serif/sans serif/monospace.
The C++ code is rendered in my browser with my default, i.e. with JetBrains Mono and there is nothing weird.
The code quoted by you is indented as expected, not as in your posting.
On my computer, I have mostly typefaces that I have bought myself and which are seldom encountered in most computers. I do not have any of the typefaces that are typically specified in CSS rules, i.e. none of the typefaces that can be found in default installations of Windows, Linux or MacOS.
So perhaps there is a bug in their CSS at the definition of "wp-block-code", which on other computers selects a bad typeface that is proportional, so that the narrow spaces make the indentation disappear. (Their wp-block-code says "font-family:inherit" and I have not searched further to see from where the wrong font-family may be inherited.)
Here, perhaps because that bad typeface cannot be found, the browser uses my default monospace font and the code is displayed fine.
Or else, perhaps you have not set in your browser a proper default for monospace fonts and it just takes Arial or other such inappropriate system font even for monospace.
The font is selected by the HTML/CSS of the ACM site, not by Bjarne.
There may be a bug in the CSS of the ACM site, but I think that it is more likely that anyone who does not see correctly formatted code on that page has forgotten to open the settings of their browsers and select appropriate default fonts for "serif", "sans serif" and "monospace".
As installed, most browsers very seldom have appropriate default fonts, you normally must choose them yourself.
In this case, a monospace font is mandatory for rendering the code on that page, because the indentation is done with spaces, which become too narrow when rendered with a proportional font. Whoever does not see a monospace font must have a proportional font set in their browser as the default monospace font, and should correct this.
I didn't see the author at first. However, immediately after seeing the code I checked for the author, because I was sure it was Stroustrup.
Normal people with a modern environment would use std::println, but Bjarne insists on using the I/O streams from last century instead
On my browser, all the code is properly indented, most likely because my browsers are configured correctly, i.e. with a monospace font set as the default for "monospace".
Whoever does not see indentation, most likely has not set the right default font in their browser.
My browser has an appropriate default monospace font (JetBrains Mono), so the code is formatted and indented correctly, as expected.
Where this does not happen, the setting for the default monospace font must be wrong, so it should be corrected.
Here the code is displayed with my default monospace font, as configured in browsers, so the formatting is fine.
There are only 2 possible reasons for the bad formatting: a bug in the CSS of the ACM site, which selects a bad font on certain computers or a bad configuration of your own browsers, where you have not selected appropriate default fonts.
Most of programming language conferences are organized by ACM.
It's just the first code snippet that's messed up. The rest is merely wonky.
https://news.ycombinator.com/newsguidelines.html
Those languages occupy different points in the design space than C++. And thus, in the general sense, neither of them, nor their combination, is "C++ with the problems solved". I know very little Rust and even less Zig. But I do know that there are various complaints about Rust, which are different than the kinds of complaints you get about C++ - not because Rust is bad, just because it's different in significant ways.
> It is so objectively horrible in every capacity
Oh, come now. You do protest too much... yes, it has a lot of warts. And it keeps them, since almost nothing is ever removed from the language. And still, it is not difficult to write very nice, readable, efficient, and safe C++ code.
That's a fine case of Stockholm Syndrome you've got there. In reality, it is hard. The language fights you every step of the way, because the point in the design space C++ occupies is a uniquely stupid one. It wants to have its cake and eat it too. The pipe-dream behind C++ is that you can write code in an expressive manner and magically have it also be performant. If you want fast code, you have to be explicit about many things. C++ ties itself in knots trying to be implicitly explicit about those things, and the result is just plain harder to reason about. If you want code that's safe and fast, you go with Rust. If you want code that's easy and fast, you go with Zig. If you want code that's easy and safe, you go with some GCed lang. Then if you want code that's easy, safe, and fast, you pick C++ and get code which might be fast. You cannot have all three things. Many other languages find an appropriate balance of these three traits to be worthwhile, but C++ does not. It's been 40 years since the birth of C++ and they are only just now trying to figure out how to make it compile well.
C++ will be here forever, at least in some manner.
edit: spelling
Total hyperbole and simply not true.
> but it still somehow managed to limp on for all these years
Before Rust became somewhat popular, there was simply no serious alternative to C++ in many domains.
LLVM's optimiser is more powerful, and it handles unwinding, so today most people want LLVM but actually I think LLVM's future might involve more Rust.
Similar to how much Python folks disregard PyPy's existence.
I doubt LLVM project would start accepting polyglot contributions, beyond what they already do for language specific frontends.
Also, the ongoing GCC support is dependent on C++ as well.
Opening braces should be inline with the expression or definition.
Comments can be above what they refer to.
Combined, this makes any code snippet look like crap on mobile and almost impossible to follow as a result.
This is probably the biggest reason I've struggled with it (aside from tooling... makes me miss npm).