HN.zip

Is software abstraction killing civilization? (2021)

205 points by yamrzou - 163 comments
recursivedoubts [3 hidden]5 mins ago
I teach the systems class at Montana State, where we go from transistors up to a real computing system, and I have students who don't understand what a file system really is when they start my class.

I agree that Blow is wrong on some details, but I really think we need to be thinking hard about a NAND-to-Tetris style education that starts in high school for our technical students.

I use "outdated" models like Little Man Computer and a simple visual MIPS emulator which, even though they aren't realistic, at least give the students a sense of where we came from, with a level of complexity that a normal human can get their head around. When I look at modern 64 bit architecture books that get suggested for me to teach with I just laugh.

Anyway, not to say I agree 100% with Blow on everything, but connecting technology down to the roots is a hard problem.

cookiengineer [3 hidden]5 mins ago
When I was a kid, I got a C64 pretty early. I also got a huge magazine collection from my uncle when he had to move and threw out his old C64 stuff.

When I wanted to play a game, I had my favorite magazines, and I had to type in the game code because I didn't have a working floppy disk at first (needed to look for one for around 2 years at flea markets, different story).

But honestly I never thought much of it, until years later when someone debugged a C++ binary written for an embedded controller and I realized that I could understand assembly code. To me this was mind-blowing, because I had never associated C64 games with programming or anything like that. I changed careers into pentesting and malware reverse engineering after that.

I think part of the problem of the current generational knowledge transfer crisis is that we (roughly Gen Y?) had such a unique education, due to the old way of doing things fading out and the new way of doing things just starting.

In a world without the DMCA I would love to share my digital collection. Some day, when there's the need, I will probably switch to being a digital librarian. I hoard so much stuff because I constantly see how much knowledge is being lost or forgotten, but to me it is essential to understanding how computers work.

As you said, the x64 knowledge alone is already so high-level, and I'm almost crying about all the old websites that I discovered back when AltaVista was still cool. I wish I had kept more of my old C64 and CCC (Chaos Computer Club) magazines, because most of them I can't even find anymore.

layer8 [3 hidden]5 mins ago
The concept of files and file systems is useful to regular computer users, even when they have no interest in knowing how things work under the hood. The issue is with mobile OSs, and that software companies like their apps to be a walled garden for your data as much as possible, and therefore resist exposing your data as files living in a normal shared file system. Even if you already work with files, they have you “import” your existing data into their storage system, and you have to manually “export” (or “share”) any modifications as a new, separate copy.
DiggyJohnson [3 hidden]5 mins ago
In the name of low effort, tangential, golden era HN comments: the decision to hide file format extensions on windows (and maybe other OSs) sucks soooo much.

The point about mobile devices breaking the desktop metaphor and file system norms is really interesting.

Higher quality discussion question: Files, buffers, file systems, file explorers, and window managers seem like useful abstractions to me for the human computer user. Why did we not end up with something like “every app has a folder with your files” or “if you have a new iPhone, just send or recover your user files from the official Reddit app on your old device and import them to carry on where you left off on your new device. Welcome back to the Reddit app.”

layer8 [3 hidden]5 mins ago
Hiding file name extensions is a bad default, but at least it’s just two clicks in the Explorer ribbon to permanently unhide them: https://static1.howtogeekimages.com/wordpress/wp-content/upl... (Well, it’s now three clicks in Windows 11 it seems.)
rjbwork [3 hidden]5 mins ago
> Why did we not end up with something like ...

Because that's antithetical to control. The biggest sin in a technology business is allowing your customers to stop using your product with no negative consequences for them.

philwelch [3 hidden]5 mins ago
File name extensions aren’t an inherent or necessary part of the desktop metaphor. They can be stored as metadata in the file system. Mac OS used to do this.
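(To make that concrete: the type doesn't have to live in the file name at all. Here is a hedged Node/TypeScript sketch that guesses a file's type from its leading magic bytes; the tiny signature table and the function name are invented for illustration, though the byte signatures themselves are the standard PNG/PDF/ZIP ones.)

    // Sketch: infer a file's type from its leading "magic bytes" instead of its
    // name extension. Only the three well-known signatures below are checked.
    import { openSync, readSync, closeSync } from "node:fs";

    const SIGNATURES: Array<{ type: string; magic: number[] }> = [
      { type: "PNG image", magic: [0x89, 0x50, 0x4e, 0x47] },    // \x89PNG
      { type: "PDF document", magic: [0x25, 0x50, 0x44, 0x46] }, // %PDF
      { type: "ZIP archive", magic: [0x50, 0x4b, 0x03, 0x04] },  // PK\x03\x04
    ];

    function sniffType(path: string): string {
      const fd = openSync(path, "r");
      const buf = Buffer.alloc(8);
      readSync(fd, buf, 0, buf.length, 0); // read the first few bytes only
      closeSync(fd);
      const hit = SIGNATURES.find(s => s.magic.every((b, i) => buf[i] === b));
      return hit ? hit.type : "unknown";
    }

    console.log(sniffType("holiday-photo")); // works even with no extension at all
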
aoanevdus [3 hidden]5 mins ago
The structure of the files an app uses internally is undocumented and not intended to be a user-facing API. Who wants to be responsible for handling every insane thing the users decide to do to those files?
recursivedoubts [3 hidden]5 mins ago
Yes, also the curse of modern desktop OSes trying to trick people into storing data in the cloud. The notion of just having files somewhere accessible and organized in a reasonable manner isn’t clear to many (most?) of my students.
lmz [3 hidden]5 mins ago
Which is a better security model in these days of untrusted apps vs the desktop "my screensaver can read my Chrome passwords file" model.
layer8 [3 hidden]5 mins ago
Access permissions are orthogonal to having a file system. In fact, mobile apps still use the local file system, they just hide it from the user. And password files should still be encrypted with a master key, e.g. application-private secure enclave key where available.
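(To illustrate that last point, a rough Node/TypeScript sketch of sealing a password blob under a master key, so that read access to the file alone is not enough. In a real app the key would come from an OS keystore or secure enclave rather than being generated inline, and the helper names here are made up.)

    // Encrypt a password blob with a master key (AES-256-GCM).
    import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

    // Stand-in for a key held by a secure enclave / OS keystore.
    const masterKey = randomBytes(32);

    function seal(plaintext: string): Buffer {
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
      const body = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      // Store IV + auth tag next to the ciphertext so it can be decrypted later.
      return Buffer.concat([iv, cipher.getAuthTag(), body]);
    }

    function unseal(sealed: Buffer): string {
      const iv = sealed.subarray(0, 12);
      const tag = sealed.subarray(12, 28);
      const body = sealed.subarray(28);
      const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(body), decipher.final()]).toString("utf8");
    }

    const blob = seal(JSON.stringify({ "example.com": "hunter2" }));
    console.log(unseal(blob)); // only code holding masterKey can get this back
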
lupire [3 hidden]5 mins ago
Your screensaver should be running as an unprivileged user.
immibis [3 hidden]5 mins ago
Files and directories are just one of many possible abstractions for storing data. You have files and directories. Directories contain files and directories. Your whole device is one big directory. Files are identified by name. There's absolutely no reason to think this is the best possible model.

Here's another: Your device contains a bunch of photos and videos. Photos are identified by looking at them. Videos are identified by a still image, or by playing the video within the video chooser.

Here's another: Your device contains a bunch of apps. Apps can organize their own data however they see fit.

... Microsoft's OLE really was the most well-integrated document-centric desktop we ever got, wasn't it?

itronitron [3 hidden]5 mins ago
> But How Do It Know? The Basic Principles Of Computers For Everyone

https://archive.org/details/jclarkscottbuthowdoitknowthebasi...

jonhohle [3 hidden]5 mins ago
I learned computer architecture using MIPS when MIPS were actually used in things. It was nice then, and is nice now.

I spend a lot of my free time decompiling MIPS assembly, and small functions can be decompiled to matching C by “hand” without needing other tools.

gerdesj [3 hidden]5 mins ago
"I teach the systems class at Montana State,"

You teach and hence deal with this: "Information passed on between generations is diluted"

Is it diluted? No, it isn't. Your job is teaching, and books and computers help us avoid calling you a bard 8) Mind you, being called a bard is not a bad thing!

In the end this is a blog post response to another blog post. I have no idea about how "important" those bloggers are but I smell ... randomly deployed blogging for the sake of it. That's the whole point of a blog anyway. I'm being polite here ...

pyeri [3 hidden]5 mins ago
The IT industry has gotten highly specialised over time. The NAND-to-Tetris style of education helps create computer scientists who know their stuff from the ground up. But tech companies are constantly on the lookout for vocational pragmatists (like "Python Engineer" or "Java Engineer") who could quickly replace another like them. I think that's why the high-level-language-first approach has gotten so popular.
seanmcdirmid [3 hidden]5 mins ago
You only have so many hours to teach in a course; the more kids come in with, the farther you can go. If kids have to be taught what was once taken for granted, something else has to give.
aleph_minus_one [3 hidden]5 mins ago
In many countries, such a problem is solved by "weeding out" a lot of incapable students with brutal exams (in particular, "math for computer scientists" exams are prone to this) at the beginning, in the first semesters - consider it a kind of hard "sink or swim"-style curriculum.
seanmcdirmid [3 hidden]5 mins ago
Actual old-fashioned file systems aren’t really in use anymore, and the computerized versions hide the details; you don’t even get a command prompt on an iPad. I’m pretty sure my 8-year-old has no clue what a file system is yet (I learned at about 7 when my dad brought home an Osborne with pre-DOS CP/M to play around with). Computers don’t require you to know that stuff anymore just to use apps and save/get files.
dismalaf [3 hidden]5 mins ago
The details being hidden on toys doesn't mean they're not relevant to a course training people to be developers...
llm_trw [3 hidden]5 mins ago
I'd strongly suggest having a look at the CARDIAC [0] and MIX [1] computers. Shift operations are pretty important, and having dedicated multiplication/division instructions is also very useful.

The fact that all of these are (or can be) decimal machines makes it much easier for people to understand what's going on.

Jumping to binary before you understand how a computer works is like learning logic in Latin.

[0] https://en.wikipedia.org/wiki/CARDboard_Illustrative_Aid_to_...

[1] https://en.wikipedia.org/wiki/MIX_(abstract_machine)

I've been playing around with a modern version of the CARDIAC; drop me an email if you'd like to collab on testing it on live students.
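(For readers who haven't met one of these machines: below is a hedged TypeScript sketch of a CARDIAC-flavoured decimal accumulator machine. The opcode numbering is invented for the demo rather than the real CARDIAC encoding, but it shows why the decimal format is so traceable: every instruction is just opcode*100 + address, which a student can step through on paper.)

    // Toy decimal accumulator machine, loosely in the spirit of CARDIAC.
    // Memory cells and instructions are plain decimal numbers: opcode*100 + address.
    // Opcodes are invented for this demo: 1=LOAD, 2=ADD, 3=STORE, 9=HALT.
    function run(memory: number[]): number[] {
      let acc = 0; // the single accumulator
      let pc = 0;  // program counter
      while (true) {
        const instr = memory[pc++];
        const opcode = Math.floor(instr / 100);
        const addr = instr % 100;
        if (opcode === 1) acc = memory[addr];        // LOAD addr
        else if (opcode === 2) acc += memory[addr];  // ADD addr
        else if (opcode === 3) memory[addr] = acc;   // STORE addr
        else if (opcode === 9) return memory;        // HALT
        else throw new Error(`bad opcode ${opcode} at cell ${pc - 1}`);
      }
    }

    // Program: load cell 10, add cell 11, store the result in cell 12, halt.
    const mem = [110, 211, 312, 900, ...Array(6).fill(0), 7, 35, 0];
    console.log(run(mem)[12]); // 42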

ixtli [3 hidden]5 mins ago
I got this exact style of education in UMass Amherst's 4-year CS undergrad degree program from 2005 to 2010 and it absolutely made me the engineer I am today. Would recommend, 10/10.
ignoramous [3 hidden]5 mins ago
> When I look at modern 64-bit architecture books that get suggested for me to teach with, I just laugh

Why I recommend Bryant/O'Hallaron's Computer Systems: A Programmer's Perspective (for comparch) to newbies, as coding is what most folks are likely to be doing early on.

https://books.google.com/books/about/Computer_Systems.html?i...

userbinator [3 hidden]5 mins ago
> but I really think we need to be thinking hard about a NAND-to-Tetris style education that starts in high school for our technical students.

I suggest Petzold's Code as a good example of a book that's great at teaching the bottom-up approach.

mp05 [3 hidden]5 mins ago
> I teach the systems class at Montana State, where we go from transistors up to a real computing system, and I have students who don't understand what a file system really is when they start my class

Admittedly I am an old grouch, but I stopped having any expectations of the current generation of "college kids".

Incidentally, I'm at Montana State getting a master's in IE, and I deal daily with this one PhD student who has demonstrated an inability to perform a simple partial derivative, which you'd think is a pretty useful skill given the subject matter. Hell, last semester in a 400-level math course, one of the students didn't understand how to add two matrices, I kid you not. It is odd that a senior in CS wouldn't know what a file system is, but that seems rather quaint compared to some of the wild bullshit I've encountered here.

My first stint in university in the 2000s felt a lot different than this, and it's a bit depressing. But man, I feel just great about my prospects in the job market next spring.

lo_zamoyski [3 hidden]5 mins ago
This depends on the specialization. Computer science vs. computer or electronics engineering, for example.

Computer science, despite the misnomer, is not about computers, even if computers are an indispensable tool in the field. Computer science is all about abstraction and language modeling of a domain and their application; the computing technology of the tool, however important practically, is an implementation detail. Just as it pays for the astronomer to understand how to operate his telescope in due proportion to his needs as an astronomer, it pays for the computer science person to understand how to operate his computer to the degree that he needs to. But it is a total mistake to place the computer at the center of the universe and treat it as the point of departure for what computer science is really about. This muddles things and is a source of much historical confusion in the field, and in fact, this discussion.

In fact, even the language of "low-level" programming or whatever is abstraction and language. It is simply the language of the computing device that must be used to simulate the abstractions of the domain of discourse.

louthy [3 hidden]5 mins ago
I like this description a lot, but I think it needs a caveat or two. Most people writing code today are not computer scientists and don’t understand many of the mathematical fundamentals.

I’ve always thought of programming as 1 part science, 1 part craft, and 1 part art. The percentages of those parts vary depending on the job. There may be 0% science in some roles (outside of writing functions). I think the vast majority of programmers are firmly in the craft camp.

mouse_ [3 hidden]5 mins ago
Thanks for your post. I'm always on the hunt for this sort of gritty educational material. The NAND-to-Tetris first-principles book (The Elements of Computing Systems, MIT Press) seems right up my alley.
makerdiety [3 hidden]5 mins ago
The problem is, your best minds, whom you regard as being "technical," ironically simply don't have the pedagogical and epistemological axioms needed for possessing the sensibility that can conceptualize and eventually abstract away the fundamental principles of the technology civilization presently enjoys. Not just young students: so too are lacking the experts, leaders, teachers, and supposedly prolific and employed engineers living within the cosmopolitan nationalist tribal world-system that is none other than globalist liberal democracy. In much simpler words, the communal program of regressing education toward the mean, where the average human can apprehend the secret physics of the universe, always runs into the barrier that an average disposition just can't cut it in the progression of science that isn't mundane or trivial. You said you laugh at the idea of expecting the children of your community to understand and leverage an understanding of things like Intel's latest microprocessors and a web browser front-end framework like Angular? Why, that's the bare minimum needed for surviving the complex world that is arriving on the scene.

Humans are physiologically incapable of doing or understanding anything significant. They inflate their status to be stuff like anything even remotely central to the universe, which is the definition of the anthropocentric. While in actuality reality can and will go on without humanity. Regardless of how life gets falsified, simplified, and dumbed down for the slow kids of the universe that is humanist society. As if the constant string of inconsistencies in quantum physics wasn't proof enough of the reliable ineffectiveness of mathematics as well as science in achieving knowledge.

mouse_ [3 hidden]5 mins ago
> Humans are physiologically incapable of doing or understanding anything significant.

Absolutely terrible take -- strong subjectivity and defeatist sentiment.

> While in actuality reality can and will go on without humanity.

Sure but in that case you won't be here to consider it. There's no considering such a situation, so there's no use considering such a situation. Push onwards!

> As if the constant string of inconsistencies in quantum physics wasn't proof enough of the ineffectiveness of mathematics as well as science in achieving knowledge.

We're never going to understand everything about everything. That is half the fun! How you got from there to "Humans are physiologically incapable of doing or understanding anything significant" is beyond me.

Stop paying attention to quantum mechanics, and start paying attention to what's important to you. The kinds of things that made you truly happy as a child. Make model train sets or 7-day roguelikes. Make box fortresses for your cats. Make your neighbors' lives a little bit better. Talk to a neighbor you've never talked to, about the weather or something.

makerdiety [3 hidden]5 mins ago
The anthropocentric are quite egocentric, you've just demonstrated, yes? Because everything you said is based on either the assumption or the incorrect sincere belief that I am a human being. What if I was not a human being and instead an artificial intelligence? Or at least a cyborg or a mutant?

And what if I derived pleasure from mastering quantum mechanics, like some sort of strange alien that, for some, foreign, reason, had a huge fancy for physics that touches deep down into interesting rabbit holes? Not only that alternative preference, but what if I had a situation where comprehensive knowledge was a requirement, rather than an optional leisure in a sea of other freedoms? Surely, then, holisticism or completeness would be a priority for the alternative scientist and engineer.

We can only conclude from our social interaction something like a physical difference between the anthropocentric and the transhumanist, at the least. Which leads to questions of what are the properties of this division. And why it exists. To get the ball rolling in some neo-globalist societal engineering that is going to be very awesome.

mkoubaa [3 hidden]5 mins ago
We can't train every engineer with the goal of making them qualified to be the CTO of a complex systems engineering organization. People who end up in those roles learn what they need to know themselves over time, anyways.
greybox [3 hidden]5 mins ago
You make some very good points here. I've watched the talk too and criticism of it is important.

I have to say, though, Blow is right when he says: "you can't just draw pixels to the screen"

I am a game engine programmer at a 'medium sized' games company and it is becoming VERY difficult to hire anyone to work on graphics code. DX12 (and other APIs of the same generation) is a massive step up from what the previous generation (DX11) used to demand of the programmer. Doing anything at all with these APIs is a massive task and, by Microsoft's own admission (I can't find the quote in the docs anymore, citation needed), DX12 is extremely difficult to learn without experience with previous graphics APIs.

I see a lot of people using the argument: "but these APIs are only for developers that want to really push the limits of what the graphics card can do, and want to implement very low level optimizations." and that's partially true. But it's now the industry standard and it's nigh-on unteachable to anyone without previous experience.

Unless something changes, the hiring pool is going to continue to decrease in size.

pjfin123 [3 hidden]5 mins ago
Blow's talk made me realize I'm not crazy when I get frustrated about basic things being incredibly difficult. Drawing a button to the screen when you want to build a software application has become so difficult that most people just use a progressive web app which is 100x slower than is possible. Are the best options for GUI applications in 2025 really Java Swing and Qt?
coffeeaddict1 [3 hidden]5 mins ago
I agree with your point, but DX12 went in the opposite direction of abstraction: it's a much lower level API than the highly abstracted OpenGL.
ignoramous [3 hidden]5 mins ago
> Unless something changes, the hiring pool is going to continue to decrease in size.

Or, we'll see the return of "trainees"?

harrall [3 hidden]5 mins ago
If an older web developer rants about abstraction, they will target React developers.

If a Python dev rants about abstraction, they will target older web developers.

If a C++ application dev rants about abstractions, they will target Python developers.

If a firmware dev rants about abstractions, they will target application developers.

If an electrical engineer rants about abstractions, they will target firmware developers.

Drawing the line for “excessive abstraction” based on what you personally know, and calling everything past it “killing civilization”, is quite a take.

wredcoll [3 hidden]5 mins ago
God, you're right, this is almost as bad as those "well chemistry is just applied physics and physics is just applied mathematics so math is the best" nonsense that shows up every so often.
MonkeyClub [3 hidden]5 mins ago
... And math is just applied philosophy, and philosophy is just applied wine drinking.

So let's drink some wine and be merry.

ilrwbwrkhv [3 hidden]5 mins ago
I think JavaScript on the server, React, and these things have really made the web a mess of software development compared to how little it actually does.

I know for a fact a bunch of kids now do not even know that HTML is what gets rendered in the browser. They think that React is itself what browsers render.

Not to mention the absolute idiot of a CEO of Vercel who thinks React is the Linux kernel of development.

the__alchemist [3 hidden]5 mins ago
What do you mean by "HTML is what gets rendered by the browser"? In the context of using JavaScript to modify the DOM.

My understanding is that HTML is an input to the browser, which is translated to the DOM. (Which is then translated to painting the screen, handling inputs etc.) This is an important distinction in context of your point. React (Or any VDOM or similar JS lib) isn't creating HTML; it's creating a set of DOM manipulation instructions in JS.
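(A minimal browser-side sketch of that distinction, with made-up element ids: the first statement hands the browser HTML text to parse, while the rest manipulates DOM nodes directly, which is closer to what React ends up issuing after diffing its virtual DOM.)

    // An HTML string: the browser parses it once into DOM nodes.
    document.body.innerHTML = "<button id='save'>Save</button>";

    // Direct DOM manipulation: no HTML text involved at any point.
    const btn = document.createElement("button");
    btn.id = "save2";
    btn.textContent = "Save";
    btn.addEventListener("click", () => console.log("clicked"));
    document.body.appendChild(btn);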

gherkinnn [3 hidden]5 mins ago
What an odd claim to make, but he did.

https://news.ycombinator.com/item?id=42824720

I have been around for long enough to remember the vanilla JS, jQuery, Knockout, and Angular 1 days, and it always came with a baseline mess.

React (sometimes just JSX) can be used sensibly.

Instead, I blame VC-backed tooling like Vercel, Next, Apollo, Prisma, and all the other rubbish Webdevfluencers are paid to fill the web with.

Come to think of it, every part of building software is bloated. From Notion boards to questionable DB choices.

voidr [3 hidden]5 mins ago
I agree with you that it's awful that a lot of young developers wouldn't be able to program without React.

I would like to add, as someone who has no problems working without any library, that the DOM is one of the worst APIs ever invented by humankind and that "reactive programming" is a superior model to what we used to do before.

That being said:

- NextJS rolled back years of tooling improvements; it's much slower than Vite.

- I just statically built a NextJS site, and a page that has no interactivity downloads 100KB of JS to do nothing with it.

- Facebook decided to make a "compiler" for React instead of just not pointlessly re-rendering components by default.

- Compared to Preact, which is almost a drop-in replacement, React is huge! This shows how little Facebook cares.

__MatrixMan__ [3 hidden]5 mins ago
That was the whole point of React, right? To create something that wouldn't work at all without JavaScript enabled and would be enough of a mess that Facebook could effectively hide their malware in it.

I think many of Blow's points are good, but that he overlooks that much of the degradation isn't a facet of some kind of generational drift or information entropy but is straight up malevolence on the part of people making the decisions.

gardenhedge [3 hidden]5 mins ago
What malware has Facebook hidden in React?
piperswe [3 hidden]5 mins ago
I think they were referring to spyware hidden in Facebook's JavaScript, which uses React
__MatrixMan__ [3 hidden]5 mins ago
Yes. Particularly it's those little badges that let you share some content or other on the social media site of your choice. I'm under the impression that whether you click them or not, they are phoning home to Facebook about your behavior.

So it's not hidden in React itself but rather in the constructed-on-page-load-and-hard-to-disable-just-parts-of space which React creates.

I'll confess to having merely read about this skulduggery and not having analyzed it myself. But it certainly explains why Facebook open-sourced React and also why those things are everywhere. Is there a better explanation?

booleandilemma [3 hidden]5 mins ago
No one knows, it's been hidden til this day
Swizec [3 hidden]5 mins ago
> They think that React is itself what browsers render

My kingdom for native JSX in the browser. That would be awesome.

Something similar to how we have WebGL and Canvas, but for pure JavaScript UI without the extra step of DOM reconciliation.

There’s a small (and growing) cohort of people working on rendering React Native directly on canvas or webgl (I think). Supposedly gives you super smooth UX, but I haven’t knowingly tried it yet.

ninkendo [3 hidden]5 mins ago
> rendering React Native directly on canvas or webgl

I just threw up in my mouth a little. I can’t wait to:

- not be able to double click to highlight text a word at a time, because the developers of the “super smooth UX” didn’t know that’s what typical text does.

- or triple click to highlight a paragraph at a time

- or have the standard menu available when I highlight text (which I often use to look up words, etc)

- or have text editing support any of my OS’s key bindings, like ctrl-left to move the caret one word, ctrl-shift left to highlight in the process, etc etc

- or any one of the hundreds upon hundreds of common control behaviors, accessibility behaviors, system-wide settings on text behaviors, etc etc etc be respected

Of course if they’re anything like the Flutter folks, they’ll look at every one of these things and say “well, I guess we gotta implement all that” rather than understanding the basic fact that common UI elements offered by the OS should actually be reused, not reimplemented poorly.

I really worry about what software will look like 20 years from now. At this rate we’re just going to forget every thing we learned about how it should behave.

caspper69 [3 hidden]5 mins ago
Don't worry, they'll figure out a way to compile Skia to WebAssembly and re-link it to the DOM through JS.

Maybe then the circle(jerk) will be complete.

Ugh.

FullGarden_S [3 hidden]5 mins ago
oh no. At this point, we just need something that doesn't use the stupid combo of two languages (one markup and one style sheet) for UI and one (totally single-threaded) for UX, just to let people share and view simple text and media with one another. Maybe then we can rid ourselves of constraints like having to stick to JS or a virtual machine like V8 while settling on poor implementations like WebGL and Canvas. WebGPU is still not a thing and it probably won't be anytime soon.

A new web browser or another JS runtime won't be the solution to the current mayhem. What could actually be helpful is an alternative to the "web browser" that operates on an entirely different stack than the currently JIT-overdosed JS engines. But since everybody is well accustomed to and excited about improvements within the current madness (like this comment), adoption of any alternative web-browser-like software will be highly unlikely, even if it were severalfold better at transferring media and rendering graphics with a much simpler approach and high performance. We are officially fo*ked.

rglover [3 hidden]5 mins ago
The lengths that some JS developers go to avoid writing HTML is unbelievable.

JSX is one of those technologies that was clever for what it was, but should have never caught on. Adding another layer of abstraction on top of JSX is the type of behavior at the root of the civilizational collapse argument.

Just because you can doesn't mean you should.

gmueckl [3 hidden]5 mins ago
Just because you can doesn't mean you should in production.

If someone wants to build horrific abominations in their spare time at home, I won't stop them. They might become better engineers through that experience. But don't come to me suggesting we actually use one of these horrors.

raincole [3 hidden]5 mins ago
I don't think it's a very fair criticism. Of all the abstractions over HTML, JSX is the only one that forces you to learn HTML itself. So the reason why JS programmers chose JSX isn't that they refuse to write HTML.
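(Roughly because JSX is only thin sugar over function calls, so the tags and attributes you type are still mostly HTML vocabulary. A hedged sketch of what the classic transform turns it into; the exact output depends on the toolchain and React version.)

    import React from "react";

    // What you write (JSX):
    const card = <div className="card"><a href="/docs">Docs</a></div>;

    // Roughly what the classic JSX transform emits (a sketch, not exact output):
    const cardCompiled = React.createElement(
      "div",
      { className: "card" },
      React.createElement("a", { href: "/docs" }, "Docs")
    );
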
wruza [3 hidden]5 mins ago
Mind you, I’m a former x86 asm and C guy who avoids writing HTML. Web 1 state transfers never made sense to me. It just doesn’t click, I don’t think this way and refuse to do so in the name of not sure what.

And while I despise react for its architectural delusions, I still use a js framework to render my webapps. I also don’t see any bloat or something, they all render instantly on both my pc and my phone.

MonkeyClub [3 hidden]5 mins ago
> I also don’t see any bloat or something, they all render instantly on both my pc and my phone.

That's because both our phones and computers have multicore CPUs running billions of instructions per second per core. So all the bloat gets executed quickly, and we don't get to notice that it's there.

But it's there. And it's dirty.

wruza [3 hidden]5 mins ago
Due to my "roots" I'm capable of painting a "site" fully myself, given only the windowing API and e.g. pango/cairo (too lazy to reserach into skia) and a few utility libraries. I had actually created a fully working gui framework in the past using only drawing and windowing libs.

And I can tell you that a browser is an absolute technological atrocity that is full of binary bloat and practical (as in, not on paper) inefficiencies. An additional layer that generates 2000-or-so {}s per mutating interaction doesn't add much to this. It doesn't add anything at all, unless you're armed with a profiler.

"Browsers are fast and efficient" is a fucking lie. Same for HTML. You're looking for bloat in the wrong places. The only reason you don't realize it is because

both our phones and computers have multicore CPUs running billions of instructions per second per core. So all the bloat gets executed quickly, and we don't get to notice that it's there.

For example, this almost empty, absolutely js-less "Add comment" page I'm writing this comment on takes 50MB (a clean browser run, ALL plugins disabled). That's only the page -- the entire browser takes 450MB.

A full-blown gtk-based backup app that I'm using, with numerous screens and layouts, only uses 14MB, and total 35MB after I click through all of its screens, settings, managers, etc. You can't even fathom how much memory it would take if it was browser-based.

raincole [3 hidden]5 mins ago
> super smooth UX

I think typical front end developers (or the managers they report to) don't really know what smooth UX is. They keep reinventing things like scrolling.

foobiekr [3 hidden]5 mins ago
There's almost no HCI or interaction design in what passes for UX today. They are mostly - probably 90+% of them - visual designers who have memorized some canned responses to use when asked why they aren't doing basic things.

Seriously, anyone who thinks software as a discipline is bad, well, they're right, but they have no appreciation for how appalling "UX" is today vs even 1990.

mhitza [3 hidden]5 mins ago
> There’s a small (and growing) cohort of people working on rendering React Native directly on canvas or webgl (I think)

Sounds terrible for accessibility.

sd9 [3 hidden]5 mins ago
Blow often makes fantastic points about development, and often completely misses the mark.

He’s accomplished great things and has ideas worth listening to - but also plenty of nonsense that’s presented indistinguishably.

I felt quite strongly that the collapse of civilisation talk was one of those pieces of nonsense, and I’ve largely ignored it (despite listening to it twice). I’m grateful to OP for providing a more principled rebuttal.

Don’t even get me started on Casey Muratori, who tries to do the Blow thing but doesn’t even get the good parts right.

dismalaf [3 hidden]5 mins ago
Agreed.

Blow's taken the better part of a decade to make a game where he didn't even need to invent the mechanics and Muratori never finished a game he started a decade ago.

Meanwhile, if you use modern game engines (including stuff like Raylib) you can have a pretty decent result in a weekend game jam session, and you could probably make Blow's Sokoban game in ~6 months or so (especially with a team of ~10).

wanderlust123 [3 hidden]5 mins ago
What do you disagree with Casey Muratori on specifically? I have seen some of his content and he seems pretty humble but opinionated on things he knows about. I also think he did a great job with Handmade Hero.
TheMode [3 hidden]5 mins ago
I personally do not like how his solution boils down to "just learn more" which may be true at an individual level, but not as the general solution to awful software.

You will never be able to force developers worldwide to start writing everything in C/Assembly, or even to care beyond "it performs fine on my machine". Individuals can have fun micro-optimizing their applications, but overall, we have the apps we have because of compromises we find somewhat acceptable.

More likely the solution will be technical, making great/simple/efficient code the path of least resistance.

sarchertech [3 hidden]5 mins ago
Watch some of the intros to his performance aware programming videos. He doesn’t want everyone to use C or Assembly. He also doesn’t want everyone micro-optimizing things.

>compromises we find somewhat acceptable

His entire point is that most developers aren’t actually aware of the compromises they are making. Which is why he calls it “performance aware programming” and not performance first programming.

TheMode [3 hidden]5 mins ago
I have actually watched many of his videos; as an individual I very much like his advice. What I am saying, however, is that this has nothing to do whatsoever with improving software at scale.

But my point still stands: Casey focuses solely on the cultural aspect but completely ignores the technical one. He says that developers became lazy/uninformed, but why did that happen? Why would anything he currently says solve it?

sarchertech [3 hidden]5 mins ago
I don’t think you can blame him for not having a large scale systemic solution to the problem.

Imagine if there were a chef on YouTube telling everyone how bad it is that we are eating overprocessed food, and he made videos showing how easy it is to make your own food at home.

Would it be reasonable to comment to people who share the chef’s message that you don’t like how his cooking videos don’t solve the root problem that processed food is too cheap and tasty?

And here’s the thing: he doesn’t have to solve the whole problem himself. Many developers have no idea that there’s even a problem. If he spreads the message about what’s possible and more developers become “performance aware”, maybe it causes more people to expect performance from their libraries, frameworks, and languages.

Maybe some of these newly performance aware programmers are the next generation of language and library developers and it inspires one of them to create the technological solution you’re hypothesizing.

patrick451 [3 hidden]5 mins ago
The real problem is that we have a broken, anti-performance culture. We have allowed "premature optimization is the root of all evil" to morph into "optimization is the root of all evil". That single quote has done untold damage to the software industry. We don't need to pass laws to force all programmers worldwide to do anything. Fixing our culture will be enough. In my view, that's what Casey is trying to do.
TheMode [3 hidden]5 mins ago
I don't believe this is the root cause: computers got faster, and software only gets optimized to the point where it "runs good enough". I'm calling Wirth's law on it.

"Clean code" is indeed often a bad idea, but you are overestimating the impact. Even software written by people caring very much about performance consume way more than it theoretically should.

Plus, if this was that simple, people would have already rewritten all the bad software.

Your message is exactly the reason why I do not like Casey: he is brainwashing everyone into thinking this is a culture problem. Meanwhile nobody tries to solve it technically.

gmueckl [3 hidden]5 mins ago
The free market is preventing technical solutions. People generally buy based on features first and everything else second. This allows for a precarious situation in the software market: the company producing the most bloat the fastest wins the biggest market share and sees no need to invest in proper fixes. Everyone who cares about software quality too much gets outcompeted almost immediately.

And since software can be rebuilt and replicated at virtually zero cost, there is no intrinsic pressure to keep unit costs down as there is in industry, where it tends to keep physical products simple.

TheMode [3 hidden]5 mins ago
It doesn't have to come from the free market; FOSS is hardly free of awfully slow/unstable software. Nobody has figured out yet how to make writing good software the default/path of least resistance.
patrick451 [3 hidden]5 mins ago
Wirth's law doesn't bolster your point. He observed software is getting slower at a more rapid rate than computers are getting faster. Which is the whole point. We write increasingly slow, increasingly shitty code each year. I read and hear this attitude all the time that is basically "if you optimize something, you're bad at your job, only juniors try to do that". That's a culture problem.

It's frankly insulting you think Casey brainwashed me into this stance, when it's been obvious to me since long before I'd ever heard of him. IDGAF if code is clean or not. I care that Jira can display a new ticket in less than 15 seconds. I care that vscode actually keeps up with the characters I type. None of this software is remotely close to "runs good enough".

lmm [3 hidden]5 mins ago
Jira runs good enough to get idiot managers to pay for it, which is what it's designed for. And, yeah, microoptimising (or micropessimising, who knows since the changes are usually just made on vibes anyway) random bits of code that probably aren't even on any kind of hot path, while compromising maintainability, is something only juniors and people who are bad at their job do. It's easy to forget how common security flaws and outright crashes were in the "good old days" - frankly even today the industry is right to not prioritise performance given how much we struggle with correctness.

A lot of code is slow and could be faster, often much faster. This is more often because of people who thought they should bypass the abstractions and clean code and do something clever and low level than the opposite. Cases where you actually gain performance on a realistic-sized codebase by doing that are essentially nonexistent. The problem isn't too many abstractions, it's using the wrong algorithm or the wrong datastructure (which, sure, sometimes happens in a library or OS layer, but the answer to that isn't to bypass the abstraction, it's to fix it), and that's easier to spot and fix when the code is clean.
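(A toy TypeScript example of the kind of fix being described, where the win comes from picking the right data structure at the same level of abstraction rather than from dropping down a layer; the function names are made up.)

    // O(n^2): for each item, scan the whole array again.
    function hasDuplicateSlow(ids: string[]): boolean {
      return ids.some((id, i) => ids.indexOf(id) !== i);
    }

    // O(n): same abstraction level, just the right data structure.
    function hasDuplicateFast(ids: string[]): boolean {
      const seen = new Set<string>();
      for (const id of ids) {
        if (seen.has(id)) return true;
        seen.add(id);
      }
      return false;
    }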

TheMode [3 hidden]5 mins ago
What I am saying is that this is a natural phenomenon assuming no technical solution. People will tend to optimize their software based on the performance of their hardware.

I completely agree that many apps are horrendously slow, but given that alternatives are hard-pressed to arrive, I can only conclude they are considered "good enough" for our current tech level.

The difficulty involved in rewriting modern apps is one of the reasons I would give for slow software. Can't really complain about the number of independent web browsers when you look at the spec. Ensuring the software we use is easily reimplementable by one or a few developers in a few days would go a long way toward improving performance.

Another reason would be the constant need to rewrite working code, to work on new platforms, to support some new trendy framework, etc. etc. You cannot properly optimize without some sort of stability.

jiggawatts [3 hidden]5 mins ago
Personally I find that Casey presents simple hard facts that rile people up, but at that point they're doing... what? Arguing with facts?

He had a particularly pointed rant about how some ancient developer tools on a 40 MHz computer could keep up with debug single-stepping (repeatedly pressing F10 or whatever), but Visual Studio 2019 on a multi-GHz multi-core monster of a machine can't. It lags behind and is unable to update the screen at a mere 60 Hz! [1]

I have had similar experiences, such as the "new Notepad" taking nearly a minute to open a file that's just 200 MB in size. That's not "a small cost that's worth paying for some huge gain in productivity". No, that's absurdly bad. Hilariously, stupidly, clownshoes bad. But that's the best that Microsoft could do with a year or more of development effort using the "abstractions" of their current-gen GUI toolkits such as WinUI 3.

This is not progress.

[1] The whole video is worth watching end to end, but this moment is just... sad: https://youtu.be/GC-0tCy4P1U?t=1759

jakelazaroff [3 hidden]5 mins ago
If I only spoke Arabic or Russian or Chinese, could I write words in my language on those ancient developer tools? Or would I be limited to ASCII characters?

If I were blind, could the computer read the interface out to me as I navigated around? If I had motor issues, could I use an assistive device to move my cursor?

I'm not saying this excuses everything, but it's easy to point at complexity and say "look how much better things used to be". But a lot of today's complexity is for things that are crucial to some subset of users.

jiggawatts [3 hidden]5 mins ago
> If I only spoke Arabic or Russian or Chinese, could I write words in my language on those ancient developer tools?

For relevant cases, YES!

NT4 was fully Unicode and supported multiple languages simultaneously, including in the dev tools. Windows and all associated Microsoft software (including VS) has had surprisingly good support for this even back in the late 1990s, possibly earlier. I remember seeing in 2001 an Active Directory domain with mixed English, German, and Japanese identifiers for security groups, OU names, user names, file shares, etc... Similarly, back in 2002 I saw a Windows codebase written by Europeans that used at least two non-English languages for comments.

Note that programming languages in general were ASCII only for "reasons", but the GUI designers had good i18n support. Even the command-line error messages were translated!

Linux on the other hand was far behind on this front until very recently, again, for "reasons". You may be thinking back to your experience of Linux limitations, but other operating systems and platforms of the era were Unicode, including MacOS and Solaris.

None of this matters. UTF-8 is not the reason the GUI is slow. Even the UTF-16 encoding used by Windows is just 2x as slow, and only for the text, not any other aspect such as filling pixels, manipulating some GUI object model, or responding synchronously vs asynchronously.

Look at it this way: for a screen full of text, ASCII vs Unicode is a difference of 10 KB vs 20 KB in the volume of data stored. The fonts are bigger, sure, but each character takes the same amount of data to render irrespective of "where" in a larger font the glyph comes from!

> If I were blind, could the computer read the interface out to me as I navigated around?

Text-to-speech is an unrelated issue that has no bearing on debugger single-stepping speed. Windows had accessibility APIs since forever, including voice, it was just bad for reasons to do with hardware computing limitations, not a lack of abstractions.

> If I had motor issues, could I use an assistive device to move my cursor?

That's hardware.

> But a lot of today's complexity is for things that are crucial to some subset of users.

And almost all of it was there, you just may not have been aware of it because you were not disabled and did things in English.

Don't confuse a "lack of hardware capacity" or a "lack of developer budget" with the impact that overusing abstractions has caused.

These things were not a question of insufficient abstractions, but insufficient RAM capacity. A modern Unicode library is very similar to a 20-year-old one, except the lookup tables are bigger. The fonts alone are tens of megabytes, far more than the disk capacity of my first four computers... combined.

Today I have a 4.6 GHz 8-core laptop with 64 GB of memory and I have to wait for every app to open. All of them. For minutes sometimes. MINUTES!

None of this has anything to do with accessibility or multi-lingual text rendering.

jcranmer [3 hidden]5 mins ago
Your comment indicates a pretty poor understanding of why multilingual text rendering is a lot harder than it was in the early 90s. Back then, to display some text on the screen, the steps were: for each character (=byte), use the index to select a small bitmap from a dense array, copy that bitmap to the appropriate position on the screen, advance.

But modern font rendering is: for a span of bytes, first look up which font is going to provide those characters (since fonts aren't expected to contain all of them!). Then run a small program to convert those bytes into a glyph index. The glyph index points to a vector image that needs to be rendered. Next, run a few more small programs to adjust the rendered glyph for its surrounding text (which influences things like kerning spacing) or the characteristics of its display. And then move on to the next character, where you get to repeat the same process again. And if you've got rich text, all of this stuff can get screwy in the middle of the process: https://faultlore.com/blah/text-hates-you/

Indic text rendering, for example, never worked until the mid-2000s, and there's even more complex text rendering scenarios that still aren't fully supported (hello Egyptian hieroglyphs!).
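(For readers who want the shape of that pipeline in code, a deliberately stubbed-out TypeScript sketch follows. Every function name is invented and the bodies are placeholders; real engines such as HarfBuzz, FreeType, DirectWrite, or CoreText do each stage with far more care.)

    // Stages of modern text rendering, as stubs.
    type Glyph = { id: number; x: number; y: number; font: string };

    function pickFontFor(run: string, preferred: string): string {
      // Font fallback: the preferred font may not cover these characters at all.
      return preferred; // stub
    }
    function shape(run: string, font: string): Glyph[] {
      // Shaping: combine characters, apply ligatures, reorder and position marks
      // for scripts like Devanagari, apply kerning.
      return [...run].map((_, i) => ({ id: i, x: i * 10, y: 0, font })); // stub
    }
    function rasterize(glyph: Glyph): Uint8Array {
      // Render the glyph's vector outline (plus hinting) into a bitmap.
      return new Uint8Array(64); // stub
    }

    function drawText(text: string, preferredFont: string): Uint8Array[] {
      const font = pickFontFor(text, preferredFont);
      return shape(text, font).map(rasterize);
    }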

jiggawatts [3 hidden]5 mins ago
Having personally implemented a Unicode text renderer in both DirectX and OpenGL back in 2002 for the Japanese game market, I dare say that chances are that I know a heck of a lot more than you about high performance multilingual text rendering. Don't presume.

You just made the exact same argument that Casey Muratori absolutely demolished very publicly.

The Windows Console dev team swore up and down that it was somehow "impossible" to render a fixed-width text console at more than 2 fps on a modern GPU when there are "many colors" used! This is so absurd on its face that it's difficult to even argue against, because you have to do the equivalent of establishing a common scientific language with someone that thinks the Earth is flat.

Back in the 1980s, fully four decades ago, I saw (and implemented!) colorful ANSI text art animations with a higher framerate on a 33 MHz 486 PC! That is such a miniscule amount of computer power that my iPhone charger cable has more than that built into the plug to help negotiate charging wattage. It's an order of magnitude less than the computer power a single CPU core can gain (or lose) simply from a 1 C degree change in the temperature of my room!

You can't believe how absurdly out-of-whack it is to state that a modern high-end PC would somehow "struggle" with text rendering of any sort!

Here is the developers at Microsoft stating that 2 fps is normal, and it's "too hard" to fix it. "I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation as “extremely simple” somewhat combatively" from: https://github.com/microsoft/terminal/issues/10362#issuecomm...

Here is Casey banging out a more Unicode compliant terminal that looks more correct in two weekends that can sink text at over 1 GB/s and render at 9,000 fps: https://www.youtube.com/watch?v=99dKzubvpKE

PS: This has come up here on HN several times before, and there is always an endless parade of flabbergasted developers -- who have likely never in their lives seriously thought about the performance of their code -- confidently stating that Casey is some kind of game developer wizard who applied some "black art of low-level optimisation". He used a cache. That's it. A glyph cache. Just... don't render the vector art more than you have to. That's the dark art he applied. Now you know the secret too. Use it wisely.
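(For the curious, "he used a glyph cache" amounts to something like this hedged TypeScript sketch, with the expensive rasterizer stubbed out and all names invented: rasterize each (glyph, size) pair at most once, then reuse the cached bitmap on every later frame.)

    // Minimal glyph cache.
    type Bitmap = Uint8Array;
    const cache = new Map<string, Bitmap>();

    function rasterizeGlyph(codePoint: number, sizePx: number): Bitmap {
      // Stand-in for the expensive part: evaluating the vector outline,
      // hinting, anti-aliasing, etc.
      return new Uint8Array(sizePx * sizePx);
    }

    function glyphFor(codePoint: number, sizePx: number): Bitmap {
      const key = `${codePoint}:${sizePx}`;
      let bmp = cache.get(key);
      if (!bmp) {               // miss: pay the rasterization cost once
        bmp = rasterizeGlyph(codePoint, sizePx);
        cache.set(key, bmp);
      }
      return bmp;               // hit: effectively as cheap as a bitmap font
    }

    // A screen of text touches only a few hundred distinct glyphs, so after the
    // first frame nearly every lookup is a hit.
    for (const ch of "hello hello hello") glyphFor(ch.codePointAt(0)!, 16);
    console.log(cache.size); // 5 distinct glyphs for 17 characters drawn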

jcranmer [3 hidden]5 mins ago
> You can't believe how absurdly out-of-whack it is to state that a modern high-end PC would somehow "struggle" with text rendering of any sort!

I never said that.

What I said is that modern text rendering--rendering vector fonts instead of bitmap fonts, having to deal with modern Unicode features and other newer font features, etc.--is a more difficult task than in the days when text rendering meant dealing with bitmap fonts and largely ASCII or precomposed characters (even East Asian multibyte fonts, which still largely omit the fun of ligatures).

My intention was to criticize this part of your comment:

> None of this matters. UTF-8 is not the reason the GUI is slow. Even the UTF-16 encoding used by Windows is just 2x as slow, and only for the text, not any other aspect such as filling pixels, manipulating some GUI object model, or responding synchronously vs asynchronously.

> Look at it this way: for a screen full of text, ASCII vs Unicode is a difference of 10 KB vs 20 KB in the volume of data stored. The fonts are bigger, sure, but each character takes the same amount of data to render irrespective of "where" in a larger font the glyph comes from!

This implies to me that you believed that the primary reason Unicode is slower than ASCII is because it takes twice as much space, which I hope you agree is an absurdly out-of-whack statement, no?

jiggawatts [3 hidden]5 mins ago
> This implies to me that you believed that the primary reason Unicode is slower than ASCII is because it takes twice as much space, which I hope you agree is an absurdly out-of-whack statement, no?

Actually, this precise argument comes up a lot in the debate of UTF-16 vs UTF-8 encodings, and is genuinely a valid point of discussion. When you're transferring gigabytes per second out of a web server farm, through a console pipeline, or into a kernel API call, a factor of two is... a factor of two. Conversely, for some East Asian locales, UTF-16 is more efficient, and for them using UTF-8 is measurably worse.
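(The size trade-off is easy to check directly; this small TypeScript snippet uses only the standard TextEncoder and string length, and the byte counts in the comments are exact for these particular strings.)

    // UTF-8 vs UTF-16 storage for the same text.
    function utf8Bytes(s: string): number {
      return new TextEncoder().encode(s).length;
    }
    function utf16Bytes(s: string): number {
      return s.length * 2; // 2 bytes per code unit (all characters here are in the BMP)
    }

    console.log(utf8Bytes("hello world"), utf16Bytes("hello world"));         // 11 vs 22
    console.log(utf8Bytes("日本語のテキスト"), utf16Bytes("日本語のテキスト")); // 24 vs 16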

The point is that any decent text renderer will cache glyphs into buffers / textures, so once a character has appeared on a screen, additional frames with the same outline present will render almost exactly as fast as in the good old days with bitmap fonts.

Not to mention that Microsoft introduced TrueType in 1992 and OpenType in 1997! These are Windows 3.1 era capabilities, not some magic new thing that has only existed for the last few years.

None of this matters in the grand scheme of things. A factor of two here or there is actually not that big a deal these days. Similarly, rendering a 10x20 pixel representation of maybe a few dozen vector lines is also a largely solved problem and can be done at ludicrous resolutions and framerates by even low-end GPUs.

The fact of the matter is that Casey got about three orders of magnitude more performance than the Windows Console team in two orders of magnitude less time.

Arguing about font capabilities or unicode or whatever is beside the point because his implementation was more Unicode and i18n compliant than the slow implementation!

This always happens with anything to do with performance. Excuses, excuses, and more excuses to try and explain away why a 1,000x faster piece of hardware runs at 1/10th of the effective speed for software that isn't actually that much more capable. Sometimes less.

PS: Back around 2010, my standard test of remote virtual desktop technology (e.g.: Citrix or Microsoft RDP) was to ctrl-scroll a WikiPedia page to zoom text smoothly through various sizes. This forces a glyph cache invalidation and Wiki especially has superscripts and mixed colours, making this even harder. I would typically get about 5-10 fps over a WAN link that at the time was typically 2 Mbps. This is rendered in CPU only(!) on a server, compressed, sent down TCP, decompressed, and then rendered by a fanless thin terminal that cost $200. The Microsoft devs are arguing that it's impossible to do 1/5th of this performance on a gaming GPU more than a decade later. Wat?

jcranmer [3 hidden]5 mins ago
I continue to be mystified as to why you think I am commenting in any way on the quality (or lack thereof) of the Windows Console.
masfuerte [3 hidden]5 mins ago
Single-stepping is moderately complicated. I'm even less impressed with modern editors that can't keep up with typing. My first computer could do that. It had a 2 MHz 6502.
john_the_writer [3 hidden]5 mins ago
I've had the same thing happen with a drawing app. I have an older Android tablet that has a sketch app. It works great. I got a new high-powered tablet because the battery on the old one wouldn't keep a charge. The new one has a sketch app that lags: draw line... wait. Draw line... wait. It's unusable. It has a newer processor and more memory, but I can't draw.
patrick451 [3 hidden]5 mins ago
Muratori's main talking point seems to be that modern software is slow. I think he is 100% correct on this. It's absolutely insane how long it takes Jira to show me a ticket or Slack to switch chat rooms, and that VS Code can't keep up with normal typing speed.
saagarjha [3 hidden]5 mins ago
He's right about software being slow. He's wrong about why they are slow or how to solve the problem.
TheMode [3 hidden]5 mins ago
What is your opinion on the cause and the solution?
saagarjha [3 hidden]5 mins ago
I think it's like asking why poverty exists. There's no one easy answer. We have a lot of different forces, many of them market forces, demand, or incentives, that cause software to be slow. Fixing this will require concerted effort, rewards for making things that are fast, and some sort of accounting for the externalities. Same as any other complex problem.
TheMode [3 hidden]5 mins ago
Aren't you agreeing with Casey by saying this is an effort/cultural problem? He also does say that fast software can sell better.
saagarjha [3 hidden]5 mins ago
I agree with Casey that this is a problem. I disagree with his view that seems to mostly boil down to "let me at them chief".
bena [3 hidden]5 mins ago
Does he though?

It seems like he just makes these broad critical statements then pisses off to continue to do nothing.

I wouldn’t even say he’s accomplished great things. He’s accomplished some decent things.

He’s released two games which barely qualify as games. They’re more like puzzles. You don’t need to play them ever again after you finish the first time. Braid is OK. And The Witness is just Flow. And then after that he’s been working on a programming language for the past decade which he hasn’t released to the public because “it’s not finished”.

He got lucky and made his nut and ever since then has thought of himself as way more talented than he is.

mjr00 [3 hidden]5 mins ago
> He’s released two games which barely qualify as games.

Everyone's entitled to an opinion, but it's worth pointing out that Braid is not only widely considered one of the best games ever made[0], its success was also instrumental in the explosion of indie games in the late 00s/10s. The Witness didn't quite reach the same heights, but it got an 87 on Metacritic and was a financial success.

Even if it's only two games, that's a much stronger resume than most. You can argue that he takes too long to develop games, but other studios also take 8 years to make games and come out with Concord or Dragon Age: Veilguard.

[0] https://en.wikipedia.org/wiki/List_of_video_games_considered...

bluefirebrand [3 hidden]5 mins ago
> Braid is not only widely considered one of the best games ever made[0]

The source here is a Wikipedia article containing "A list of video games that multiple [video game journalists or magazines] have considered to be among the best of all time"

I don't know anyone who is into gaming that takes the opinions of "video game journalists or magazines" seriously anymore. They are so obviously just a marketing extension of the videogame industry that you can't trust anything they say. Look at recent developments with Dragon Age Inquisition. Reviewers gave it great scores, but it flopped hard enough that Bioware had huge layoffs [0]

Braid is a game that people barely remember exists today, and in another 10 years it will be even more obscure. It is not remotely an all time great

> its success was also instrumental in the explosion of indie games in the late 00s/10s

Don't give Braid too much credit here, it caught the indie game wave at the start but the wave was coming either way. The explosion happened at the same time Braid came out, not years later inspired by its success.

[0] https://www.forbes.com/sites/paultassi/2025/02/02/after-drag...

jcranmer [3 hidden]5 mins ago
> Look at recent developments with Dragon Age Inquisition. Reviewers gave it great scores, but it flopped hard enough that Bioware had huge layoffs [0]

First off, it's Dragon Age: Veilguard, not Dragon Age: Inquisition.

Second off, it's had a better release (in terms of sales X days from launch) than literally every other Dragon Age game to date.

While it's true that it underperformed expectations, that's more because the expectations were ludicrous to begin with, and the actual performance can hardly be called a flop.

bena [3 hidden]5 mins ago
I like Braid. I do. But, it’s more of a puzzle than a game. Once you figure out the answer, you’re done. Like Myst.

Games have some element of skill involved and Blow’s products lack that.

And that’s not to knock the products. I like puzzles. And not everything has to be a game. Will Wright has said that he does not make games, he makes toys. Digital toys.

I also think people judge his work on his reputation more than on its merits

olejorgenb [3 hidden]5 mins ago
I'm confused. When did puzzle games stop being games?
becquerel [3 hidden]5 mins ago
I despise Blow but this is a critique that borders on the incoherent. You could just as well say that when you figure out how to hurt enemies without taking damage, you're done with action games.
bena [3 hidden]5 mins ago
Being able to execute the action is part of the challenge. Being able to do it once does not guarantee you will be able to do it again.

Neither Braid nor The Witness nor Myst has that quality. Once you know the solution, it's done; it never changes.

You seem to think being a puzzle is an insult. But it’s not, it’s just not a game.

Wentworth and Ravensburger make decent money publishing puzzles. But neither company tries to say this is how all construction should be done. Which is how Blow comes across to me.

mjr00 [3 hidden]5 mins ago
> I like Braid. I do. But, it’s more of a puzzle than a game. Once you figure out the answer, you’re done. Like Myst.

This criterion is very specific to you. Outer Wilds is also widely considered one of the greatest games of all time, and also one of the least replayable once you've beaten it. Myst too, since you mentioned it.

Besides, Braid is as much a platformer as it is a puzzler... is Super Mario Bros not a game since the levels don't change once you beat them?

yoyohello13 [3 hidden]5 mins ago
I do agree, although I do like him. I find it fun to listen to Blow’s rants and part of me feels like he is right to a degree. It would be nice if programmers knew more about the platforms they are developing on.

But it’s pretty clear that Blow’s process does not make him particularly productive. The dude obviously works insanely hard and is talented. I can’t imagine how much he would be able to produce if he leaned on some abstractions.

torlok [3 hidden]5 mins ago
I think you're getting close to ad hominem. It should be sufficient to point out how little, overall, people like Blow have accomplished. Blow is a hard worker, and yet it took him and his team 8 years to develop The Witness, which, as you pointed out, isn't anything revolutionary. I think there's a weird form of elitism at play here. Working close to the hardware isn't hard. There was a point in my early 20s where I too felt like I was better because I knew how a computer works, or because making games in pure C is harder than in C++. As an adult, this is cringe. I work in embedded, and I can tell you there are enough low-level programmers out there to threaten my job security.
xanthor [3 hidden]5 mins ago
Have you shipped a commercial-quality 3D game engine and game? How long did it take you?
deanCommie [3 hidden]5 mins ago
The Witness isn't revolutionary, but it is crafted with the care of someone who cares about the small details of 3D worlds. Besides being easily one of the deepest/broadest puzzle games of the decade, it is GORGEOUS, and filled to the brim with visual scenes that take your breath away.

To me it's something more akin to the iPod. "No wireless. Less space than a Nomad. Lame." was absolutely a correct way to dismiss it as nothing revolutionary. And yet its perfect, well-rounded craftsmanship WAS revolutionary.

From what it seems, Blow is doing exactly the same thing with his next game too. And there's something admirable about that.

I say this as someone who disagrees with 80% of what he says, but VEHEMENTLY agree with the remaining 20.

torlok [3 hidden]5 mins ago
I'm not dismissing the craftsmanship that went into making The Witness. All I'm saying is that it's cringe to be so critical of other developers when in essence he spends an extraordinary amount of time on his projects, and hasn't shipped anything cutting edge or anything other developers can even use.
bena [3 hidden]5 mins ago
Blow routinely trashes software he’s never had to write.

Single user software that is the sole focused application when it is running. It saves nothing of importance and doesn’t alter any system to any appreciable degree.

You know what's more impressive than The Witness? Any web browser. It has to deal with HTML, CSS, JavaScript, XML, SGML, SVG, etc. An entire runtime environment with a garbage collector. A web browser can run VS Code. It can run Doom. It can be made to run The Witness.

Blow does not remotely acknowledge the complexity needed to make modern software. He treats his incredibly small domain as the pinnacle of the craft.

BirAdam [3 hidden]5 mins ago
There are certainly plenty of issues with the modern software landscape, and I do think too much abstraction is a problem. Yet, the opposite extreme is also bad, and people overly romanticize the past. Not only were crashes and reboots problems, not only did Amiga and such have compatibility problems between hardware versions, but even systems that strove for compatibility suffered from incompatibilities.

The fact is, even on the most unreliable modern system (Windows 11) my computer is far more reliable than any computer I had before about 2010. It can also run software written for Windows 95. That’s a very good run. A computer being usable day to day is better than one that isn’t.

DarkNova6 [3 hidden]5 mins ago
> If we forget low level stuff, civilization will fall apart since we won't be able to keep vital software running.

I argue the other way around. There is too much complexity involved in using low-level systems to model high-level processes. We use the wrong tools and often model our systems in a way that mimics the initial dataflow instead of discovering our domain and building a higher-level understanding.

Complexity kills. And we must do everything in our power to keep complexity to its minimum and avoid accidental complexity.

ignoramous [3 hidden]5 mins ago
> And we must do everything in our power to keep complexity to its minimum and avoid accidental complexity.

We aren't avoiding or minimising complexity by being at higher levels; we're merely being ignorant of it, which is okay, too, until the abstractions leak. For example, folks working in HPC or cybersecurity have no such luxury.

DarkNova6 [3 hidden]5 mins ago
You are right, I should have been more accurate in my depiction. Inherently technological problems require low-level technological solutions. There is no way around that.

But most of the complexity I see arises from just moving data from A to B to automate everyday bureaucratic tasks or enable data analysis. And the needless complexity I encounter in such systems is just mind-boggling.

Currently I am working on a government system which captures tasks for long running procedures such as approving power plants. And the way the devs are treating this system is borderline criminal, acting as if this is just another CRUD app without any structure or architecture.

lmm [3 hidden]5 mins ago
The idea that abstractions have to leak is a myth. We don't need fewer abstractions, just better ones. Abstractions that don't require obscuring the low-level details (e.g. typeclass/trait systems instead of interface/impl ones) help too.
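
To make the typeclass/trait point concrete, here's a minimal Rust sketch (the trait and types are made up for illustration, not taken from any particular codebase). The generic function is monomorphized per concrete type, so the abstraction adds no runtime indirection and hides nothing about what actually executes unless you explicitly opt into dynamic dispatch:

    // A minimal sketch of a trait-based abstraction (names made up for
    // illustration). The generic function is monomorphized per concrete
    // type, so the abstraction costs nothing at runtime.
    trait Checksum {
        fn checksum(&self) -> u32;
    }

    struct Packet {
        payload: Vec<u8>,
    }

    impl Checksum for Packet {
        fn checksum(&self) -> u32 {
            // Simple additive checksum; a real protocol would use a CRC.
            self.payload.iter().map(|&b| b as u32).sum()
        }
    }

    // Static dispatch: this compiles down to a direct call to Packet::checksum.
    fn verify<T: Checksum>(item: &T, expected: u32) -> bool {
        item.checksum() == expected
    }

    fn main() {
        let p = Packet { payload: vec![1, 2, 3] };
        println!("ok = {}", verify(&p, 6));
    }
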
travisgriggs [3 hidden]5 mins ago
Not all simplifications are abstractions. Not all abstractions are simplifications. But the pursuit of simplification is usually what motivates an abstraction. I don't think that abstractions kill software, or civilization for that matter, but ill-begotten abstractions in the name of short-term wins in simplification put a drag on the flexibility, agility, and approachability of either.

Take syntactic sugar in just about any language. There’s usually a breaking point, where you can say the localized gain in simplification for this particular nuance is not worth how complex this tool (language) has become. People don’t make more mistakes in a syntax heavy language because of any particular element, but because using the tool well to solve complex problems just gets difficult (regardless of what a compiler might guarantee you).

I look at the complexity that async and coroutines adds to my experience when programming anything “threaded like” in Kotlin compared to how I deal with the same sorts of problems in Elixir/Erlang and it’s just night and day difference. Both have abstractions/simplifications to the age old problem of parallel/async computing, but one (the former) just multiplies simplicities to end up with something complex again and the other has an abstraction/simplification that Just Works(tm) and really is simple.

rglover [3 hidden]5 mins ago
Ill-considered abstractions, yes. There are a lot of abstractions that you can tell are the first draft or attempt but because of tech's "religion of speed"—and being drunk on hubris—end up shipping (vs being one of several iterations).

Then, if those abstractions are part of a popular project, other people copy them due to mimetics (under the watery banner of "best practices").

Rinse and repeat for ~10-20 years and you have a gigantic mess on your hands. Even worse, in a hyper-social society (ironically, because of the technology), social consensus (for sake of not being found out as an "impostor") around these not-quite-there solutions just propagates them.

FWIW, I love that Jonathan Blow talk and come back to it at least once per year. He doesn't say anything controversial, but deep down, I think a lot of developers know that they're not shipping their best (or guiding younger generations properly), so they get upset or feel called out.

We've just reached a state of culture in which chasing novelty has become commonplace (or in some cases, celebrated). Well-considered solutions used to be a cultural standard, whereas now anything that's new (whether or not it's actually good) has become the standard.

We can navel gaze and nitpick the details of Blow's argument all we want, but the evidence is front and center—everywhere. And yes, over a long enough horizon, that can lead to civilizational collapse (and when you consider the amount of broken stuff in the world, arguably, already is).

dostick [3 hidden]5 mins ago
It appears that the author is from that newer generation and completely misses the point because he just doesn't know. Ironically, the article is an example of what Blow was talking about. Something similar happens when I talk about how Figma is destroying the design world on unprecedented scale by normalising bad UX, UI and product management found in Figma itself, and get responses from younger designers that everything is great, completely baffled. You have all that knowledge because you grew up in that environment. They didn't, and it's not likely they can learn the equivalent of that culture and experience anywhere.
xandrius [3 hidden]5 mins ago
Could you expand on how Figma is destroying the design world?
low_tech_punk [3 hidden]5 mins ago
I'm curious too. I want to understand "Figma is destroying the design world on unprecedented scale by normalising bad UX, UI and product management found in Figma itself".
senordevnyc [3 hidden]5 mins ago
So your rebuttal is that they’re wrong because they’re young and inexperienced? OK, maybe, but what exactly are they wrong about? You haven’t actually added anything to the conversation with this ad hominem attack.
rat87 [3 hidden]5 mins ago
Also, I couldn't find the author's age, but judging by the fact that they say they still use an Amiga, they're probably at least in their 40s (unless they're a huge retro fan).
john_the_writer [3 hidden]5 mins ago
They also talked about C64s like they lived through it. That brings them into their 50s. I got the feeling they've been at it a long time.
gtsop [3 hidden]5 mins ago
Which points do you think the author has missed?
jibal [3 hidden]5 mins ago
"It appears that author is from that newer generation"

No it doesn't. Nothing else you wrote here is true either.

rat87 [3 hidden]5 mins ago
He demolished Blow's argument, so I don't think it's important to bring up age, but it does seem like he is likely at least in his mid-forties. The Amiga was popular in the late 80s.

> "The argument that software is advancing is obviously false" I still regularly use my beloved Amiga computers. A couple of weeks ago, I was copying a file to an Amiga's hard drive when the computer suddenly decided to crash on me. Not because I did something wrong, but because old home computer OS:es weren't very stable. This resulted in a corrupted hard drive partition and the OS failed to re-validate the file system. My only option was, in the end, to re-format the partition.

I don't even know much about design, but I can tell your claims about Figma are totally wrong, in a similar fashion to Blow's claims. It's the nostalgia talking. You're forgetting that user interfaces have always had a lot of crap, just like software, just like everything. The positives of the old cream of the crop are remembered, while all the crap, and even all the failings of the well-designed things, are forgotten.

foobiekr [3 hidden]5 mins ago
Figma absolutely sucks.

It is slow, but more importantly, by encouraging UX designers to work in high-fidelity designs where live changes are frustrating and feedback is entirely incorporated as a bunch of mostly-hidden comments, in a sandbox that makes it hard for reviewers to follow up on or reference back to, all it's done is make everything worse.

alecco [3 hidden]5 mins ago
The problem is confusing the collapse of Western civilization with the collapse of civilization per se. China is very happy to pick up the pieces. In spite of their cultural weaknesses, on this point they do teach and learn how to do everything from scratch.

The West as a dominant civilization is absolutely collapsing. It's just a matter of how long will it take. I hope China learns from our mistakes and has the will to fix their cultural shortcomings.

SirHumphrey [3 hidden]5 mins ago
But is it though? It has been claimed for many years that the West is falling (at least from 2008 onwards), and yet more people speak English than ever before, American culture (at least anecdotally) is more present than ever, and American tech corporations are 8 of the 10 most valuable companies in the world.

I guess that people always feel that the end is nigh.

alecco [3 hidden]5 mins ago
Latin was used for centuries after the Roman empire was gone.

The West is de-industrializing very fast. And a good chunk of the services side is shifting to India and other cheaper countries, and to AI. The West is on its last legs. The final nails in the coffin will come when China achieves good-enough quality at cheap prices in: 1) chips (5-10 years away) and 2) aerospace (8-12 years away?). China is investing heavily in these and acquiring the knowledge (wink-wink).

illiac786 [3 hidden]5 mins ago
Abstraction is a way to move work onto machines to make us humans more productive (at least that's the plan). Machines do an insane amount of useless (okay, highly inefficient) work to allow us to save some brain effort. Sure, some abstractions are counterproductive and add to complexity rather than making things easier for us. I'd argue that's a minority, though.
bionhoward [3 hidden]5 mins ago
One reason I like Rust is that it lets me replace a lot of high-level Python stuff with faster and more correct code, while also opening access to a new world of low-level C stuff I never had an opportunity to learn about. It's been super fun to tinker with and I look forward to keep working with it!

Definitely good to broaden our horizons but also crucial to maintain focus…there is so much to learn, how can we balance the joy and productivity of software with the strategic imperative to build locally grown hardware?

low_tech_punk [3 hidden]5 mins ago
I think this post is related: We are destroying software (https://antirez.com/news/145)

I don't disagree that software engineering is not as rigorous as it was. But software has also spread into a much wider area, allowing many more people to participate as programmers, all thanks to abstractions.

I'm drawn to the romantic/pure/spartan aspect of low level programming too but I also need to make a living by solving problems in practical and scrappy ways.

cyberax [3 hidden]5 mins ago
> I don't disagree that software engineering is not as rigorous as it was.

Such utter BS. Modern run-of-the-mill software would run stadiums around the software of the '80s or '90s.

Version tracking, continuous integration, and even _bug_ tracking were not at all common in the '90s.

Testing? That's so far ahead of the state of the art of even the 2000s that it's not even funny. Even basic unit testing was a huge step forward in the 2000s. And fuzz testing, ubiquitous sanitizers, and time-travel debugging are all new to mainstream development.

And that's not all. We're now not just talking about formal methods, but applying them in practice (see: WUFFS). And we now have safety-oriented low-level languages like Rust (which is barely 10 years old).
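
As a trivial illustration of how low the bar used to be, here is a minimal sketch of "basic unit testing" in today's Rust (the function under test and its values are made up for illustration):

    // Hypothetical function under test, made up for illustration.
    fn parse_port(s: &str) -> Option<u16> {
        s.trim().parse::<u16>().ok()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn parses_valid_port() {
            assert_eq!(parse_port(" 8080 "), Some(8080));
        }

        #[test]
        fn rejects_out_of_range() {
            // 70000 doesn't fit in a u16, so parsing fails.
            assert_eq!(parse_port("70000"), None);
        }
    }

Run it with `cargo test` and it's part of CI for free; fuzzers and sanitizers build on the same habit of making behaviour checkable.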

natmaka [3 hidden]5 mins ago
IMHO part of the problem stems from teaching without carefully stating the nature of what is explained. A teacher should check that the audience reasonably knows what 'convention' means, how it differs from 'abstraction', 'truth', 'theory', 'measured fact'...

Bonus: explicitly recognizing that many terms (along with other conventions) aren't adequate. According to the knowledge of those who coined them, they appeared adequate at the time; replacing them may cause confusion, would orphan existing knowledge and writings, and isn't perfectly effective anyway, since the new convention may prove inadequate later. I used to say that we may consider them as tributes to the wonderful people who coined them.

sebastianconcpt [3 hidden]5 mins ago
This is a very interesting take, but there is an underlying issue with software, an abstraction that isn't at the library/system level: managerialism, the processualizing of all the things.

Rudyard Lynch has a provocative interesting take on it.

userbinator [3 hidden]5 mins ago
Abstraction isn't the problem, overabstraction is; and I suspect the reason the latter is so common is because people don't actually understand why and when abstraction is useful (and likewise, when it isn't), so they blindly apply it whenever they can, with the misguided notion that since abstraction is somehow a good thing, the more abstract something is, the better it is.
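
A made-up Rust sketch of the difference, to be concrete (nothing here is from a real codebase): the first version wraps a one-liner in machinery that earns nothing until there is at least a second implementation; the second is the code the problem actually called for.

    // Over-abstracted version (made up for illustration): a trait, a config
    // struct and dynamic dispatch... to prepend a greeting to a name.
    trait Greeter {
        fn greet(&self, name: &str) -> String;
    }

    struct PrefixGreeter {
        prefix: String,
    }

    impl Greeter for PrefixGreeter {
        fn greet(&self, name: &str) -> String {
            format!("{}{}", self.prefix, name)
        }
    }

    fn run(greeter: &dyn Greeter, name: &str) -> String {
        greeter.greet(name)
    }

    // The direct version, which is all the problem ever asked for.
    fn greet(name: &str) -> String {
        format!("Hello, {}", name)
    }

    fn main() {
        let g = PrefixGreeter { prefix: "Hello, ".to_string() };
        assert_eq!(run(&g, "world"), greet("world"));
        println!("{}", greet("world"));
    }
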
jppope [3 hidden]5 mins ago
In a sense, I wrote an article about this exact thing: https://jonpauluritis.com/articles/4000years/

While it's fun to talk about, there really isn't a danger.

Sparkyte [3 hidden]5 mins ago
In a sense, abstraction does. You stop learning the fundamentals of how something works. After long enough, no one knows how the fundamentals work, and the company fails.
jibal [3 hidden]5 mins ago
As the article shows, these ridiculous claims are false.
torlok [3 hidden]5 mins ago
As somebody who works in embedded and does kernel programming and low-level networking, I wish any of Blow's fear mongering were true. It would do wonders for my feeling of job security and self-importance.
jrm4 [3 hidden]5 mins ago
This feels like plumbers thinking that using some weird type of toilet fitting is killing civilization.
ninetyninenine [3 hidden]5 mins ago
No, it's not. Software is killing the US, and not because it abstracts low-level stuff. It's because it abstracts reality in such a way that the US is becoming a nation of software engineers ignorant even of how to build the stuff they program. We've moved up the stack, and moving up the stack requires people knowledgeable about the bottom.

One can’t exist without the other so the world isn’t getting destroyed. China is handling the bottom of the stack. And by understanding the bottom of the stack they also become good with the top.

I used to hear shit like "China only knows how to manufacture stuff; the US is good at design." And I keep thinking: well, if you submit an entire design to China to manufacture, don't you think they will figure out how to design? Knowing how to design is easy. Manufacturing, turning that design into reality, is hard.

Software isn’t killing civilization. It’s killing the US and that may seem like the world if you lived here all your life.

scarface_74 [3 hidden]5 mins ago
You realize that the US is still designing the most advanced processors and technology. Now if you argued that the US has forgotten how to manufacture anything, I would agree with you.
ninetyninenine [3 hidden]5 mins ago
No. Taiwan and Korea are on par for design.
scarface_74 [3 hidden]5 mins ago
Taiwan doesn't "design" any leading-edge chips; they manufacture them. Where are the great chips from Korea? The top Android phones are four years behind iPhones as far as performance goes, including those from Samsung.
sampullman [3 hidden]5 mins ago
Don't discount MediaTek, and four years is an exaggeration.
scarface_74 [3 hidden]5 mins ago
My bad - only 3 years. The fastest Samsung is around iPhone 12 and iPhone 13.

https://browser.geekbench.com/mobile-benchmarks

MediaTek is definitely not releasing high-end chips. For desktop ARM, the only ones that come close are Qualcomm's chips.

ninetyninenine [3 hidden]5 mins ago
TSMC realizes the designs. They aren't designing only because they choose not to. Basically everything you see around you, Asia does better. I think the last things we have are military technology and passenger planes.

As for software, we're only slightly better than Asia.

scarface_74 [3 hidden]5 mins ago
If that were the case, why wouldn’t they do that?

Does Foxconn also “realize the design” of the iPhone?

ninetyninenine [3 hidden]5 mins ago
Because their business model is to focus on manufacturing so that they don't compete with their customers. This gives them access to Nvidia and AMD with no conflict of interest. This is not something I'm just speculating about and writing here as a counterpoint; this is a well-known, specific strategic business decision made by TSMC.

I'm less familiar with Foxconn, so I can't comment on that. But I wouldn't be surprised if they had a similar strategy. Would Apple really outsource to them if they were making their own competing phones? Fuck no.

That being said, you know that all phone technology comes from abroad, right? Most design and manufacturing come from Asia. It is impossible for America alone to support the full stack of this technology. But Asia can.

scarface_74 [3 hidden]5 mins ago
Foxconn is actually an ODM too. They design and manufacture their own phones.

https://nuumobile.com/android-oems-vs-odms-5-things-you-shou...

ninetyninenine [3 hidden]5 mins ago
Doubt those phones compete with apple.
paul_h [3 hidden]5 mins ago
Killing, or just another risk? As far back as the '80s or '70s there was the idea that losing ball-bearing manufacturing capability was a risk to society and civilization. I'd think the lack of a systemic defence against amplified mis- and disinformation is much higher on the list, with the application of another software abstraction (Merkle trees) being part of the solution. Well, defence of the rule of law perhaps, and less about the layered technological aspects of civilization.
gtsop [3 hidden]5 mins ago
It is unfortunate that someone needs to pick apart a flawed thesis in such detail (as the author did with Blow). The pure empiricist is just as detached from reality as the pure theorist, and as such Blow is making up arguments just because they fit his experience, cherry-picking examples that fit his rants and promoting the exception to be the rule.
keninorlando [3 hidden]5 mins ago
I miss assembler and maximizing code efficiency to juggle 2k of ram.
chgs [3 hidden]5 mins ago
Plenty of hand-written assembler in core programs like ffmpeg. Sure, most people glue together tedious JavaScript frameworks, but there are still people doing real work.
saagarjha [3 hidden]5 mins ago
…which isn't just writing assembly for ffmpeg.
yoyohello13 [3 hidden]5 mins ago
Not sure if you’re being sarcastic, but I do unironically miss this.
scotty79 [3 hidden]5 mins ago
> Abstraction fosters ignorance about low level programming.

> If we forget low level stuff, civilization will fall apart since we won't be able to keep vital software running.

There's barely anything to remember. Most of the low-level stuff did a 180 in the last three decades. RAM is slow now, compute is fast. Drives can be almost as fast as RAM. Sometimes it's better to do more work rather than less to make code branchless. It's best if you can split your task into thousands of micro-threads and run them on a GPU.
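
For the branchless point, a minimal made-up Rust sketch of the idea: the second version does the comparison and addition on every element instead of sometimes skipping them, trading a little extra work for the absence of an unpredictable branch in the hot loop. (Compilers often make this transformation themselves, so treat it as an illustration of the principle and measure before relying on it.)

    // Branchy version: the conditional increment can suffer from branch
    // mispredictions when the data is unpredictable.
    fn count_over_branchy(data: &[i32], threshold: i32) -> usize {
        let mut count = 0;
        for &x in data {
            if x > threshold {
                count += 1;
            }
        }
        count
    }

    // Branchless version: always do the work, adding 0 or 1 unconditionally.
    fn count_over_branchless(data: &[i32], threshold: i32) -> usize {
        data.iter().map(|&x| (x > threshold) as usize).sum()
    }

    fn main() {
        let data = vec![3, -1, 7, 0, 12];
        assert_eq!(count_over_branchy(&data, 2), count_over_branchless(&data, 2));
        println!("{}", count_over_branchless(&data, 2));
    }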

tolciho [3 hidden]5 mins ago
> I'm not a historian and will not comment on this first part of the talk. It doesn't matter much,

Okay.

> What is robust? ... Is it the multi-year uptimes of a plethora of ...

Big uptime systems are dubious. Probably a lack of kernel patches, hardware patches, and who knows if, on reboot, the system will actually boot and start the relevant services correctly. A bank once had their mainframe fail, and they were flailing around for a good long while, possibly because it had been about a decade since their last failure and everyone forgot what to do, or maybe Bob had retired. Or how about that host that boots four years into the future, and now various things are quite broken? There was NTP, but an unrelated change had broken that on some firewall. "Normal Accidents" are somehow a thing in complex systems, as are black swan events. Quite possibly either or both were involved in the late bronze age whateveritwas, but naturally history doesn't matter much.

> Oh, and garbage collection and functional programming aren't new abstractions. Lisp did both in the late 1950s

PAIP (Norvig) recounts that the garbage collection was so bad it was turned off and the LISP machines were left to run until they ran out of memory, at which point they were rebooted. I guess this is a point for improved robustness in certain areas, though there are probably still "*/5 * * * * reboot-something" type cron jobs out there for services that leak too much memory. No, management did not grant time to fix that service when I last had to put in such a five-minute reboot script. Many don't get such an intimate view of the rusty bowels of the internet.

> open up a Unix-type command line in Linux/MacOS/*BSD/WSL, type "ed" at the prompt and see how far you get with your text editing

Some wacky systems do not install ed, or link it to nano or some other nonsense, so you may need to actually install ed to get the standard editor. If you happen to be stuck on such a system, `busybox vi` is tolerable (vim gussied up with color spam is not, especially the unreadable blue on the black console), though learning enough ed might be good if you, hypothetically, ever have to fix an old system over a serial line at 3 AM. There isn't much to learn for such cases: "a" to append new lines, "." on a line by itself to end that, "/foo" to search, "c" to change the whole current line (and then the "." thing again), and then "wq" to save. Great for edits where you don't want other folks seeing the contents of, say, /etc/hostname.wg0. Or sometimes a cat of the file should be done first to preserve it in the scrollback buffer, which has saved a non-zero number of configuration files across the internet. Ideally this sort of disaster training should be practiced by some now and then, but that does take time away from other things.

Back to the unloved history thing. A collapse can take a few centuries, which may be a problem given the recent emphasis on the current sprint or how the stock will be doing Tuesday (as opposed to the lease for Hong Kong, though a hundred years is a magnificently short period of time). So a few folks crying wolf or playing Cassandra might be a good thing, to help point out past issues so that maybe future shocks can be made less bad.

And of course one should beware the C people.

foxes [3 hidden]5 mins ago
Billionaires are killing civilisation
knodi [3 hidden]5 mins ago
Why stop here… why not say easy access to clean water is destroying your health? You should walk two miles to the watering hole, carry it back, collect wood for a fire, and boil the water yourself…

\s

yoyohello13 [3 hidden]5 mins ago
Ngl, forcing people to walk two miles for water every day would probably make a significant dent in obesity-related mortality.
danparsonson [3 hidden]5 mins ago
Well indeed - abstraction is a cornerstone of progress, not a hindrance to it.

Most of us will die from dehydration or water-borne disease or hunger if civilisation collapses though!

rat87 [3 hidden]5 mins ago
No

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

Of course not

And if it were, it wouldn't be something C or C++ related, but all the banks and unemployment systems still written in COBOL.