As soon as I finished reading the article, the very first thing that came to mind was Dieter Rams' "10 Principles of Good Design"; I have been following his principles as much as I can, as they match, more or less, those of the UNIX philosophy:
1. Good design is innovative
2. Good design makes a product useful
3. Good design is aesthetic
4. Good design makes a product understandable
5. Good design is unobtrusive
6. Good design is honest
7. Good design is long-lasting
8. Good design is thorough down to the last detail
9. Good design is environmentally-friendly
10. Good design is as little design as possible
musicale [3 hidden]5 mins ago
> they match, more or less, those of UNIX's philosophy
1. Good design is innovative
UNIX innovated by simplifying Multics -
throwing away ring security and PL/I's memory safety features.
Linux innovated by cloning UNIX, giving it away for free,
and avoiding the lawsuit that sidelined BSD.
2. Good design makes a product useful
Yet somehow people managed to use UNIX anyway.
3. Good design is aesthetic
UNIX threw away clear, long-form command forms and kept
short, cryptic abbreviations like "cat" (short for "felis cattus")
and "wc" (short for "toilet").
Its C library helpfully abbreviates "create" as "creat",
because vowels are expensive.
4. Good design makes a product understandable
See #3
5. Good design is unobtrusive
That's why UNIX/Linux enthusiasts spend so much time
configuring their systems rather than using them.
6. Good design is honest
The UNIX name indicates it is missing something
present in Multics. Similarly, "Linux" is the
gender-neutralized form of "Linus".
7. Good design is long-lasting
Like many stubborn diseases, UNIX has proven hard to eradicate.
8. Good design is thorough down to the last detail
UNIX/Linux enthusiasts love using those details
to try to figure out how to get Wi-Fi, Bluetooth,
and GPU support partially working on their laptops.
9. Good design is environmentally-friendly
Linux recycles most of UNIX's bad ideas, and many
of its users/apologists.
10. Good design is as little design as possible
Linux beats UNIX because it wasn't designed at all.
The UNIX Haters Handbook exists because many of us don't worship at the altar of the UNIX church.
Yes, it did some things right, but it also did plenty of things badly; let's not worship it as the epitome of OS design, cloning it all the time without critical thinking.
The fact alone that its creators went on to design Plan 9, Inferno, Alef, Limbo and Go shows that even they moved on to better approaches.
AnonymousPlanet [3 hidden]5 mins ago
The UNIX Hater's Handbook is not about hate against the UNIX philosophy but about frustrations with inconsistencies and idiosyncrasies in UNIX implementations. I have often seen people confusing the idea of UNIX philosophy and various UNIX implementation details (not the implementation of its philosophy but of mundane concepts like printing or the use of /usr), and then using these implementation details as strawman arguments against the philosophy.
pjmlp [3 hidden]5 mins ago
It is a bit of both.
Also worth noting that outside the FOSS circles worshiping UNIX, no one cares about said philosophy, including commercial UNIX vendors.
oneeyedpigeon [3 hidden]5 mins ago
People rarely care about the underlying philosophy of anything - not even government, nowadays. They care about results and, fortunately, Unix still delivers for many.
dennis_jeeves2 [3 hidden]5 mins ago
When it comes to government, they don't even appear to care about the results.
motorest [3 hidden]5 mins ago
> Alone the fact that its creators went on to design Plan 9, Inferno, Alef, Limbo and Go, shows even they moved on to better approaches.
I think you're confusing "different" with "better", and mistaking someone's small, almost personal experiments, implemented as proof-of-concept projects, for actual improvements.
I mean, Plan 9 was designed with a radical push for distributed computing in mind. This is not "better" than UNIX's design goals, just different.
Nevertheless, Plan 9 failed to gain any traction and in practice was pronounced dead around a decade ago. In the meantime, UNIX and Unix-like OSes still dominate the computing world to this day. How does that reflect on your "better approaches" assertion?
The argument about the Go programming language is particularly perplexing. The design goals of Go have nothing to do with the design goals of C. Its designers were very clear that their goal was to put together a high-level programming language and tech stack designed to address Google's specific problems. Those weren't C's design requirements, were they?
> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
Go is basically Limbo in new clothing; Limbo took up the lessons from Alef's design failures.
They could have designed their wannabe C++ replacement in many other ways.
anthk [3 hidden]5 mins ago
You are misinformed. 9front is not dead. Go's roots are Limbo and the ?c compiler suite from plan9.
BTW, go runs on 9front.
iforgotpassword [3 hidden]5 mins ago
Granted, lots of that book is fortunately now obsolete and mostly just good for laughs about the bad old days. And most of its predictions about future innovation never panned out either.
pjmlp [3 hidden]5 mins ago
Unfortunately, many of those criticisms are just as applicable today as when the book was originally published.
cjfd [3 hidden]5 mins ago
Abbreviating "create" as 'creat' is a bit stupid, but it is the kind of quirk that makes me feel at home. The opposite can be found here: https://devblogs.microsoft.com/scripting/weekend-scripter-us... That is a world where, well... I once switched jobs specifically to get out of that world.
ripped_britches [3 hidden]5 mins ago
I don’t know enough about kernel development to agree or disagree but I am thoroughly entertained
blueflow [3 hidden]5 mins ago
The rules are too generic to be useful, because mankind still can't agree on what is "innovative", "useful", "aesthetic", ... and what isn't.
Only rules 7 and 9 are measurable and not purely subjective.
makeitdouble [3 hidden]5 mins ago
And even rule 7 is debatable.
If you design for an ephemeral state, it doesn't make sense to be long lasting.
3D printing a door handle that perfectly fits my hand, my door, the space it moves in and only lasts until I move to another house can be the ultimate perfect design _for me_.
I'd see the same for prosthetic limbs that could evolve as the wearer does (e.g. grows up or ages) or as what they expect from it changes.
rapnie [3 hidden]5 mins ago
Also based on the UNIX philosophy is Dan North's idea of Joyful Coding, which does away with the formal SOLID principles in favor of the more playful CUPID ones: https://cupid.dev
avidiax [3 hidden]5 mins ago
These don't sound like the UNIX philosophy. My impression is that UNIX is more like what's outlined here:
It's primarily just a statement of widely agreed principles everyone has always claimed to follow, even when UNIX was new, and others arguably followed them with more success, e.g.:
> The Unix philosophy emphasizes building simple, compact, clear, modular, and extensible code that can be easily maintained and repurposed by developers other than its creators
Nobody says "we want to build complicated, sprawling, unclear, unmodular and rigid code", so this isn't a statement that sets UNIX apart from any other design. And if we look at the competing space of non-UNIX platforms, we see that others arguably had more success implementing these principles in practice. Microsoft did COM, which is a much more aggressive approach to modularity and composable componentization than UNIX. Apple/NeXT did Objective-C and XPC, which is somewhat similar. Java did portable, documented libraries with extensibility points better than almost any other platform.
Many of the most famous principles written down in 1978 didn't work and UNIX practitioners now do the exact opposite, like:
• "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features" yet the --help for GNU grep is 74 lines long and documents 49 different flags.
• "Don't hesitate to throw away the clumsy parts and rebuild them." yet the last time a clumsy part of UNIX was thrown away and rebuilt was systemd, yielding outrage from people claiming to defend the UNIX philosophy.
About the only part of the UNIX philosophy that's actually unique and influential is "Write programs to handle text streams, because that is a universal interface". Yet even this principle is a shadow of its former self. Data is exchanged as JSON but immediately converted to/from objects, not processed as text in and of itself. Google, one of the world's most successful tech companies, bases its entire infrastructure around programs exchanging and storing binary protocol buffers. HTTP abandoned text streams in favor of binary.
Overall the UNIX philosophy has little to stand apart other than a principled rejection of typed interfaces between programs, an idea that has few defenders today.
unscaled [3 hidden]5 mins ago
And the "Worse is Better" follows some good design principles, but in a very twisted way: the program is designed to minimize the effort the programmer needs to write it.
Implementation simplicity meant one important thing: Unix could be quickly developed and iterated. When Unix was still new this was a boon and Unix grew rapidly, but at some point backward compatibility had to be maintained, and we were left with a lot of cruft.
Unfortunately, since implementation simplicity and development speed nearly always took precedence over everything else, this cruft could be quite painful. If you look at the C standard library and traditional Unix tools, they are generally quite user hostile. The simple tools like "cat" and "wc" are simple enough to make them useful, but most of the tools have severe shortcomings, either in the interface, lack of critical features or their entire design. For example:
1. ls was never really designed to output directory data in a way that can be parsed by other programs. It is so bad that "Don't parse ls" became a famous warning for shell script writers[1] (see the sketch after this list).
2. find has a very weird expression language that is hard to use or remember. It also never really heard about the "do one thing well" part of Unix philosophy and decided that "be mediocre at multiple things" is a better approach. Of course, finding files with complex queries and executing complex actions as a result is not an easy task. But find makes even the simplest things harder than they should be.
A good counterexample is "fd"[2]. You want to find that has a "foo" somewhere in its name in the current directory and display the path in a friendly manner? fd foo vs find . -name 'foo' -Printf "%P\n". What to find all .py files and run "wc -l" on each of them? fd --extension py --exec wc -l (or "fd -e py -x wc -l" if you like it short). "Find requires you to write find . -name '*.py' -exec wc -l {} ;". I keep forgetting that and have to search the manual every time.
Oh, and as a bonus, if you forget to quote your wildcards for find they may (or may not!) be expanded by the shell, and end up giving you completely unexpected results. Great foolproof design.
3. sed is yet another utility which is just too hard to learn. Most people mostly use it as a regex find-and-replace tool in pipes nowadays, but its regex syntax is quite lacking. This is not entirely sed's fault, since it predates Perl and PCRE, which set the modern standard for regular expressions that we expect to work more or less the same everywhere. But it is another example of a tool that badly violates the principles of good design.
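To return to the first point about ls: when a program needs structured directory data, the robust route is to ask the OS directly rather than parse ls output. A minimal sketch in Python (any language with a directory API works the same way; the directory path is just an example):

    import os

    # Each entry exposes its name, type and stat data as real fields,
    # so filenames containing spaces or newlines can't corrupt anything.
    with os.scandir(".") as entries:
        for entry in entries:
            print(entry.name, entry.stat().st_size, entry.is_dir())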
The Unix Haters Handbook is full of many more examples, but the reality is that Unix won because other OSes could not deliver what their users needed fast enough. Unix even brought some good ideas to the mass market (like pipes) even if the implementation was messy. We now live under the shadow of its legacy, for better or worse.
But I don't think we should idolize the Unix philosophy. It is mostly a set of principles ("everything is a file", "everything is text" and "each tool should do one job", "write programs to work together") that was never strictly followed (many things in UNIX are not files, many commands do multiple jobs, most commands don't interact nicely with each other unless you just care about opaque lines). But most importantly, the Unix philosophy doesn't extend much beyond designing composable command line tools that handle line-oriented text for power users.
Maybe it's meant in an artistic sense, but under an engineering one I just don't see it.
idle_zealot [3 hidden]5 mins ago
If it's not innovative, then you're reinventing the wheel and would be better off using someone else's good design.
cgh [3 hidden]5 mins ago
From the article:
"We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
idle_zealot [3 hidden]5 mins ago
Reinventing the wheel is a great learning exercise, but it doesn't yield great software.
8n4vidtmkvmk [3 hidden]5 mins ago
I dunno, I feel like I'm having to reinvent the wheel more often these days. I try a few existing libs, but they're frequently bad or bloated, and rewriting from scratch yields better results. I have to pick my battles though; I can only do this with things that take no more than a couple of days to write.
philwelch [3 hidden]5 mins ago
Taking a dependency on “left-pad”, on the other hand…
marginalia_nu [3 hidden]5 mins ago
Eh, I disagree. Oftentimes integrating someone else's wheel comes with so much added complexity that it rivals (or exceeds) making your own, and that code also needs to be maintained, is at risk of library churn, etc.
There are very real drawbacks to relying on other people's solutions to your problems. Sometimes they are outweighed by the hassle of implementing your own solution, but in many cases they are not.
mytailorisrich [3 hidden]5 mins ago
Note that Dieter Rams is an industrial designer so his 10 principles are not about software but about 'design'. In that context a "good design" is indeed innovative in some way. But in engineering a "good design" does not have to be, and in many cases probably shouldn't be.
lucianbr [3 hidden]5 mins ago
> > > Good design is innovative
> > Why?
> a "good design" is indeed innovative in some way
Proof by repetition, I guess? You haven't answered the question in any meaningful way, just repeated the original assertion. It's still as unsupported as it ever was.
docmars [3 hidden]5 mins ago
I read "innovative" as coming up with a novel solution to a known problem.
Sometimes the things we consider already solved can be solved better with nuances that maybe weren't considered before.
scarface_74 [3 hidden]5 mins ago
Artistic design leads you to the unnecessary skeuomorphism of iOS 1-6
gamedever [3 hidden]5 mins ago
That was vastly better than today's "guess which things are interactive" design, where what's clickable/editable/interactable is just a guessing game.
MadWombat [3 hidden]5 mins ago
It seems to be one of those "pick any two" jokes, but those usually only have three items on the list. And yet pretty much everything on this list feels mutually exclusive.
hobs [3 hidden]5 mins ago
Everything has tradeoffs, but let's just list out the things you called mutually exclusive:
innovative vs useful,
understandable vs honest,
long lasting vs thorough,
aesthetic vs unobtrusive,
What?
lloeki [3 hidden]5 mins ago
This page has the 10 principles, along with a small text and a visual illustration for each one.
Is your interpretation that these two statements are at odds? What even is the intended meaning of "a file"?
To me it could be:
Something accessible via a file descriptor that can be read from and/or written to. Feel free to add some other details like permissions, as needed.
Perhaps they should allow for folders as well, since a complex piece of hardware undoubtedly needs to expose multiple files to be robust, but the underlying intention was to create a standardized way of interacting with hardware.
Sectors on disk, switches, analog devices like a speaker, i2c devices and other hardware all need to be read from or written to in some form to be useful.
tsimionescu [3 hidden]5 mins ago
I think they meant that in any widely used Unix system today, there are a loooot of things which are not files, so the design is not honest.
The most common example of something almost all programs interact with universally is BSD sockets. In Plan 9, which goes out of its way to follow this everything-is-a-file philosophy, TCP/UDP connections are actually represented as files. You open a file and write something to it to create the connection, then you read or write other files to handle the streams, and you write again to the control file to close the connection. On the server side, you similarly write to a control file to start accepting connections, monitor a directory to check for new connections, and so on.
Note that "file" here has a pretty clear definition: anything that can be interacted with using strictly the C FILE api - open()/read()/write()/close().
otabdeveloper4 [3 hidden]5 mins ago
Take a simple and obvious example - say, controlling a musical instrument from a computer via MIDI.
Calling that a "file" is ... a humongous stretch, to put it mildly.
pinoy420 [3 hidden]5 mins ago
> Good design is aesthetic
> Xorg
I guess it didn’t say pleasing?
gyomu [3 hidden]5 mins ago
Rams’ principles were perhaps noteworthy when he first vocalized them, as the state of design discourse was much more primitive back then (not even sure of that, actually), but today they ring quite simplistic and hollow, and are kind of useless as actual decision-making tools.
“Good design is as little design as possible” ok cool but I have 30 different feature requests coming in every week from actual users, that doesn’t really help me make concrete design decisions
“Good design is aesthetic” whose aesthetic? Aesthetic is extremely cultural and arbitrary, my designers are fighting over whether a photorealistic leather texture is more aesthetic than a gradient texture background, how does that help?
“Good design makes a product useful” uh yeah okay I’ve never had a design problem that was solved by someone saying “you know what, we should just make this product useful” “oooh right how did we not think of that”.
I mean these principles all sound good and high falutin’ as abstract statements, but I’ve never found them useful or actionable in my 15+ years working as a designer.
skydhash [3 hidden]5 mins ago
My takes:
“Good design is as little design as possible”
What you create should be a natural flow of what your clients need to do. Don't go and add lots of options like a plane cockpit. This usually means trying to find the common theme and adding on top of it, and also clamping down on fantasy wishes.
"Good design is aesthetic"
I'd take the definition of pleasing rather than beautiful for the term. When learning to draw, an often-given piece of advice is to focus on and detail only a single part of the whole picture; everything else can be left out. So arguing over a single element is usually meaningless. If it's not going to be the focal point of interaction, as long as it meshes into the whole, no one cares about the exact details.
“Good design makes a product useful”
Usability is a whole field, and you can find the whole corpus under the HCI (Human Computer Interaction) keyword. Focus on meeting this baseline, then add your creativity on top.
> I mean these principles all sound good and high falutin’ as abstract statements, but I’ve never found them useful or actionable
It's kinda like philosophy: you have to understand what it means for yourself. It's not a cake recipe to follow, but more of a framework from which to derive your own methodology.
lloeki [3 hidden]5 mins ago
> Don't go and add lots of options like a plane cockpit.
Or do, because you're designing a plane cockpit :)
brailsafe [3 hidden]5 mins ago
Ya but how many of the results of what you're describing as obvious are evaluated critically afterward, based on their intention?
If you're working on a piece of software, how likely is it that people are regularly comparing it to the most effective alternative means of accomplishing the same task, and reverting course if it turns out you've actually created a more convoluted and time-consuming path to the same outcome? Oftentimes, software just gets in the way and makes life less easy than it would have been otherwise.
The opposite of these principles is often easier to reason about. For example, people attempting to make "better" versions of Hacker News seem to rarely be aware of these, and when they post to Show HN, hopefully realize that the way it is is hard to beat because it follows at least some of the principles. To make something better, you'd absolutely need to follow similar principles more effectively.
lelanthran [3 hidden]5 mins ago
> If you're working on a piece of software, how likely is it that people are regularly comparing it to the most effective alternative means of accomplishing the same task, and reverting course if it turns out you've actually created a more convoluted and time-consuming path to the same outcome?
It depends; A/B testing sorta does that at the very granular level. Not so much at a high level.
begueradj [3 hidden]5 mins ago
What is "good design" ? That's the question.
scarface_74 [3 hidden]5 mins ago
And most of that misses the goal of why you write software for a business.
You write software for a company so that someone will give them money for it, or so that the company can save money.
Everything else takes a backseat to that core mission. The first goal when writing software is to efficiently get to a point where one of those goals can be met.
It makes no sense to polish software if you are going to run out of money before it gets released, management cuts funding or you can’t find product market fit to convince investors to give you money depending on what your goal is.
Code doesn’t always need to be long lasting, you can’t predict how the market will change, how the underlying platform will change, when a better funded competitor will come in and eat your lunch, etc.
Good design doesn’t need to be “innovative”. It needs to fit within the norms of the platform or ecosystem it is part of.
Doches [3 hidden]5 mins ago
Good thing not all of us write software for a business.
I write little utilities for my parents, games for my son, a web shop for my wife. I write social spaces for myself and my friends. I write tools for myself.
I write software for strangers on the internet. I write software when I’m drunk, to test myself. Sometimes I write software simply for the joy of writing it, and it never runs again after that first “ah, finally!” moment. Aah, time well spent.
Equating “writing software” with “fulfilling business goals” is…quite depressing. Get outside! Feel the sunshine on your face! Write your own image-processing DSL, just for the sheer unbridled hell of it! Learn Nim! Why not?
(Ok, maybe skip the last one)
archargelod [3 hidden]5 mins ago
> Learn Nim! Why not?
As someone who learned Nim as my first "serious" programming language, I do recommend learning Nim. It is a delight to write and read.
Before I found Nim I looked at C, C++ and Python, and all of them are full of cruft - old bad decisions that they're stuck with and forced to keep in the language. And that makes them a nightmare to learn.
In C there seems to be hundreds of subtly different OS-dependent APIs for every simple thing.
C++ was designed by a mad scientist and extended to the point where even C++ developers have no idea which parts of the language you should use.
Python is the messiest mess of OOP, with no documentation that is actually readable. Just to find out how to do anything in Python I need to look at sites like Stack Overflow and wade through outdated solutions for Python 2, deprecated functions and giant third-party libraries. You don't really learn Python nowadays; you're forced to learn Python + NumPy + Pandas + Python package distribution (hell).
I had fun learning Nim, though.
KronisLV [3 hidden]5 mins ago
I feel like that’s the curse of most programming languages that end up mainstream and survive for a decade or two - some amount of legacy cruft or bad decisions that end up more or less set in stone is inevitable.
sgarland [3 hidden]5 mins ago
> Just to find how to do anything in Python I need to look at sites like stackoverflow
Huh? Surely you don’t expect docs to answer generic questions like, “how do I flatten a nested list?”
Pandas (and Polars!) is an excellent library that serves specific needs; basic file parsing is not one of them. I’ve seen comments advocating its usage for tasks as simple as “read a CSV file in and get the Nth column.” The same goes for NumPy – a powerful library that’s ridiculously over-used for things that are trivially solvable with the stdlib.
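For instance, the "read a CSV file in and get the Nth column" case needs nothing beyond the standard library; a minimal sketch, with the file name and column index made up for illustration:

    import csv

    N = 2  # hypothetical column index
    with open("data.csv", newline="") as f:  # hypothetical file name
        nth_column = [row[N] for row in csv.reader(f)]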
scarface_74 [3 hidden]5 mins ago
You realize “writing code on a computer” is just the opposite of “getting outside”? Getting outside for me is getting outside.
My wife is out of town this weekend at a conference. I woke up, fixed breakfast, went outside and swam a few laps in the pool enjoying this 80 degree weather (the joys of living in Florida), hung out at the bar downstairs, came back upstairs and took a shower, and now I am heading back downstairs to hang out at one of the other bars, shoot the shit with the bartender (who is a friend of mine) and whoever else shows up while drinking soda (I go down to hang out, not always to drink), and listen to bad karaoke.
When my wife comes back tomorrow, we will hang out during the day and go back downstairs to the bar tomorrow to watch the Super Bowl.
We have 8 vacations planned this year, not including 5-6 trips to fly up to our home town of Atlanta (where we lived all of our adult lives until two years ago) for things going on in our friends' lives, and to fly to my childhood home to see my parents and family.
Not bragging; most of our vacations aren't anything exotic or expensive, and I play the credit card points/sign-up bonus/churning game to offset some of the costs.
My focus on how to add business value is what allowed me to find strategic consulting jobs where most of the work is still fully remote.
SaucyWrong [3 hidden]5 mins ago
You must have realized that by, “going outside,” the parent meant “doing something that makes you happy,” and not necessarily literally being outdoors. They find joy writing code. You realized that, and still chose to demean them with this reply.
sgarland [3 hidden]5 mins ago
Ngl, I did not pick up on that, and was confused. Still, I assumed good intent and left it alone.
scarface_74 [3 hidden]5 mins ago
In my mind, spending too much time on a computer instead of physically going “outside” and touching grass can’t be healthy.
Or even staying inside and spending time with family
gamedever [3 hidden]5 mins ago
In my mind, going to a bar and drinking is factually unhealthy
scarface_74 [3 hidden]5 mins ago
So is not being able to read
In my original comment:
whoever else shows up while drinking soda (I go down to hang out, not always to drink), and listen to bad karaoke.
hirvi74 [3 hidden]5 mins ago
No one makes it out alive. Might as well have some fun.
jwr [3 hidden]5 mins ago
> And most of that misses the goal of why you write software for a business. You write software for a company so someone will give them money for it or so the company can save money
Hmm. I run a solo-founder SaaS business. I write software for my customers so that they can improve their workflows: essentially, work faster, with fewer mistakes, and make work easier. My customers pay me money if my software improves their lives and workflows. If it doesn't live up to the promise, they stop paying me.
Most of Dieter Rams's design rules very much do apply to software that I write. I can't always afford to follow all of these rules, but I'm aiming to.
And while I don't always agree with antirez, his post resonated with me. Many good points there.
Incidentally, many of the aberrations he mentions are side effects of work-for-hire: if you work for a company and get a monthly salary, you are not directly connected to customers, you do not know where the money comes from, and you are not constrained by time and money. In contrast, if you are the sole person in a business, you really care about what the priority is. You don't spend time on useless rewrites, you are super careful with dependencies (because they end up costing so much maintenance in the future), you comment your code because you do so much that you forget everything quickly, and you minimize complexity, because simpler things are easier to build, debug and maintain.
scarface_74 [3 hidden]5 mins ago
> Hmm. I run a solo-founder SaaS business. I write software for my customers so that they can improve their workflows: essentially, work faster, with fewer mistakes, and make work easier. My customers pay me money if my software improves their lives and workflows
So your goal is to write software so that customers will give you money because they see that your software is valuable to them. How does that conflict with what I said? That’s the goal of every legitimate company.
I work in consulting. I work with sales, I am the first technical person a customer talks to on a new engagement and when they sign the contract, I lead the implementation and work with the customer. I know exactly where the money comes from and what the customer wants.
If a developer is not close to the customer and doesn't have the needs of the business as their focus, they are going to find themselves easily replaced, and it's going to be hard to stand out from the crowd when looking for a job.
ozim [3 hidden]5 mins ago
No one forces anyone to write software for a business or for profit.
Everybody can still write software however they like; just don't expect to earn money from it.
frontalier [3 hidden]5 mins ago
why is it 10? why not 7 or 15?
scarface_74 [3 hidden]5 mins ago
Opposite anecdote: in the 2000s, I worked at a company that had dozens of computers running jobs, built out a server room with raised floors, and built out a SAN to store a whopping 3 TB of data, and we had a home-grown VB6 job scheduler that orchestrated jobs across the computers running Object Rexx scripts.
We had our own internal load balancer, web servers, mail servers, ftp servers to receive and send files, and home grown software.
Now I could reproduce the entire setup within a week at most with some yaml files and hosted cloud services. All of the server architecture is "abstracted", which is one of the things he complains about.
As far as backwards compatibility goes, worshipping at the throne of backwards compatibility is one reason that Windows is the shit show it is. Even back in the mid 2000s there were over a dozen ways to represent a string when programming, and you had to convert back and forth between them.
Apple has been able to migrate between 5 processors during its existence by breaking backwards compatibility and even remove entire processing subsystems from ARM chips by removing 32 bit code compatibility.
sgarland [3 hidden]5 mins ago
> All of the server architecture is "abstracted", which is one of the things he complains about.
This is my personal bugbear, so I’ll admit I’m biased.
Infrastructure abstractions are both great and terrible. The great part is you can often produce your desired end product much more quickly. The terrible part is you’re no longer required to have the faintest idea of how it all works.
Hardware isn’t fun if it’s not working, I get that. One of my home servers hard-locked yesterday to the point that IPMI power commands didn’t work, and also somehow, the CPUs were overheating (fans stopped spinning is all I can assume). System logs following a hard reset via power cables yielded zero information, and it seems fine now. This is not enjoyable; I much rather would’ve spent that hour of my life finishing the game of Wingspan.
But at the same time, I know a fair amount about hardware and Linux administration, and that knowledge has been borne of breaking things (or having them break on me), then fixing them; of wondering, “can I automate X?”; etc.
I’m not saying that everyone needs to run their own servers, but at the very least, I think it’s an extremely valuable skill to know how to manage a service on a Linux server. Perhaps then, the meaning of abstractions like CPU requests vs. Limits will become clear, and disk full messages will cause one to not spam logs with everything under the sun.
mihaaly [3 hidden]5 mins ago
About compatibility.
Windows is a shitshow because the leadership is chaotic, dragged all around all the time, never finishing anything well. They only survived because of backward compatibility! Building on the unlikely success of the 90s.
Also, why do I have to install new software every couple of months to access my bank account, secure chat, flight booking system, etc., etc., without any noticeable difference in operation and functionality? A lot of things unreasonably become incompatible with 'old' (we are talking about months, for f's sake!!) versions. That's a nuisance and an erosion of trust.
ripped_britches [3 hidden]5 mins ago
The app updates you mention are most likely due to the problem of not being able to hot update client side code easily / at all in the google/apple ecosystems.
Web actually excels here because you can use service workers to manage versioning and caching so that backwards compatibility is never a concern.
scarface_74 [3 hidden]5 mins ago
Isn’t updating client-side code just pushing an update to the store and having the device automatically update the app?
BirAdam [3 hidden]5 mins ago
I wouldn’t call Microsoft’s success in the 90s unlikely. They had a decent product at a low price for commodity hardware when nothing else was as good for as cheap. They also had decent marketing. The company worked hard and delivered. That’s not unlikely, it’s good execution. The unlikely part was something more like OSX taking Microsoft developer market monopoly away.
AnonymousPlanet [3 hidden]5 mins ago
You needed Visual Studio and C++ to automate even the most mundane things in Windows. Without decent scripting or command line Windows was a developer's nightmare compared to Linux.
Anything providing something like Linux with a polished surface and support for the tools of the rest of office IT (e.g. MS Word) was going to blow Windows away in this area.
So the success of OSX here is no surprise.
scarface_74 [3 hidden]5 mins ago
I was using Perl and the Win32 module to automate over a dozen job servers.
But the native way of doing it was VbScript with the Windows Script Host.
chikere232 [3 hidden]5 mins ago
> Also, why do I have to install new software every couple of months to access my bank account, secure chat, flight booking system, etc., etc., without any noticeable difference in operation and functionality? A lot of things unreasonably become incompatible with 'old' (we are talking about months, for f's sake!!) versions. That's a nuisance and an erosion of trust.
Are you talking about security updates?
derangedHorse [3 hidden]5 mins ago
> without any noticable difference in operation and functionality
Presumably a security update would mean a difference in operation somewhere. They were probably referring to the updates that just exist to add ads, promos, design changes, etc.
scarface_74 [3 hidden]5 mins ago
And my other question is: why are constant updates a concern? It’s not like you have to get a stack of floppies and upgrade your software or even do anything manually. Everything auto-updates.
At least with iOS, iOS 18 supports devices introduced since 2018 and iOS 16 just got a security update late last year and that supports phones back to 2017.
eviks [3 hidden]5 mins ago
The obvious answer is that an update can break your workflow or waste your time in other ways.
sadeshmukh [3 hidden]5 mins ago
How?
eviks [3 hidden]5 mins ago
By removing a feature you use, by breaking said feature, by changing UI, requiring relearning, by introducing a new bug, etc - the possibilities are infinite, just use your imagination and rely on your experience using software!
econ [3 hidden]5 mins ago
You should be able to write something and have it work for the next 1000 years. There is no reason why it can't.
I imagine a simple architecture where each application has its own crappy CPU some memory and some storage with frozen specs. Imagine 1960 and your toaster gains control over your washing machine. Why are they even in the same box?
scarface_74 [3 hidden]5 mins ago
And your needs and technology don’t change in 1000 years?
Does it really make sense in 2025 to use Quicken (?) with a dialup modem that calls into my bank once a day to update my balances like I did in 1996?
Imagine 1960 and your toaster gains control over your washing machine. Why are they even in the same box?
Imagine in 2002 your MP3 Player, your portable TV, your camera, your flashlight, your GPS receiver, your computer you use to browse the web, and your phone being the same device…
caspper69 [3 hidden]5 mins ago
> Does it really make sense to use Quicken (?) with a dialup modem that called into my bank once a day to update my balances like I did in 1996
Well, the modern hot garbage Intuit forces people to use takes 5-15 seconds to save a transaction, 3-5 seconds to mark a transaction as cleared by the bank, it sometimes completely errors out with no useful message and no recourse other than "try again", has random UI glitches such as matched transactions not being removed from the list of transactions to match once matched (wtf?), and is an abject failure at actually matching those transactions without hitting "Find matches", because for whatever reason the software can't seem to figure out that the $2182.77 transaction from the bank matches the only $2182.77 transaction in the ledger. That one really gets my goat, because seriously, WTF guys?
Not to mention the random failure of the interface to show correct totals at random inopportune moments.
Oh, and it costs 5x as much on an annual basis.
I sure would take that 1996 version with some updated aesthetics and a switch to web-based transaction downloading versus the godawful steaming pile of shit we have now, every day of the week and twice on Sunday. Hands down.
This idea that we've made progress is absolutely laughable. Every single interaction is now slower to start, has built-in latency at every step of the process, and is fragile as hell to boot because the interface is running on the framework-du-jour in javascript.
Baby Jesus weeps for the workflows forced upon people nowadays.
I mean seriously, have none of you 20-somethings ever used a true native (non-Electron) application before?
sgarland [3 hidden]5 mins ago
Tbf, RocketMortgage makes Quicken; Intuit makes QuickBooks. Still, I hate the latter with a burning passion, so I’ll indulge that.
What kills me about Intuit is that they _can_ make decent software: TurboTax. Obviously, I’d rather the IRS be like revenue departments in other countries, and just inform me what my taxes were at EOY, but since we live in lobbyist hell, at least Intuit isn’t making the average American deal with QuickBooks-level quality.
It’s not like the non-SaaS version of QB is any better, either. I briefly had a side business doing in-person tech support, and once had a job to build a small Windows server for a business that wanted to host QB internally for use. Due to budget constraints, this wound up being the hellscape that is “your domain controller is also your file server, application server…” Unbeknownst to me at the time, there is/was a long-standing bug with QB where it tries to open ports for its database that are typically reserved by Windows Server’s DNS service, and if it fails to get them, it just refuses to launch, with unhelpful error logs. Even once you’ve overcome that bizarre problem, getting the service to reliably function in multi-user mode is a shitshow.
> I mean seriously, have none of you 20-somethings ever used a true native (non-Electron) application before?
Judging by the average web dev’s understanding of acceptable latency, no. It’s always amusing to watch when web devs and game devs argue here. “250 msec? That’s 15 screen redraws! What are you doing?!” Or in my personal hell, RDBMS. “The DB is slow!” “The DB executed your query in < 1 msec. Your application spent the other 1999 msec waiting on some other service you’ve somehow wrapped into a transaction.”
scarface_74 [3 hidden]5 mins ago
The modern way to check your balance is just go to a website, use an app on your phone or just have it show up on your watch.
caspper69 [3 hidden]5 mins ago
None of which handle bookkeeping, which is what Quicken was used for.
I think it should have been obvious from my list of complaints that I was doing something a little more involved than "checking my bank balance".
scarface_74 [3 hidden]5 mins ago
And in that case, you had to call into each bank?
caspper69 [3 hidden]5 mins ago
My rant is about shitty online modern software that replaced fully functional (and fast) software for “reasons”.
Wow, that's an amazingly impossible standard; no software lives up to it.
Nor does much technology at all. If you use anything that is 1000 years old, it has probably been maintained or cared for a lot during those 1000 years.
robinsonb5 [3 hidden]5 mins ago
Well yeah, 1000 years is obvious hyperbole. But I've been annoyed and frustrated enough by churn over the last two and a half decades that I always ask myself "will this still work in 5 years?" when considering new software - and especially its build process.
It's alarming how often the answer isn't a confident "yes".
chikere232 [3 hidden]5 mins ago
That's fair. Too many languages and frameworks are all too happy to break things for pointless cleanups or renames.
Python, for example, makes breaking changes in minor releases and seems to think that's fine, even though it's especially bad for a language where you might only find out at runtime.
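One widely hit instance, offered purely as an example (it's not from the comment above): the collections ABC aliases that were removed in Python 3.10, a break that code only discovers when the import actually runs:

    # Worked up to and including Python 3.9; raises ImportError from 3.10 on.
    try:
        from collections import Mapping
    except ImportError:
        from collections.abc import Mapping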
superjan [3 hidden]5 mins ago
Well, why was the initial release insecure in the first place?
chikere232 [3 hidden]5 mins ago
Because security is hard and there are people constantly working on finding new issues.
It's a bit like asking why the army needs tanks when horses worked well the previous war
mihaaly [3 hidden]5 mins ago
I am talking about several pieces of software becoming outdated and inoperable frequently.
I wouldn't blame it on security, as many of them do.
...or if it is true, and this mass of security issues really does emerge from their design, then the situation is even worse than them just being lazy, ignorant bastards... or perhaps the mass security problems are related to this incompetence as well?... oi!
scarface_74 [3 hidden]5 mins ago
Apple’s leadership hasn’t always been a shining light on a hill, especially during the 90s, and they still managed the 68K to PPC transition.
amrocha [3 hidden]5 mins ago
I’m not sure where you live, but I’ve never had to install anything to access the features you described, going back over a decade.
All of that has been solved by the web at this point.
Arelius [3 hidden]5 mins ago
> worshipping at the throne of backwards compatibility is one reason that Windows is the shit show it is
You say Windows is a shit show, but as someone who has developed a lot on both Windows and Linux, Linux is just as much of a shit show, just in different ways.
And it's really nice being able to trust that binaries I built a decade ago just run on Windows.
scarface_74 [3 hidden]5 mins ago
Linux is a shit show because, among other reasons, there is no driver standard.
AnthonyMouse [3 hidden]5 mins ago
It has no driver standard on purpose with the intention of getting drivers into the mainline kernel tree.
If drivers are "standard" then low quality drivers full of bugs and security vulnerabilities proliferate and only the OEM can fix them because they're closed source, but they don't care to as long as the driver meets a minimum threshold of not losing them too many sales, and they don't care about legacy hardware at all or by then have ceased to exist, even if people are still using the hardware.
If there is no driver standard then maintaining a driver outside the kernel tree is a pain and more companies get their drivers into the kernel tree so the kernel maintainers will deal with updating them when the driver interface changes, which in turn provides other people with access to fix their heinous bugs.
scarface_74 [3 hidden]5 mins ago
How is this philosophy working out for the end user? How is a similar problem working out for Android phones getting updates?
AnthonyMouse [3 hidden]5 mins ago
The drivers for most PC hardware are in the kernel tree. That part is working pretty well.
It's clear that something more aggressive needs to be done on the mobile side to get the drivers into the kernel tree because the vendors there are more intransigent. Possibly something like right to repair laws that e.g. require ten years of support and funds in escrow to provide it upon bankruptcy for any device whose supporting software doesn't have published source code, providing a stronger incentive to publish the code to avoid the first party support requirement. Or greater antitrust enforcement against e.g. Qualcomm, since they're a primary offender and lack of competition is a major impediment to making it happen. If Google wanted to stop being evil for a minute they could also exert some pressure on the OEMs.
The real problem is that the kernel can't easily be relicensed to directly require it, so we're stuck with indirect methods, but that's hardly any reason to give up.
pm215 [3 hidden]5 mins ago
I don't think I'd put the difference between the pc and server markets vs mobile down to vendor "intransigence". Rather, I would say it's a result of the market incentives. For PCs and especially for servers, customers insist that they can install an already released OS and it Just Works. This means that vendors are strongly incentivised to create and follow standards, not to do odd things, and to upstream early. On the other hand in mobile and embedded there is basically no customer demand to be able to run preexisting distro releases, and so vendors feel very little pressure to put in the extra work to get support upstream or to avoid deviating from common practice. On the contrary they may see their custom deviations as important parts of what makes their product better or more feature rich than the competition.
Right to repair laws as you suggest might do something to shift the incentives of vendors in these markets; I don't think they're ever going to "see the light" and suddenly decide they've been doing it wrong all these years (because measured in commercial consequences, they haven't)...
smitty1e [3 hidden]5 mins ago
This gets at the heart of The Famous Article: my pet project vs. corporate-grade stuff.
The individual vs. the group.
Where I agree with the author is the need to keep individual tinkering possible.
However, generalizing anyone's idiosyncratic tastes is impossible.
scarface_74 [3 hidden]5 mins ago
Is that a real article? Do you have a reference for it?
inglor_cz [3 hidden]5 mins ago
"And it's really nice being able to trust that binaries I built a decade ago just run on Windows."
Wouldn't this need be solved by an emulator of older architectures?
There would be a performance cost, but maybe the newer processors would more than make up for it.
robinsonb5 [3 hidden]5 mins ago
Funny you should say that.
I have a legally-purchased copy of Return to Castle Wolfenstein here, both the Windows version and the official Linux port.
One of them works on modern Linux (with the help of Wine), one of them doesn't.
I wrote some specialist software for Linux around 2005 to cover a then-business need, and ported it to Windows (using libgtk's Windows port distributed with GIMP at the time). The Windows port still works. Attempting to build the Linux version now would be a huge ordeal.
Rohansi [3 hidden]5 mins ago
What older architecture? Windows has been on x86 for decades. You should be able to run any 32-bit application built 20+ years ago on modern Windows, assuming the application didn't rely on undocumented/internal Windows APIs.
scarface_74 [3 hidden]5 mins ago
If you have ever read any of Raymond Chen’s stuff, you would know that is a big “if”
dpkonofa [3 hidden]5 mins ago
>Apple has been able to migrate between 5 processors during its existence by breaking backwards compatibility and even remove entire processing subsystems from ARM chips by removing 32 bit code compatibility.
I would consider myself an Apple evangelist, for the most part, and even I can recognize what's been lost by Apple breaking backwards compatibility every time they need to shift direction. While the philosophy is great for making sure that things are modern and maintained, there is definitely a non-insignificant amount of value, both historical and general, that is lost to the paradigm of constantly moving forward without regard for maintaining compatibility with the past.
scarface_74 [3 hidden]5 mins ago
What was the alternative? Sticking with 65x02, 68K, or PPC?
They could have stuck with x86 I guess. But was moving to ARM really a bad idea?
They were able to remove entire sections of the processor by getting rid of 32 bit code and saving memory and storage by not having 32 bit and 64 bit code running at the same time. When 32 bit code ran it had to load 32 bit version of the shared linked library and 64 bit code had to have its own versions.
dpkonofa [3 hidden]5 mins ago
>What was the alternative? Sticking with 65x02, 68K, or PPC?
No, including an interpreter like they did (Rosetta) was an alternative. The "alternative" really depends on what the goals were. For Apple, their goal is modern software and hardware that works together. That's antithetical to backwards compatibility.
>They could have stuck with x86 I guess. But was moving to ARM really a bad idea?
I don't think I ever suggested that it was or that they couldn't have...
>They were able to remove entire sections of the processor by getting rid of 32 bit code and saving memory and storage by not having 32 bit and 64 bit code running at the same time.
Yes, and in doing so they killed any software that wasn't created for a 64-bit system. Again, even from a purely historical perspective, the amount of software that didn't survive each of those transitions is non-negligible. Steam now has an entire library of old Mac games that can't run on modern systems anymore because of the abandonment of 32-bit support without any consideration for backwards compatibility. Yes, there are emulators and apps like Wine and CrossOver that can somewhat get these things working again, but there's also a whole subsection of software that just doesn't work anymore. Again, that's just a byproduct of Apple's focus on modern codebases that are currently maintained, but it's still a general detriment that so much usable software was simply lost because of these changes when there could have been some focus on maintaining compatibility.
fpoling [3 hidden]5 mins ago
If Apple users really appreciated backward compatibility, there would be a significant market for third-party emulators and VMs to run old software that uses no-longer-supported hardware or software APIs. It is not there. There are VMs, but they are mostly used by developers or by people who want to run Windows software on a Mac, not old Mac software. So, from Apple's perspective, if their users do not want to pay for backward compatibility, why should Apple provide it?
scarface_74 [3 hidden]5 mins ago
> No, including an interpreter like they did (Rosetta) was an alternative.
The downside of including an interpreter with no end of life expectations is that some companies get lazy and will never update their software to modern standards. Adobe is a prime example. They would have gladly stuck with Carbon forever if Apple hadn’t changed their minds about a 64 bit version of Carbon.
That was the same reason that Jobs made it almost impossible to port legacy text-based software to early Macs. Microsoft jumped on board developing Mac software early on, while Lotus and WordPerfect didn't.
But today you would have to have emulation software for Apple //es, 68K, PPC and 32 bit and 64 bit x86 software and 32 bit and 64 bit ARM (iOS) software all vying for resources.
Today because of relentlessly getting rid of backwards compatibility, the same base OS can run on set top boxes, monitors (yeah the latest Apple displays have iPhone 14 level hardware in them and run a version of iOS), phones, tablets, watches and AR glasses.
Someone has to maintain the old compatibility layers and patch them for vulnerabilities. How many vulnerabilities have been found in some old compatible APIs on Windows?
kelnos [3 hidden]5 mins ago
> The downside of including an interpreter with no end of life expectations is that some companies get lazy and will never update their software to modern standards. Adobe is a prime example. They would have gladly stuck with Carbon forever if Apple hadn’t changed their minds about a 64 bit version of Carbon.
I don't see that as a downside; I see it as a strength. Why should everyone have to get on the library-of-the-year train, constantly rewriting code -- working code! -- to use a new API?
It's just a huge waste of time. The forced-upgrade treadmill only helps Apple, not anyone else. Users don't care what underlying system APIs an app uses. They just care that it works, and does what they need it to do. App developers could be spending time adding new features or fixing bugs, but instead they have to port to new library APIs. Lame.
> Someone has to maintain the old compatibility layers and patch them for vulnerabilities.
It's almost certainly less work to do that than to require everyone else rewrite their code. But Apple doesn't want to spend the money and time, so they force others to spend it.
caspper69 [3 hidden]5 mins ago
This may come as a surprise to you, but the vast majority of users absolutely hate it when their software changes.
They don't want all new interfaces with all new buttons and options and menus.
People get used to their workflows and they like them. They use their software to do something. The software itself is not the goal (gaming excepted).
I'm not suggesting that things should never be gussied up, or streamlined or made more efficient from a UX perspective, but so many software shops change just to change and "stay fresh".
dpkonofa [3 hidden]5 mins ago
I'm not sure why you're responding to me. Nothing that you're saying is anything that I've mentioned or brought up. I know what the downsides are. I'm just saying that the goals that Apple has optimized for have resulted in a loss of things that many would consider valuable.
scarface_74 [3 hidden]5 mins ago
You said, "No, including an interpreter like they did (Rosetta) was an alternative."
BuyMyBitcoins [3 hidden]5 mins ago
Plus, the reduced power consumption and battery life extension are insanely good now. Whereas I could only get an hour and a half out of an Intel MacBook Pro, I can now get over a day’s use out of the M4. I have not been affected by the lack of “legacy” software support, and I am more than happy to make this tradeoff.
dpkonofa [3 hidden]5 mins ago
I think most people, me included, are more than happy with the trade-off. That doesn't mean that nothing of value was lost in each transition.
scarface_74 [3 hidden]5 mins ago
Engineering is always about trade offs. Microsoft could never make the trade offs that Apple has made and while it has suffered because of it in new markets, it’s gained the trust of Big Enterprise. Microsoft should not be trying to act like Apple.
There is room for both. But if you are a gamer, the Mac isn’t where you want to be anyway. Computers have been cheap enough for decades to have both a Windows PC and a Mac. Usually a desktop and a laptop.
dpkonofa [3 hidden]5 mins ago
I never suggested that they should, in either case. I'm just saying that there are things that are lost by completely ignoring backwards compatibility. There are plenty of Mac-only applications that aren't games that are now obsolete and unusable because of architecture changes.
3ple_alpha [3 hidden]5 mins ago
You can also reproduce it within a week without hosted cloud services. What matters is that you don't have to develop custom software and instead spend that week writing config files and orchestration scripts, be it cloud stuff, docker containers or whatever.
scarface_74 [3 hidden]5 mins ago
I can reproduce it without cloud services sure. But then I have to maintain it. Make it fault tolerant. Make sure it stays patched and backed up, buy enough hardware to make sure I can maintain peak capacity instead of having the elasticity, etc.
I have done all of this myself with on prem servers that I could walk to. I know exactly what’s involved and it would be silly to do that these days
aqueueaqueue [3 hidden]5 mins ago
If it was 3TB then, to be fair, you should compare it to 3PB now.
eviks [3 hidden]5 mins ago
So what great benefit did removing 32-bit support bring? Are you able to use a single string type without conversions in Swift, or has that part never disappeared in those 5 migrations?
scarface_74 [3 hidden]5 mins ago
Apple was able to physically remove hardware support for 32 bit code and shrink the processor/use the die space for other purposes.
Also, when you have 32 bit and 64 bit code, you have to have 32 bit and 64 bit versions of the framework both in memory and on the disk.
This is especially problematic with iOS devices that don’t have swap.
eviks [3 hidden]5 mins ago
I replied to a comment about desktop and software, yours is about mobile lacking swap and hardware?!
(and Apple shipped a gig of high-def screenshots and other garbage, that's how much it cares about having frameworks on disk)
scarface_74 [3 hidden]5 mins ago
You know that their desktop processors are minor variants of their mobile processors as are their operating systems? In fact, they ship the same processors in their iPads and Macs.
And it’s not just about disk; it’s about memory.
worik [3 hidden]5 mins ago
> As far as backwards compatibility, worshipping at the throne of backwards compatibility is one reason that Windows is the shit show it is.
Not entirely, there are other reasons too
But we should respect semantic versioning. Python is a dreadful sinner in that respect.
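For illustration, a minimal sketch (Python, purely illustrative and not tied to any packaging tool; the function names are made up) of the promise semver is supposed to make to callers:

    def parse(version):
        # "MAJOR.MINOR.PATCH" -> (major, minor, patch) as integers
        major, minor, patch = (int(part) for part in version.split("."))
        return major, minor, patch

    def is_compatible(installed, required):
        # Semver's contract: same major version means no breaking changes,
        # and a higher minor/patch only adds fixes or compatible features.
        i = parse(installed)
        r = parse(required)
        if i[0] != r[0]:
            return False  # a major bump signals breaking changes
        return i[1:] >= r[1:]

    assert is_compatible("2.5.1", "2.3.0")      # minor/patch upgrades stay compatible
    assert not is_compatible("3.0.0", "2.3.0")  # major bump may break callers

The whole scheme only holds if authors actually bump the major number on breaking changes; it's a human promise, not something a tool enforces.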
foobiekr [3 hidden]5 mins ago
Semantic versioning is an illusion. It's a human-managed attempt to convey things about the surfaces and behaviors of software systems. Best case, it isn't completely misleading and a waste of everyone's time.
There is no perfection here, but the correct way to reason about this is to have schema-based systems where the surfaces and state machines are in high level representations and changes can be analyzed automatically without some human bumping the numbers.
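As a toy illustration of that idea (Python, with made-up field names), describe the surface as data and let a check flag breaking changes automatically instead of trusting a hand-bumped number:

    # Two hypothetical versions of an API surface, described as data.
    OLD = {"id": "int", "name": "str", "email": "str"}
    NEW = {"id": "int", "name": "str", "email": "str | None", "created_at": "str"}

    def breaking_changes(old, new):
        problems = []
        for field, old_type in old.items():
            if field not in new:
                problems.append("removed field: " + field)
            elif new[field] != old_type:
                problems.append("changed type of %s: %s -> %s" % (field, old_type, new[field]))
        return problems  # fields that only appear in `new` are treated as non-breaking

    print(breaking_changes(OLD, NEW))  # ['changed type of email: str -> str | None']

Real schema-evolution tooling is far richer than this, but the point stands: the diff is computed from the declared surface rather than asserted by whoever edits the version string.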
forrestthewoods [3 hidden]5 mins ago
Windows is such a shit show that the number one Linux gaming platform works because it turns out the best API for Linux is… Win32.
Linux is a far bigger shit show. At least at the platform level. Windows is a lesser shitshow at the presentation layer
scarface_74 [3 hidden]5 mins ago
On second thought, as a user, Windows itself is…fine. Even as a developer with VSCode + WSL, it’s…fine
It’s more about x86. Using an x86 laptop feels archaic in 2025. Compared to my M2 MacBook Air or my work M3 MacBook Pro.
The only thing that makes MacOS much better for me is the ecosystem integration
lelanthran [3 hidden]5 mins ago
> It’s more about x86. Using an x86 laptop feels archaic in 2025. Compared to my M2 MacBook Air or my work M3 MacBook Pro.
From the user PoV, isn't that one of those things that are irrelevant? The users, even the very technical ones, neither know nor care that the computer they are using is x86 or ARM or ...
You might say that battery life is part of the UX; sure, I get that. But while battery life on the M1/M2/M3/M4 is superior in practice, the trade-off is that the user gets a limited set of software they can run, which is also part of the UX.
So the user gets to choose which UX trade-off they want to make.
scarface_74 [3 hidden]5 mins ago
If I want the balls to the wall fastest computer that I can buy at a given price, it’s going to be an x86 PC with an Nvidia based video card. If it’s a desktop, I don’t really care about performance per watt.
I personally care because I travel a lot and I need a laptop. For non-gamers, of course, Mac desktops, including the new base Mac Mini, are good enough.
forrestthewoods [3 hidden]5 mins ago
I mostly use a 32-core Threadripper. But I’d kill for an M4 Ultra + 4090 setup. Or I guess now 5090.
The upcoming Nvidia Linux AI boxes are interesting.
forrestthewoods [3 hidden]5 mins ago
Windows laptops are genuinely atrocious.
I don’t think that’s really an x86 vs ARM thing. That’s a red herring.
scarface_74 [3 hidden]5 mins ago
x86 laptops can’t be as good as ARM processors as far as performance per watt. Mac laptops using x86 processors had the same atrocious battery life, heat issues and fan noise as x86 Windows laptops.
forrestthewoods [3 hidden]5 mins ago
Again, I don’t think that’s a fundamental x86 vs ARM issue. That’s more of a “Intel mobile chipsets from 2019 were terrible” issue.
There’s like 3 big factors at play.
1. x86 vs ARM
2. Apple Silicon Engineers vs others
3. Apple’s TSMC node advantage
I think the x86 vs ARM issue is a relatively small one. At least fundamentally.
worthless-trash [3 hidden]5 mins ago
Incorrect. Android has more games than Linux installations.
scarface_74 [3 hidden]5 mins ago
Pay to win games and other in app purchase monstrosities?
I’m not saying iPhones are any better. It came out in the Epic trial that 90% of Apple’s App Store revenue comes from games
forrestthewoods [3 hidden]5 mins ago
I do not consider Android synonymous with Linux. YMMV.
lucianbr [3 hidden]5 mins ago
> I obviously do have other Linux devices, including an Android phone – so personally I’d really love for it to take over in that market too.
Linus Torvalds. But what does he know, eh?
otabdeveloper4 [3 hidden]5 mins ago
Most likely he doesn't know, as he isn't a phone OEM.
In reality pretty much no Android phone runs a stock upstream Linux. They all have horrible proprietary modified kernels.
So no, Android isn't even Linux in the narrow technical sense of the kernel it runs.
(The Android userland, of course, is an entirely different OS that has nothing at all whatsoever to do with Linux or any other flavor of UNIX, current or historical.)
worthless-trash [3 hidden]5 mins ago
The uname -a will disagree.
immibis [3 hidden]5 mins ago
oh then Internet Explorer is Mozilla because of the User-Agent string
worthless-trash [3 hidden]5 mins ago
Your analogy is faulty because one is lying about its origin.
et1337 [3 hidden]5 mins ago
A lot of these points are opposites of each other, so much so that it seems intentional. We’re to maintain backward compatibility, yet eschew complexity.
We’re to reinvent the wheel for ourselves instead of relying on huge dependency trees, yet new languages and frameworks (the most common form of wheel reinventing) are verboten.
The only way I can think to meet all these demands is for everyone (other than you, of course) to stop writing code.
And I gotta say, a part of me wishes for that at least once a day, but it’s not a part of me I’m proud of.
afro88 [3 hidden]5 mins ago
It's a vent rather than a thesis. There isn't really any logic to it, it's just a list of frustrations said in a poetic way.
In my opinion, vents are fine amongst friends. Catharsis and all that. But in public they stir up a lot of negative energy without anything productive to channel it to (any solutions, calls to action etc). That sucks.
pdimitar [3 hidden]5 mins ago
Sucks or not, it's pretty logical. Venting is not interesting to me if I'm reading something on HN. Or if it is, that's rare.
Also, what you call "negative energy" I would often call "rightful criticism of non-distilled thoughts that contain internal contradictions".
enriquto [3 hidden]5 mins ago
> A lot of these points are opposites of each other, so much so that it seems intentional.
The writing style is reminiscent of these famous texts (not written in jest; you have to understand that most of these statements depend on a context that is for you to provide):
"Property is theft." -- P.J. Proudhon
"Property is liberty." -- P.J. Proudhon
"Property is impossible." -- P.J. Proudhon
"Consistency is the hobgoblin of small minds." -- R.W. Emerson
derangedHorse [3 hidden]5 mins ago
One can eschew complexity without breaking changes. If the initial abstractions are done well enough, a lot of things can be swapped out with minimal breakage. There are also ways to keep existing apis while adding new ones. Outdated apis also don't need to change every time an underlying dependency changes if the abstraction is good enough.
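For example, here is a minimal sketch (Python, with made-up names) of keeping an outdated API alive as a thin wrapper over a newer one, so existing callers never break:

    import warnings

    def fetch_report(source, fmt="json"):
        # The new, more general API (hypothetical).
        return "report from %s as %s" % (source, fmt)

    def get_report(source):
        # The original API, kept as a deprecated shim over the new one.
        warnings.warn("get_report() is deprecated; use fetch_report()",
                      DeprecationWarning, stacklevel=2)
        return fetch_report(source, fmt="json")

    print(get_report("sales-db"))  # old call sites keep working unchanged

The old entry point stays stable while new work happens behind the newer call, which is how many libraries add capability without forcing rewrites.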
siev [3 hidden]5 mins ago
It also reads to me like the contradictions are intentional. Something about software today feeling samey and churned-out.
abathur [3 hidden]5 mins ago
I agree (and I think the ~point is that weighing and being thoughtful about how tradeoffs affect a given project is part of the process).
austin-cheney [3 hidden]5 mins ago
The most direct resolution to the conflict you mention, and this works for just about everything in life is:
Just ask yourself: Why would I want to do that?
When somebody suggests nonsense to you, just ask yourself that one simple question. The answer is always, and I mean ALWAYS, one of these three things:
* evidence
* ethos, like: laws, security protocols, or religious beliefs
* narcissism
At that point it's just a matter of extremely direct facts/proofs or a process of elimination thereof. In the case of bad programmers the answer is almost universally some unspoken notion of comfort/anxiety, which falls under narcissism. That makes sense because if a person cannot perform their job without, fill in the blank here, they will defend their position using bias, logical fallacies they take to heart, social manipulation, shifting emotion, and other nonstarters.
As to your point about reinventing wheels, you are making a self-serving mistake. It's not about you. It's about some artifact or product. In that regard reinventing wheels is acceptable. Frameworks and languages are not the product, at least not your product. Frameworks and languages are merely an enabling factor.
pdimitar [3 hidden]5 mins ago
How idealistic of you to assume people will even be able to agree on what's a sensical and nonsensical suggestion.
OP indeed has mutually exclusive phrases. If we ever get to the "extremely direct facts/proofs" then things get super easy, of course.
99% of the problem when working with people is to even arrive at that stage.
I predict a "well, work somewhere else then" response which I'll just roll my eyes at. You should know that this comes at a big cost for many. And even more cannot afford it. Literally.
austin-cheney [3 hidden]5 mins ago
It’s a personal decision assessment method. Other people are irrelevant. I really believe you did not understand the article or my comment.
pdimitar [3 hidden]5 mins ago
For me to take at face value such a blatant generalization as "I really believe you did not understand the article or my comment", you might want to give an example.
I did not find the article illuminating one bit. I agree with some of the criticisms wholeheartedly, mind you, but the article is just a rant.
austin-cheney [3 hidden]5 mins ago
It’s a well founded rant because most people employed to write software cannot program, communicate at an adult level, or follow simple instructions. The comments here unintentionally qualify this assessment.
pdimitar [3 hidden]5 mins ago
> The comments here unintentionally qualify this assessment.
Which comments might those be? Be concrete.
You also didn't give any example as I asked, you are happy just generalizing so it's not really interesting to engage with you, is what I am finding.
Dansvidania [3 hidden]5 mins ago
Which of the three pillars you list above does this statement fall under?
austin-cheney [3 hidden]5 mins ago
Evidence. It's demonstrable, provable, and quantifiable. It's also why interviewing is so bad.
kryptiskt [3 hidden]5 mins ago
> We are destroying software pushing for rewrites of things that work.
Software written in C continues to be riddled with elementary security holes, despite being written and reviewed by experts. If anything the push to rewrite is too weak, we have known about the dangers for decades at this point.
We aren't destroying software, it was never all that. The software of the 90s was generally janky in a way that would never be tolerated today.
dvhh [3 hidden]5 mins ago
The software does not necessarily need to be written in C (or C++) for these elementary security holes to happen.
blueflow [3 hidden]5 mins ago
It surely had better manpages and documentation tho.
perfmode [3 hidden]5 mins ago
Is someone pushing to rewrite Redis?
thom [3 hidden]5 mins ago
Microsoft created a managed, wire-compatible alternative:
At my first professional software job, where we wrote C, because that was all you could realistically write commercial software in, there was one person on our floor who could do software builds. He used some commercial build tool the company had licensed and he was the only one who knew how to use it or would be allowed to learn how to use it. His customers were every product team in our division --- about 12 of them. We had to petition to get beta releases done. The builds took hours.
I think we're doing fine.
YZF [3 hidden]5 mins ago
What year are we talking about here?
Circa 1982 or so (and some years before) IBM was shipping mainframe software written in Assembly that anyone could build or modify. They had a bug database that customers could access. Around the same era Unix was shipping with source code and all the tooling you needed to build the software and the OS itself.
So maybe compared to some of that we're doing worse.
tptacek [3 hidden]5 mins ago
1998.
YZF [3 hidden]5 mins ago
Circa 1999 I was with a startup. We wrote C and C++ for Windows and embedded platforms. We used source control. Every developer could build everything on their PC. We used Visual Studio.
So I think we knew around that time what patterns were good ones... But sure, lots of software organizations didn't even use source control and failed Joel's test miserably.
EDIT: And sadly enough many orgs today fail Joel's test as well. We forgot some things that help make better software.
tptacek [3 hidden]5 mins ago
The shop I was at was Network Associates (you'd know them now as McAfee).
pvg [3 hidden]5 mins ago
> we knew around that time what patterns were good ones
The source control system I had at one job around that time was a DVCS! At a different one, the source control system had its own filesystem and was generally insane. It had its own full-time maintainer, sort of like tptacek's build person.
The big difference, really, was that all this software cost a lot of money compared to now where it mostly does not.
dgb23 [3 hidden]5 mins ago
This cracked me up. I love hearing those stories, also from my father, where he would talk about "we already did that in the 80's and it was called...", and then he would tell me about some obscure assembler tricks to patch binaries, in order to keep them compatible with some system I never heard about.
But still, I think we can do better. That story you shared highlights a gross inefficiency and diminishing of agency that comes from dependencies.
phendrenad2 [3 hidden]5 mins ago
I kind of agree. I think software has a certain amount of "badness" that will always exist; it's a mathematical equilibrium. There are a myriad of things that can make your process bad, and if you fix all of them, you'll never ship. The list Pope gives here covers the most common issues, but not all teams will check the whole list.
skydhash [3 hidden]5 mins ago
> I think software has a certain amount of "badness" that will always exist
It's fitting a messy, real-world process to the virtual, reduced (because of formalism) computing world, backed by fallible, messy hardware components, through an error-prone, misunderstanding-prone practice of programming.
So many failure points, and no one is willing to bear the costs to reduce them.
aqueueaqueue [3 hidden]5 mins ago
We are doing badly. I bet he was paid enough to buy a good house and it was a low-stress job!
dilyevsky [3 hidden]5 mins ago
Same here, except Motorola circa 2007. The build took hours and the compiler was licensed from Wind River. We had to schedule time to do builds and test runs using some Rational POS software, and we had a dedicated engineer in charge of doing merges on the release branch.
mewpmewp2 [3 hidden]5 mins ago
All the statements in that post are trade offs. In all cases you are sacrificing something to gain something else. Therefore in a way you are always "destroying" something.
Sometimes it is valid to not reinvent the wheel. Sometimes the wheel needs to be reinvented to learn. Both actions are done. Sometimes the decision was right. Sometimes not.
Overall, we are creating more things than we are destroying. I don't see the need to take a negative stance.
gmueckl [3 hidden]5 mins ago
"Destroying software" is broader than the creation of new, working software artifacts in the moment. The phrase refers to changes in engineering culture in software and it's long term effects, not the immediate successes.
Writing a new greenfield project using 10,000 npm dependencies for an Electron-based front end is shockingly easy. But how do you keep that running for the next 15 years? Did the project need npm? Or a web browser? How do all the layers between the language of choice and the bare metal actually behave, and can you reason about that aggregate accurately?
The field has come to a point where a lot of projects are set up with too many complexities that are expedient in the short term and liabilities in the long term.
The current generation of junior devs grows up in this environment. They learn these mistakes as "the right thing to do" when they are questionable and require constant self-reflection and reevaluation. We do not propagate enough of a hacking culture that values efficiency and simplicity in a way that leads to simple, efficient, stable, reliable and maintainable software. On a spectrum from high-quality craftsmanship to mass-produced single-use crap, software is trending too much toward the latter. It's always a spectrum, not a binary choice. But as a profession, we aren't keeping the right balance overall.
tremendoussss [3 hidden]5 mins ago
I've been a backend engineer for about 10 years, with my last job doing an aws lambda stack.
I started a job in manufacturing a few months ago, and having to think that this has to work for the next 20 years has been a completely different challenge. I don't even trust npm to be able to survive that, so web stuff has been an extra challenge. I landed on Lit web components, just bringing them in via a local CDN.
mewpmewp2 [3 hidden]5 mins ago
The world is full of abstractions at many different levels. Something being at a lower level doesn't inherently make it superior. You can go in either direction on the spectrum. Do you know exactly how the atoms that computers are made of behave? There are plenty of people working on all sorts of abstractions; new abstractions appear, and demand for lower-level work increases when it is needed. You could say that as more abstractions are built on top of lower levels, the field's average level of abstraction rises, but that is the natural way to evolve. Abstractions allow you to build faster, and the abstractions are possible because of lower-level elements. In the end, if you are measuring the industry's average level of abstraction, you can draw the line arbitrarily. You could include the people who use website builders and calculate the average to be even higher. We need people working at all different levels of abstraction. We could divide the groups into two naming conventions, for lower-level and higher-level engineers, and then technically you could go back to calculating that the average is still where it used to be.
I definitely use npm (or rather pnpm) because I know it will allow me to build whatever I want much faster.
gmueckl [3 hidden]5 mins ago
Abstractions are only part of the whole issue. Maybe I focused too much on that. But I'll argue that point once more.
How much complexity is actually required? What changed in software in the last 20 years so that the additional bloat and complexity is actually required? Hardware has become more powerful. This should make software less reliant on complicated optimizations and thus simpler. The opposite is happening. Why? What groundbreaking new features are we adding to software today that we didn't 20 years ago? User experience hasn't improved that much on average. In fact, measurements show that systems are responding more sluggishly on average.
Intrinsic complexity of the problems that software can solve hasn't really changed much as far as I can see. We add towers of accidental complexity on top that mostly aren't helpful. Those need to be questioned constantly. That isn't happening to the extent that it should. Web-based stuff is the poster child of that culture and it's hugely detrimental.
mfuzzey [3 hidden]5 mins ago
> What changed in software in the last 20 years
Backends handling tens / hundreds of thousands or more of concurrent users rather than locally deployed software on a single machine or a small server with a few 10s of users?
Mobile?
Integration with other software / ecosystems?
Real time colaboration amoung users rather than single user document based models?
Security?
Cryptography?
Constant upgrades over the web rather than shipping CDs once a year?
I'll pass on AI for the moment as it's probably a bit too recent.
gmueckl [3 hidden]5 mins ago
Why is a single, scaled-up backend required in products that effectively only have multi-tenancy?
Software can be distributed onto client machines and kept up to date. That was first solved with Linux package managers more than 25 years ago.
Before mobile we had a wide range of desktop operating systems with their own warts.
TLS 1.0 was introduced in 1999, so cryptography was already a concern back then.
So what is really new?
gopher_space [3 hidden]5 mins ago
It's not a personal value judgment, it's a debugging issue.
dahart [3 hidden]5 mins ago
Agreed and well said. Furthermore, a lot of the statements in the post are making opposing tradeoffs when you put them together. A bunch of them value experimenting and breaking things, and a bunch of others value using what we already have and not breaking things.
A few of them aren’t decisions any individuals have control over. Most coders aren’t jumping onto new languages and frameworks all the time; that’s an emergent group behavior, a result of there being a very large and growing number of programmers. There’s no reason to think it will ever change, nor that it’s a bad thing. And regardless, there’s no way to control it.
There are multiple reasons people write software fast rather than high quality. Because it’s a time/utility tradeoff, and time is valuable. It’s just a fact that software quality sometimes does not matter. It may not matter when learning or doing research, it may not matter for solo projects, it may not matter for one-off results, and it may not matter when software errors have low or no consequence. Often it’s a business decision, not an engineering decision; to a business, time really is money and the business wants engineering to maximize the utility/time ratio and not rabbit hole on the minutiae of craftsmanship that will not affect customers or sales.
Sometimes quality matters and time is well spent. Sometimes individuals and businesses get it wrong. But not always.
ryandrake [3 hidden]5 mins ago
I guess the rant should be renamed "business is destroying software" because several of the tradeoffs he mentions can be root caused to a commercial entity cutting corners and sacrificing everything on the altar of "developer time" in order to save money. Only a business would come up with the madness of "Move Fast And Break Things."
martpie [3 hidden]5 mins ago
I mean, I hate business as much as any other engineer, but what’s the point of software without a business? (excl. the beauty of open source)
antirez [3 hidden]5 mins ago
> Overall as a whole we are creating things, more than we are destroying. I don't see the need to take a negative stance.
Fair point: each one of us can think about the balance and understand if it's positive or negative. But an important exercise must be done here: totally removing AI from the complexity side.
Most of the results that neural networks gave us, given the hardware, could be recreated with a handful of lines of code. It is evident every day that small teams can rewrite training / inference engines from scratch and so forth. So AI must be removed from the positive (if you believe it's positive; I do) output of the complexities of recent software.
So if you remove AI, since it belongs to the other side, what exactly has the "complicated software world" given us in recent times?
mewpmewp2 [3 hidden]5 mins ago
If we discard the AI, which I don't think we should, but if we do - my life has been enriched a lot in terms of doing things I want to do vs things I don't want to. Very quick deliveries, I never have to go to a physical store, digital government online services, never having to wait in any queue, ability to search and find answers without having to go to libraries or know specific people. Online films, tv shows on demand, without ads. There are tons of those things that I feel have made my life so much easier.
LocalH [3 hidden]5 mins ago
The services that enable the things you desire also create harm (Amazon's problems are well documented, and digital government services often create a divide that excludes freedom-minded individuals who don't use a "mainstream" OS, to name a couple).
AI has the potential to make the situation much worse, as many laypeople give it an air of "authority" or "correctness" that it's not really owed. If we're not careful, we'll have an AI-driven Idiocracy, where people become so moronic that nobody can do anything about the system when it takes a harmful action.
mewpmewp2 [3 hidden]5 mins ago
Sure, there are trade offs and risks to everything and everything new. Cars made us move faster, but can pollute and cause injury or death. But summing all of those things together, I would not pick any other time before now to live. And same with software development.
azemetre [3 hidden]5 mins ago
I'm sure factory owners said the same thing in England in the early 1800s.
It needs to be noted that the average person's lot didn't improve until 150 years later. There's no reason why technology can't be decided by democratic means rather than shoved in our faces by people that just want to accumulate wealth and power.
willturman [3 hidden]5 mins ago
What more could someone want than instantaneous consumption and on-demand video? We’re truly living in a frictionless utopia.
mewpmewp2 [3 hidden]5 mins ago
I may have worded it poorly, but everyone can choose the content they consume and the activities they do. You can choose mindless things or things that allow you to learn about and understand the world. Both are easier.
antirez [3 hidden]5 mins ago
Why is this a result of software complexity? I'm not in favor of less capable software.
mewpmewp2 [3 hidden]5 mins ago
I am not sure I understand you then. The post was saying we are destroying something, but I feel like we are constantly gaining and that things are getting better.
0xbadcafebee [3 hidden]5 mins ago
All of those things have been happening for over a decade. There is no actual discipline of software design today. It's just people googling shit, copy-and-pasting, and praying.
I often work with people who refuse to use software unless it's so well known that they can google for stackoverflow answers or blog walkthroughs. Not because well-known software is stable or feature-filled; no, they literally are just afraid of having to solve a problem themselves. I'm not sure they could do their jobs without canned answers and walkthroughs. (and I'm not even talking about AI)
This is why I keep saying we need a professional standards body. Somebody needs to draw a line in the sand and at least pretend to be real adults. There needs to be an authoritative body that determines the acceptable way to do this job. Not just reading random people's blogs, or skimming forums for sycophants making popular opinions look like the correct ones. Not just doing whatever a person feels like, with their own personal philosophy and justification. There needs to be a minimum standard, at the very least. Ideally also design criteria, standard parts, and a baseline of tolerances to at least have the tiniest inkling if something is going to fall over as soon as someone touches it. Even more should be required for actual safety systems, or things that impact the lives of millions. And the safety-critical stuff should have to be inspected, just like buildings and cars and food and electric devices are.
The lunatics are running the asylum, so that's not happening anytime soon. It will take a long series of disasters for the government to threaten to regulate our jobs, and then we will finally get off our asses and do what we should have long ago.
caseyy [3 hidden]5 mins ago
I agree with the frustration, but I think heavily regulated professions often copy+paste even more. See: modern Western medicine, where much of being a general physician involves following a flow chart. You get really bad outcomes from it too.
I’d like to have standard professional certification because I could use it as proof of the effort I put into understanding software engineering that many ICs have not. But I think that many people have “that’ll do it” values and whatever professional context you put them in, they will do the worst possible acceptable job. The best you can do is not hire them and we try to do that already — with LeetCode, engineering interviews, and so on. That effort does work when companies make it.
anon-3988 [3 hidden]5 mins ago
A standards body standardized C++; clearly, having a standard doesn't help.
rcxdude [3 hidden]5 mins ago
The design work in fields which are heavily regulated like that is even more copy-and-pasted. Not only will the average engineer be afraid of solving problems themselves, but anyone who is willing to do it will be actively discouraged by the processes from doing so, even if the copy-and-paste answers have severe, known flaws. The grass is not greener on the other side here.
(Safety critical work is, in fact, inspected and accredited like you would wish, and I have seen the ugly, ugly, terrifying results inside the sausage factory. It is not a solution for people who don't care or don't have a clue, in fact it empowers them)
0xbadcafebee [3 hidden]5 mins ago
Oh for sure there's some bullshit out there. The self-attestations of the DoD alone are laughable. But I have also seen a number of critical systems with no inspection. Water, power, financial, health care, emergency services, etc. The kind of shit that from a National Security perspective we should have some eyes on, but don't.
nthingtohide [3 hidden]5 mins ago
This reminds me of Jonathan Blow's talk. Software decays just like everything else if we don't tend to it.
Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
Software technology is in decline despite appearances of progress. While hardware improvements and machine learning create an illusion of advancement, software's fundamental robustness and reliability are deteriorating. Modern software development has become unnecessarily complex, with excessive abstraction layers making simple tasks difficult. This complexity reduces programmer productivity and hinders knowledge transfer between generations. Society has grown to accept buggy, unreliable software as normal. Unless active steps are taken to simplify software systems across all levels, from operating systems to development tools, civilization faces the risk of significant technological regression similar to historical collapses.
_hao [3 hidden]5 mins ago
I think that talk might be Jonathan Blow's most important work to date, actually. I love Braid and The Witness, but "Preventing the Collapse of Civilization" managed to articulate what my circle of devs and I have talked about and discussed for a long time but were never quite able to put into words. I'm very grateful for people like him and others like Casey Muratori, Mike Acton, etc. who continue to point out this very real danger over (at least) the last decade.
Unfortunately my stance is that fundamentally things won't change until we get hit with some actual hardware limitations again. Most devs and people in general prefer a semblance of a working solution quickly for short-term gains rather than spending the actual time that's needed to create something of high quality that performs well and will work for the next 30 years. It's quite a sad state of affairs.
With that said I'm generally optimistic. There is a small niche community of people that does actually care about these things. Probably won't take over the world, but the light of wisdom won't be lost!
cageface [3 hidden]5 mins ago
You can get all the software quality you want if you're willing to pay for it.
Users have now been taught that $10 is a lot to pay for an app and the result is a lot of buggy, slow software.
0dayz [3 hidden]5 mins ago
The problem is that we generally don't have a good track record of what good software is valued at; it used to be around $300-500, and with companies being incentivized to go subscription-based, who knows intuitively what that is.
dartos [3 hidden]5 mins ago
We’ve also taught users that extremely expensive software like SAP and Blackboard is also crap (at least from an end user’s perspective)
cageface [3 hidden]5 mins ago
This is the inevitable result of decades of feature creep in software that tries to be too general and meet every enterprise edge case.
dartos [3 hidden]5 mins ago
Yep, but the end user doesn’t care.
Those big software packages are sold to admins anyway.
59nadir [3 hidden]5 mins ago
I work in a two man team making software that is 500-1000 times faster than the competition and we sell it at ~40% of their price. Granted, this is in a niche market but I would be very careful in stating price/costs are the entire picture here. Most developers, even if you suddenly made performance a priority (not even top priority, mind you), wouldn't know how to actually achieve much of anything.
Realistically only about 5% or so of my former colleagues could take on performance as a priority even if you said to them that they shouldn't do outright wasteful things and just try to minimize slowness instead of optimizing, because their entire careers have been spent optimizing only for programmer satisfaction (and no, this does not intrinsically mean "simplicity", they are orthogonal).
cageface [3 hidden]5 mins ago
If you really think what you're doing can be easily generalized then you're leaving a lot of money on the table by not doing it.
59nadir [3 hidden]5 mins ago
Generalized? Probably not. Replicated to a much higher degree than people think? I think so. It wouldn't matter much to me personally outside of the ability to get hired because I have no desire to create some massive product and make that my business. My business is producing better, faster and cheaper software for people and turning that into opportunity to pay for things I want to do, like making games.
Disclaimer: Take everything below with a grain of salt. I think you're right that if this was an easy road to take, people would already be doing it in droves... But, I also think that most people lack the skill and wisdom to execute the below, which is perhaps a cynical view of things, but it's the one I have nonetheless.
The reason I think most software can be faster, better and cheaper is this:
1. Most software is developed with too many people, this is a massive drag on productivity and costs.
2. Developers are generally overpaid and US developers especially so, this compounds for terrible results with #1. This is particularly bad since most developers are really only gluing libraries together and are unable to write those libraries themselves, because they've never had to actually write their own things.
3. Most software is developed as if dependencies have no cost, when they present some of the highest cost-over-time vectors. Dependencies are technical debt more than anything else; you're borrowing against the future understanding of your system which impacts development speed, maintenance and understanding the characteristics of your final product. Not only that; many dependencies are so cumbersome that the work associated with integrating them even in the beginning is actually more costly than simply making the thing you needed.
4. Most software is built with ideas that are detrimental to understanding, development speed and maintenance: Both OOP and FP are overused and treated as guiding lights in development, which leads to poor results over time. I say this as someone who has worked with "functional" languages and FP as a style for over 10 years. Just about the only useful part of the FP playbook is to consider making functions pure because that's nice. FP as a style is not as bad for understanding as classic OOP is, mind you, but it's generally terrible for performance and even the best of the best environments for it are awful in terms of performance. FP code of the more "extreme" kind (Haskell, my dearest) is also (unfortunately) sometimes very detrimental to understanding.
cageface [3 hidden]5 mins ago
I think I'd need a lot more than one claim with no evidence that this is true but I appreciate the thoughtful response all the same.
Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.
59nadir [3 hidden]5 mins ago
I don't think either of us needs to convince the other of anything, I'm mostly outlining this here so that maybe someone is actually inspired to think critically about some of these things, especially the workforce bits and the cost of dependencies.
> Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.
This seems like a generous take to me, but I suppose it's usually better for the soul to assume the best when it comes to people. Companies with explicit performance requirements will of course self-select for people capable of actually considering performance (or die), but I don't take that to mean that the rest of the software development workforce is actually able to, because I've seen so, so many examples of the exact opposite.
bathtub365 [3 hidden]5 mins ago
Oftentimes there isn’t a need for something to work for the next 30 years, as the business will change in a much shorter timeframe and software exists to serve the business. While I agree that software quality varies wildly, if the business can’t get off the ground because the software isn’t ready, the software team will quickly stop existing along with the rest of the business.
1dom [3 hidden]5 mins ago
The way I interpret the decline of software is that part of the problem is because more software has been written with the mentality that it doesn't need to last 30 years.
It's a self-perpetuating issue: people build stuff saying "it won't/can't last 30 years" for various reasons (money, time, skill, resources, expectations, company culture, business landscape etc). So then software doesn't last 30 years for those same various reasons.
The idea that systems used to last longer is probably survivorship bias. However, software that has survived decades was probably created with a completely different set of methodologies, resources and incentives than modern software.
BobbyTables2 [3 hidden]5 mins ago
Practically speaking, software should at least outlast one’s employment at a particular company.
Writing bad code to just get past the next sprint or release is madness.
naasking [3 hidden]5 mins ago
Writing bad code for any reason is bad; the question is whether you can write good code to get the next release out. Or are you saying there's no such thing as good code that meets the next release's requirements?
BobbyTables2 [3 hidden]5 mins ago
I think many people write sloppy code blindly thinking they won’t have to worry about it again.
bigiain [3 hidden]5 mins ago
That's kinda the whole business model of outsourced development teams...
eastbound [3 hidden]5 mins ago
Linux and Java have survived since 1995-1998, which is basically the dawn of time.
We swap JS frameworks constantly, but when we’ll reach a good paradigm, we’ll stick with it. At one point, React might be the final framework, or rather, one of its descendants.
renewedrebecca [3 hidden]5 mins ago
Or just throw out modern web app development, with its insane ways of trying to work around the fact that JS is a broken paradigm, and start over.
Develop a good vm that can either be part of the browser or can be easily launched from it and get all of the browser/OS makers on the same page.
We only have what we have because of a lack of real leadership.
maccard [3 hidden]5 mins ago
If history has taught us anything in computing, it’s that breaking backwards compatibility or rewriting isn’t a panacea. See Perl, Python, C++, Longhorn, Mosaic for example. My 4K, 120Hz monitor with 32-bit color depth and 16 4.5GHz CPU cores attached to it still renders a terminal at 80 characters wide and chokes on cat bigfile.bin because we’re that married to not changing things.
ajoseps [3 hidden]5 mins ago
Was there a large backwards-compatibility break for C++? I was there for some of the Python 2/3 transition, but I’ve always thought C++ was adamantly backwards compatible (minor exceptions being stuff like auto_ptr/unique_ptr, which I don’t think was as big a break as Python’s)
maccard [3 hidden]5 mins ago
Sorry - I meant that C++ as an “improved” C never managed to remove C’s foothold; it just fractured the ecosystem.
When the C++11 ABI break happened it was a big pain in the ass, but once MSVC decided in 2015 that they were going to stop breaking ABI, I think it was the stability that C++ needed to fossilize…
Spoken as a C++ fan.
fragmede [3 hidden]5 mins ago
cat nyan.jpeg works if you enable sixel support. Better tools are out there, don't wait for someone else to package it up for you!
makapuf [3 hidden]5 mins ago
Cat has a JPEG decoder now? There are terminal image viewers using sixel or the kitty image protocol, but cat I'm not sure.
fragmede [3 hidden]5 mins ago
by "enable sixel support" I mean alias cat=sixcat, which I'm sure breaks some sort of rule
icedchai [3 hidden]5 mins ago
We had a VM almost 30 years ago. Remember Java applets, then Web Start? It was too bloated for that era. These days, you wouldn't even notice.
fragmede [3 hidden]5 mins ago
and maybe that's the lesson. They were too early.
karmakaze [3 hidden]5 mins ago
There are many large companies running COBOL programs in production that wish software didn't last 30 years.
MonkeyClub [3 hidden]5 mins ago
Their prospective replacements didn't last the rewrite, though.
dijit [3 hidden]5 mins ago
I don’t mind this philosophy, but in aggregate I think very slow applications that are cumbersome and widely deployed have cost humanity many human lifetimes because a handful of developers were not able (or not given the opportunity) to optimise them even a little.
I am aware that capitalism essentially dictates externalising cost as much as possible, but with software, for much the same reason capitalism loves it (a copy is cheap and can be sold at full price despite being constructed just once), these externalities can scale exponentially.
Teams in particular is an outlier as in most cases it is essentially forced on people.
Wolfenstein98k [3 hidden]5 mins ago
Very shallow definition of "capitalism".
It doesn't dictate externalising cost as much as possible unless you have a very short-term view.
Short-term view businesses get eaten pretty quickly in a free capitalist system.
People forget that half of capitalism's advantage is the "creative destruction" part - if businesses are allowed to fail, capitalism works well and creates net value.
cassianoleal [3 hidden]5 mins ago
What defines a "free capitalist system", and where does it exist?
asmor [3 hidden]5 mins ago
That is untrue for almost all software written outside of companies whose primary product is tech or software. I work for a publisher, and while there's a ton of disposable microsites and project work (that nobody ever cleans up or documents, so you never know if it's okay to remove), there are also ancient monoliths written in forgotten programming languages that are so important for production and have so many hidden dependencies and unknown users ("random FTP server" being my favorite) that you can barely put on new paint.
I'm writing software with the assumption that it'll be used for at least 30 years there, with a lot of guard rails and transparency/observability mechanisms, because I know the next person working there will thank me.
dinkumthinkum [3 hidden]5 mins ago
It’s interesting that all three of the people you mention are very concerned with performance, something most programmers don’t even think about anymore or think they aren’t supposed to.
FridgeSeal [3 hidden]5 mins ago
As a group, we have trained many, many programmers out of even considering performance with the proliferation of quotes like “premature optimisation is the root of all evil” and ideas like “who cares, just get a faster computer/wait for hardware”.
Premature optimisation is bad, but there are now so many devs who don’t do _any_ at all. They don’t improve any existing code, they’re not writing software that is amenable to later optimisation, and inefficient architectures and unnecessary busywork abound.
Are we surprised that years of “product first, bug fixes later, performance almost never” has left us with an ecosystem that is a disaster?
anymouse123456 [3 hidden]5 mins ago
As someone who used to quote this quote, I've come to believe this one line has caused nearly as much damage as the invention of NULL.
_hao [3 hidden]5 mins ago
Yes, people think that writing performant software is something that's nice-to-have when in fact if you program with the intention of things being performant then that branches out to better overall design, better user experience, better quality etc. It means you actually care about what you're doing. Most people don't care and the results are apparent. We're surrounded by waste everywhere. Bad software wastes resources like money, electricity and most importantly - time.
The fact that people don't stay long enough in companies or work on a long project themselves to see the fruits of their labour down the line is a point that is discussed in some of the other comments here in this thread. I agree with it as well. In general if you job hop a lot you won't see the after effects of your actions. And the industry is such that if you want to get paid, you need to move. To reiterate - it's a sad state of affairs.
ChrisMarshallNY [3 hidden]5 mins ago
> Society has grown to accept buggy, unreliable software as normal.
That’s really the entirety of the issue, right there.
People aren’t just accepting bad software, they’re paying for it, which incentivizes a race to the bottom.
It’s like “The War on Drugs,” that looks only at the suppliers, without ever considering the “pull,” created by the consumers.
As long as there are people willing to pay for crap, there will be companies that will make and sell crap.
Not only that, companies that try to make “not-crap,” will be driven out of business.
Animats [3 hidden]5 mins ago
> Society has grown to accept buggy, unreliable software as normal.
That's easy to stop. Disallow warranty disclaimers.
immibis [3 hidden]5 mins ago
Bye bye FOSS.
The EU has something like this, the Cyber Resilience Act, and it has an exception for FOSS.
turdprincess [3 hidden]5 mins ago
As an opposite view point I work on a 10 year old legacy iOS app which has almost no abstractions and it’s a huge mess. Most classes are many thousands of lines long and mix every concern together.
We are fixing it up by refactoring - many through adding abstractions.
I’m sure code with bad abstractions can scale poorly, but I’m not clear how code without abstractions can scale at all.
epolanski [3 hidden]5 mins ago
> Most classes are many thousands of lines long and mix every concern together.
That's quite unrelated to abstractions. It's just poorly written code, for whatever reasons may have led there.
turdprincess [3 hidden]5 mins ago
Abstraction is an abstract word, but as an example, I would consider the process of refactoring a big piece of code which mixed together API requests and view presentation into 2 classes (a presenter and an API client) as adding abstraction, especially if those new components are defined by interfaces.
And I’d rather work in a codebase where the owners are constantly doing such refactors than not.
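Roughly, the shape of that refactor (sketched here in Python rather than Swift, with hypothetical names) is that the presenter depends only on an interface and the networking detail lives behind it:

    from typing import Protocol

    class UserClient(Protocol):
        def fetch_username(self, user_id: int) -> str: ...

    class HttpUserClient:
        # The networking detail lives behind the interface (stubbed out here).
        def fetch_username(self, user_id: int) -> str:
            return "user-%d" % user_id

    class UserPresenter:
        # The presenter only knows the interface, never the transport.
        def __init__(self, client: UserClient):
            self.client = client

        def title_for(self, user_id: int) -> str:
            return "Profile: " + self.client.fetch_username(user_id)

    print(UserPresenter(HttpUserClient()).title_for(42))  # Profile: user-42

Swapping in a fake client for tests then costs nothing, which is most of the value of defining the components by interfaces.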
sethammons [3 hidden]5 mins ago
I'd call that separation of concerns. I call that design more than abstraction.
danparsonson [3 hidden]5 mins ago
But how do you define "poorly written"?
edit: if you feel the need to downvote, feel free to share why you think my question is problematic - I think that "poorly written" to describe excessive code file sizes is such a wooly phrase as to be basically useless in a discussion like this, and when examined more closely usually comes down to "poorly abstracted". But I stand to be corrected.
datavirtue [3 hidden]5 mins ago
Difficult to read, loaded with surprises.
nradov [3 hidden]5 mins ago
The notion that software defects could destroy civilization is just so silly. I think some people have the impression that the quality of tools and technologies was somehow better before. But since the beginning of the industrial age, most of our stuff has been kind of crap. You might see good quality stuff preserved by enthusiasts or museums but that's survivorship bias and what most people used was worse. People muddle through somehow.
bdangubic [3 hidden]5 mins ago
except everything fucking runs on software now mate…
nradov [3 hidden]5 mins ago
It used to be everything ran on mechanical machines and analog electronics. That stuff failed all the fucking time. We're no worse off today with software.
bdangubic [3 hidden]5 mins ago
I wonder if there is a mechanical thing that can launch a nuclear missile or fuck up X billion devices that rely on GPS…
mechanical shit was local, electronic shit is global, not comparable at all
SamPatt [3 hidden]5 mins ago
Dropping nukes on Japan was about as mechanical as you can get.
The first half of the 20th century excelled in mechanical destruction. Thus far, the electronic age has been much less bloody.
epr [3 hidden]5 mins ago
Do you really believe what you're saying, or are you just trying to "win" the argument? The nukes functioned as intended...
SamPatt [3 hidden]5 mins ago
I'm responding to the implication that "mechanical shit" is local and thus less damaging.
Since they mentioned nukes it seemed like an obvious example where local things can be catastrophic.
The theoretical risk of electronic things malfunctioning in some global way that they mentioned has never resulted in any nuclear weapons being deployed, but we've actually seen the local mechanical approach they disregard be devastating.
bdangubic [3 hidden]5 mins ago
it is remarkable we are comparing this but hey, we’ve done crazier things on HN :)
firesteelrain [3 hidden]5 mins ago
The stuff you first mention is held to a much higher standard and regulatory process.
washadjeffmad [3 hidden]5 mins ago
That's a feature. Permanent systemization of short term worldviews and lenses is pretty horrifying. That's how those techno-theocratic civilizations in pop culture happen.
Imagine if a company had been able to systemize the whims of a paranoid regime, allowing them to track and spy on their citizens with impunity in secret, and the population became so inured to the idea that it became an accepted facet of their culture.
Or what if a political party dominated a state and systematized a way to identify and detect oppositional thought, stamping it out before a counterculture could ever arise. Maybe that thought was tied to a particular religious, ethnic, and/or cultural group.
What if these companies are here today, selling those products to the highest (nation-state) bidders, and the methods they're employing to keep the conceptual wheels turning at scale rely on filtering that selects for candidates who will gladly jump through abstract hoops set before them, concerned only with the how and never the why of what they're doing.
danparsonson [3 hidden]5 mins ago
I think this is a straw man personally.
Think about what happens when two people video call each other from opposite sides of the world. How many layers of hardware and software abstraction are they passing through, to have a reliable encrypted video conversation on a handheld computer where the majority of the lag is due to the speed of light? How much of that would you like to remove?
I would venture an alternative bogeyman - "move fast and break things" AKA the drive for profits. It's perfectly possible (as illustrated above) to reliably extract great performance from a tower of abstractions, while building those abstractions in a way that empowers developers; what's usually missing is the will to spend time and money doing it.
thefz [3 hidden]5 mins ago
I can't take Blow seriously after his meltdown about how difficult it is to test games on different OSes, while some developers have already released on multiple platforms... all as a one-man band (e.g. https://www.executionunit.com/blog/2019/01/02/how-i-support-...).
As an aside, he's treated like a celebrity in the game developer niche and I can't understand why.
the_mitsuhiko [3 hidden]5 mins ago
It can be difficult and at the same time be possible. Clearly he also released his games on multiple platforms. Braid I think was everywhere from Xbox 360, Windows, Mac, Linux, Switch and mobile phones.
JasserInicide [3 hidden]5 mins ago
He has some good takes (like the talk he gave in the OP) but also has some questionable ones. How he feels on work-life balance comes to mind. He seems to legitimately hate having to employ other people to help create his games.
cableshaft [3 hidden]5 mins ago
Just because one person is willing to do everything needed to test on all platforms doesn't mean everyone should therefore be willing to put the time and effort into it.
Depending on what tech you use, it can be easier or harder to do as well. I'm making a game with Love2D now, which has made supporting Mac and Linux rather trivial so far, although I've run into challenges with mobile platforms and the web even though it supports them (it does work, but it takes more low-level glue code to support native phone features, and the web support doesn't seem to be well maintained and my game is currently throwing WebAssembly errors when I try to run it).
And my previous game (which is on the backburner for now) was made with Monogame, and while that technically has support for Mac and Linux (as well as mobile), I've had quite a few issues even just getting the Mac version working well, like issues with resolution, rendering 3D properly, getting shaders not to break, etc. And they haven't kept up with the latest Mac updates the past few years and have had to make a special push to try to get it caught back up. I've probably sunk a good 20+ hours trying to get it working well before putting that aside to work on the actual game again and I still might have to rearchitect things a bunch in order to get that working.
Meanwhile Unity would probably be pretty dirt simple to port, for the most part, but it comes with other tradeoffs, like not being open source, and trying to pull a stunt a couple years ago where they pulled the rug out from under developers with changing licensing to something aggressive (that convinced other developers to port their games away from the platform), etc.
And there's Godot, which seems to be getting more support again (which is great; I even considered it for my current game, I just like coding in Love2D a bit better), but if you ever want your game on consoles you have to pay a third party to do the console ports for you.
The guy you linked makes their own engine (and to be fair, so does Jonathan Blow, who you're critiquing), which is great, but not everyone wants to get that low level. I would rather spend more time focusing on building the games themselves, which is already hard enough, rather than spending all that time building an engine.
It was for that reason that I spent several years focused on board game design instead (as I can make a working game by drawing on index cards and grabbing some colored plastic cubes in less than an hour), although that has its own frustrations: significant hurdles to getting your game signed by publishers as an unknown designer (I did get one signed four years ago, and it's still not released yet), and large financial risks taken on for manufacturing and distribution.
Edit: Also the person you linked to isn't even sure it was financially worth it to support all of those platforms, they just do it for other reasons:
"Do you make your money back? It’s hard to say definitely which could mean no. Linux and macOS sales are low so the direct rewards are also low. ...For Smith and Winston [their current game] right now I don’t think it’s been worth it financially (but it will probably make it’s money back on Launch)"
ryandrake [3 hidden]5 mins ago
Link to the meltdown? I could use a good chuckle this morning.
jhatemyjob [3 hidden]5 mins ago
Look up Indie Game: The Movie
thefz [3 hidden]5 mins ago
Is that the documentary in which Phil Fish has a part on FEZ?
efnx [3 hidden]5 mins ago
Yes. There's also Indie Game 2, which is OK.
jack_h [3 hidden]5 mins ago
> Software technology is in decline despite appearances of progress. While hardware improvements and machine learning create an illusion of advancement, software's fundamental robustness and reliability are deteriorating. Modern software development has become unnecessarily complex, with excessive abstraction layers making simple tasks difficult. This complexity reduces programmer productivity and hinders knowledge transfer between generations. Society has grown to accept buggy, unreliable software as normal. Unless active steps are taken to simplify software systems across all levels, from operating systems to development tools, civilization faces the risk of significant technological regression similar to historical collapses.
I haven't watched that talk by Blow yet so maybe he covers my concern.
I think you have to be mindful of incentive structures and constraints. There's a reason the industry went down the path that it did, and if you don't address that directly you are doomed to failure. Consumers want more features, the business demands more stuff to increase its customer base, and the software developers are stuck attempting to meet demand.
On one hand you can invent everything yourself and do away with abstractions. Since I'm in the embedded space I know what this looks like. It is very "productive" in the sense that developers are slinging a lot of code. It isn't maintainable, though, and eventually it becomes a huge problem. First, no one has enough knowledge to really implement everything to the point of it being robust and bug free. This goes against specialization. How many mechanical engineers are designing their own custom molding machines in order to make parts? Basically none; they all use mold houses. How many electrical engineers are designing their own custom PCB(A) processing machines or ICs/components? Again, basically none. It is financially impossible. Only in software do I regularly see this sentiment. Granted, these aren't perfect 1-to-1 analogies, but hopefully they get the idea across.
On the other hand you can go down the route of abstractions. This is really what market forces have incentivized. This also has plenty of issues, which are being discussed here.
One thought that I've had, admittedly not fully considered, is that perhaps F/OSS is acting negatively on software in general. In other engineering disciplines there is a cost associated with what they do. You pay someone to make the molds, the parts from the molds, etc., and it's generally quite expensive. With software, the upfront cost of adopting yet another open source library is zero to the business. That is, there is no effective feedback mechanism of "if we adopt X we need to pay $Y". Like I said, I haven't fully thought this through, but if the cost of software is artificially low, that would seem to indicate that the business, and by extension its customers, don't see the true cost and are themselves incentivized to ask for more at an artificially low price, thus leading to the issues we are currently seeing. Now don't misread me, I love open source software and have nothing but respect for its developers; I've even committed to my fair share of open source projects. As I've learned more about economics I've been trying to view this through the lens of resource allocation, though, and it has led me to this thought.
caseyy [3 hidden]5 mins ago
Interesting. My experience is that bulky abstraction layers are harder to maintain than our own software.
In game development, whenever we go with highly abstract middleware, it always ends up limiting us in what we can do, at what level of performance, how much we can steer it towards our hardware targets, and similar. Moreover, when teams become so lean that they can only do high level programming and something breaks close to the metal, I’ve seen even “senior” programmers in the AAA industry flail around with no idea how to debug it, and no skills to understand the low level code.
I've seen gameplay programmers who don't understand RAII and graphics programmers who don't know how to draw a shape with OpenGL. Those are examples of core discipline knowledge lost in the games industry. In other words, what we have now we might no longer know how to build from scratch. Or at least most software engineers in the industry wouldn't. It cannot end well.
Building your own in my exp is a better idea — then you can at least always steer it, improve and evolve it, and fix it. And you don’t accidentally build companies with knowledge a mile wide and an inch deep, which genuinely cannot ship innovative products (technically it is impossible).
FridgeSeal [3 hidden]5 mins ago
> Consumers want more features
I'm not even sure this is true anymore. We get new features foisted on us most of the time.
MonkeyClub [3 hidden]5 mins ago
> Consumers want more features
I no longer think that's true. Instead, I think consumers want reliability, but more features is a way to justify subscription pricing segregation and increases.
paulryanrogers [3 hidden]5 mins ago
Everyone has a different tipping point. But generally I see folks want more features and don't value reliability unless it's something they use really often and has no workaround.
I play games with known bugs, and on imperfect hardware, because I'm unwilling to pay more. Some experiences are rare, so I tolerate some jank because there aren't enough competitors.
crummy [3 hidden]5 mins ago
Games I can see that. But what features are you missing in your operating system?
paulryanrogers [3 hidden]5 mins ago
Operating Systems are quite mature. I suppose they do need to evolve to take advantage of newer hardware and new UI conventions. For example, swipe typing and H264 decoding are table stakes on mobile.
nradov [3 hidden]5 mins ago
Most large enterprise IT departments are fully aware that the cost of adopting yet another open source library is very high even if the price is zero. This cost comes in the form of developer training, dependency management, security breaches, patent troll lawsuits, etc. There is a whole niche industry of tools to help those organizations manage their open source bill of materials.
paulryanrogers [3 hidden]5 mins ago
Adopting libraries and 3P solutions is like jumping in the pool, easy to do. Getting out of the pool is much harder. Or in some cases like jumping into quick sand. Sometimes it can be hard to tell which before you're in it.
williamcotton [3 hidden]5 mins ago
There are maintenance costs and then there is depreciation/amortization.
alabhyajindal [3 hidden]5 mins ago
All Jonathan Blow talks, or rants more accurately, are the same. He personifies old man yells at cloud.
ahofmann [3 hidden]5 mins ago
While I highly respect antirez, I think this post is full of good-sounding short statements that wouldn't hold up in a discussion.
One example: Newbies shouldn't reinvent the wheel. I think they should use the tools that are available and common in the given context. When they want to tinker, they should write their own compiler. But they shouldn't use that in production.
Another: Backward API compatibility is a business decision in most cases.
Also, I think it doesn't help to start every sentence with "We are destroying software". This sounds much more gloomy than it really is.
bayindirh [3 hidden]5 mins ago
> Newbies shouldn't reinvent the wheel.
I strongly disagree. They should, and fail, and try again, and fail. The aim is not to reinvent the wheel, but to understand why the wheel they're trying to reinvent is so complex and why it is the way it is. This is how I learnt to understand and appreciate the machine, and it gave me great insight.
Maybe not in production at first, but they don't reinvent the wheel in their spare time either. They cobble together 200-package dependency chains to make something simple, because that's what they see and are taught. I can write what many people write with 10 libraries by just using the standard library. My code will be a bit longer, but not much. It'll be faster, more robust, easier to build, smaller, and overall better.
I can do this because I know how to invent the wheel when necessary. They should, too.
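To make the "just the standard library" point concrete, here's a minimal TypeScript sketch (the endpoint and field names are hypothetical): fetching and summarizing some JSON using only built-in platform APIs, where a dependency-heavy setup might pull in an HTTP client plus a couple of utility packages for the same job.

```typescript
// Minimal sketch: fetch and summarize JSON using only built-in APIs
// (fetch, URL, JSON) -- no third-party HTTP or utility libraries.
// The endpoint and the Release shape are made up for illustration.

type Release = { name: string; downloads: number };

async function topReleases(endpoint: string, limit = 5): Promise<Release[]> {
  const url = new URL(endpoint);      // built-in URL parsing/validation
  const res = await fetch(url);       // built-in fetch (browsers, Node 18+)
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = (await res.json()) as Release[];
  return data.sort((a, b) => b.downloads - a.downloads).slice(0, limit);
}

topReleases("https://example.com/api/releases")
  .then((top) => top.forEach((r) => console.log(`${r.name}: ${r.downloads}`)))
  .catch((err) => console.error("request failed:", err));
```

It's a few lines longer than the library version, but there's nothing to install, nothing to update, and nothing to break.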
> Another: Backward API compatibility is a business decision in most cases.
Yes, business decision of time and money. When everybody says that you're losing money and time by providing better service, and lower quality is OK, management will jump on it, because, monies.
> Also, I think it doesn't help to start every sentence with "We are destroying software". This sounds much more gloomy, than it really is.
I think Antirez is spot on. We're destroying software, converting it into something muddy, something that exists only for the ends of business.
I'm all with Antirez here. Software got to where it is because we developed software just for the sake of it, and evolved it to production-ready where needed. Not the other way around (case in point: Linux).
ryandrake [3 hidden]5 mins ago
> Yes, business decision of time and money. When everybody says that you're losing money and time by providing better service, and lower quality is OK, management will jump on it, because, monies.
Often that "saving money" is just externalizing the cost onto your users. Especially in mobile development. Instead of putting in the tiny amount of effort it takes to continue support for older devices, developers just increase the minimum required OS version, telling users with older hardware to fuck off or buy a new phone.
Another example is when you don't take the time to properly optimize your code, you're offloading that cost onto the user in the form of unnecessarily higher system requirements.
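As a small, hypothetical TypeScript illustration (the function names are made up) of the kind of optimization that costs the developer minutes but is otherwise paid for by every user on every run:

```typescript
// Checking membership against an array inside a filter re-scans the array
// for every item: O(n*m). Building a Set first drops lookups to O(1),
// making the whole pass O(n+m). Same behavior, far less CPU for the user.

function removeBlockedSlow(items: string[], blocked: string[]): string[] {
  return items.filter((item) => !blocked.includes(item)); // re-scans `blocked` each time
}

function removeBlockedFast(items: string[], blocked: string[]): string[] {
  const blockedSet = new Set(blocked); // built once up front
  return items.filter((item) => !blockedSet.has(item));
}
```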
bayindirh [3 hidden]5 mins ago
True. There's no free lunch. Either the developer pays it once and has happier users, or users pay it every day and have an inferior experience.
This is why I believe in slow-cooked software. It works better, it's easier on the system, and everyone is happier.
dasil003 [3 hidden]5 mins ago
> I'm all with Antirez here. Software came here, because we developed the software just for the sake of it, and evolved it to production ready where needed. Not the other way around (Case in point: Linux).
Growing up in the 80s and 90s I understand viscerally how you feel, but this take strikes me as willfully ignorant of the history of computers, and of the capitalist incentives that were necessary for their creation. The first computer and the internet itself were funded by the military. The PC wouldn't have existed if mainframes hadn't proved the business value in order to drive costs down to the point the PC was viable. Even the foundational ideas that led to computers couldn't have existed without funding—Charles Babbage's father was a London banker.
I think a lot of what you are reacting to is the failed promise of free software and the rise of the internet, when the culture was still heavily rooted in 60s counter-culture, but it hadn't crossed the chasm to being mainstream, so it was still possible to envision a utopian future based on the best hopes of a young, humanitarian core of early software pioneers operating largely from the sheltered space of academia.
Of course no such utopian visions ever survive contact with reality. Once the internet was a thing everyone had in their pocket, it was inevitable that software would bend to capitalist forces in ways that directly oppose the vision of the early creators. As evil as we thought Microsoft was in the early 90s, in retrospect this was the calm before the storm for the worst effects of tech. I hear Oppenheimer also had some regrets about his work. On the plus side though, I am happy that I can earn enough of a living working with computers that I have time to ponder these larger questions, and perhaps use a bit of my spare time to contribute something of worth back to the world. Complaining about the big picture of software is a fruitless and frustrating endeavour, instead I am interested in how we can use our expertise and experience to support those ideals that we still believe in.
trinsic2 [3 hidden]5 mins ago
I think what he is trying to say is that the value or focus was better when it was placed on the endeavors, not the means behind the endeavors. I don't think anything has to be inevitable. What matters is what we decide to do when challenges like these present themselves, and how we can have more of a positive impact on the world when things go awry.
I take issue with the word "utopian" being used in this context. It's not a lost cause to see the world from the perspective of making it better, by finding our way through this with a better mindset about the future.
And while you are taking the time to ponder these questions because you earn enough to take the time, the world is burning around you. Sorry if my tone is harsh, but these kinds of statements really rub me the wrong way. It feels like you are saying everything that is happening is how it's supposed to be, and I am strongly against that. We have enough of that perspective, we really don't need it, IMHO.
dasil003 [3 hidden]5 mins ago
Fair criticism, but saying "we are destroying software" is not actionable. I want us to do better, but I also want to be effective, and not just sit around impotently wringing our hands about how bad everything is.
bayindirh [3 hidden]5 mins ago
I'll kindly disagree. For me, seeing or accepting where we are currently is enough to gently motivate me to do whatever I can do to change the current state.
This gentle motivation is good, because it allows me to look inside and be rational about my ambitions. I won't go on a blind crusade, but try to change myself for the better.
Because, I believe in changing myself to see that change in the world.
trinsic2 [3 hidden]5 mins ago
Fair point. I agree with you. Sometimes it just takes one person to say what's wrong with the world to make people realize something can and has to be changed.
casey2 [3 hidden]5 mins ago
>The first computer and the internet itself were funded by the military
Completely and unjustifiably false.
caseyy [3 hidden]5 mins ago
> Newbies shouldn't reinvent the wheel. I think they should use the tools, that are available and common in the given context. When they want to tinker, they should write their own compiler. But they shouldn't use that in production.
So basically they shouldn’t learn the prod systems beyond a shallow understanding?
JKCalhoun [3 hidden]5 mins ago
> Another: Backward API compatibility is a business decision in most cases.
Agree. That statement/sentiment though doesn't refute the point that it's destroying software.
layer8 [3 hidden]5 mins ago
I actually don’t agree. Maintaining or not maintaining backwards compatibility is often a decision made on the technical level, e.g. by a tech lead, or at least heavily based on the advice from technical people, who tend to prefer not being restricted by backwards compatibility over not breaking things for relying parties.
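As a hypothetical sketch (the function names and options are invented) of what "not breaking things for relying parties" can look like at the code level, the old entry point can be kept working and simply delegate to the new one:

```typescript
// Hypothetical example of preserving backward API compatibility:
// the old function keeps its original signature and behavior,
// delegating to the new, more flexible API instead of being removed.

interface FetchOptions {
  retries?: number;
  timeoutMs?: number;
}

// New API: an options object is easier to extend without breaking callers.
export async function fetchUser(id: string, opts: FetchOptions = {}) {
  const { retries = 0, timeoutMs = 5000 } = opts;
  // ...actual fetching elided; return shape is illustrative only...
  return { id, retries, timeoutMs };
}

/** @deprecated Use fetchUser(id, { timeoutMs }) instead; kept so existing callers keep working. */
export async function getUser(id: string, timeoutMs = 5000) {
  return fetchUser(id, { timeoutMs });
}
```

Whether that shim is worth carrying is exactly the trade-off being decided, whoever ends up making the call.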
epolanski [3 hidden]5 mins ago
> newbies shouldn't reinvent the wheel
They absolutely should, or they will never even get to understand why they are using these wheels.
Fun fact: try asking modern web developers to write a form, a simple form, without a library.
They can barely use HTML and the DOM, they have no clue about built-in validation, they have no clue about accessibility, but they can make arguments about useMemo or useWhatever in some ridiculous library they use to build... e-commerce sites and idiotic CRUD apps.
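For illustration, a minimal sketch (the element IDs and the custom rule are hypothetical) of a form wired up with nothing but the DOM and the browser's built-in constraint validation, no framework involved:

```typescript
// Attributes like `required`, `type="email"` and `minlength` in the HTML
// already provide validation; checkValidity()/reportValidity() expose it
// from script, and setCustomValidity() hooks custom rules into the same UI.

const form = document.querySelector<HTMLFormElement>("#signup")!;
const email = document.querySelector<HTMLInputElement>("#email")!;

form.addEventListener("submit", (event) => {
  event.preventDefault();
  if (!form.checkValidity()) {
    form.reportValidity(); // shows the browser's built-in messages
    return;
  }
  console.log("submitting", email.value);
});

email.addEventListener("input", () => {
  email.setCustomValidity(
    email.value.endsWith("@example.test") ? "Please use a real address." : ""
  );
});
```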
antirez [3 hidden]5 mins ago
> When they want to tinker, they should write their own compiler. But they shouldn't use that in production.
Why? We should stop telling others how to write and use their code, ASAP.
Many established technologies are a total shitstorm. If it is ok to use them, it is ok if somebody wants to use their own compiler.
drawkbox [3 hidden]5 mins ago
These systems also came from tinkering. Even most programming languages are really the investment of one person over a long time, doing what you apparently aren't supposed to do.
When it comes down to it, whatever worked best, and was usually the simplest and non-breaking, used to win out. That decision has been taken away from the value creators and handed to the value extractors. It is impossible to extract value before value is created.
Additionally, programming is a creative skill no matter how hard they try to make it not one. Creativity means trying new things and new takes on things. People not doing that will harm us long term.
dahart [3 hidden]5 mins ago
>> But they shouldn’t use that in production.
> Why?
Generally speaking, because that’s very likely to end up being “pushing for rewrites of things that work”, and also a case of not “taking complexity into account when adding features”, and perhaps in some cases “jumping on a new language”, too.
This is an imagined scenario, but the likelihood of someone easily replacing a working compiler in production with something better is pretty low, especially if they’re not a compiler engineer. I’ve watched compiler engineers replace compilers in production and it takes years to get the new one to parity. One person tinkering and learning how to write a compiler almost for sure does not belong in production.
vdupras [3 hidden]5 mins ago
"parity" is the keyword here. Most of the time, the problem doesn't come from sloppy execution, but ever widening scope of the software.
For example, my own "Almost C" compiler[1] is 1137 lines of code. 1137! Can it ever reach "parity" with gcc or even tcc? No! That's specifically not the goal.
Do I benefit strongly from having an otherworldly simpler toolchain? Hell yeah.
The key is scope, as always. Established projects have, by virtue of being community projects, too wide a scope.
Agreed, parity is a strong constraint. Since the premise under discussion was “production” environments where some kind of parity is presumably required, I think it’s a reasonable assumption. If there is no baseline and it’s not a production situation where parity is needed, then yeah scope can and should be constrained as much as possible. I like the phrase “otherworldly simple”, I might borrow that!
vdupras [3 hidden]5 mins ago
I don't think that people deciding what goes in "production" are immune to scope inflation. Thinking that we need parity with another software without realizing the cost in terms of complexity of that parity requirement is, I think, a big driver of complexity.
ahofmann [3 hidden]5 mins ago
> Why?
If someone handed me a project that is full of self-invented stuff, for example a PHP project that invented its own templating or has its own ORM, I would run. There are Laravel, Slim, or Symfony; those are well established and it makes sense to use them.
There are so many resources around those frameworks: people who have posted about useful things, or packages that add functionality to them.
It just doesn't make sense to reinvent the wheel for web frameworks and thousands of packages around those.
Writing software is standing on the shoulders of giants. We should embrace that, and yes, one should learn the basics, the underlying mechanisms. But one should make a distinction between tinkering around and writing software that will be in production for years and therefore worked on by different developers.
The JavaScript world shows how not to do things. Every two years I have to learn the new way of building my stuff. It is annoying and a massive waste of resources. Everyone is always reinventing the wheel and it is exhausting. I understand why it is like this, but we as developers could have made it less painful if we embraced existing code instead of wanting to write our own.
caseyy [3 hidden]5 mins ago
That's an interesting take. A lot of big tech companies have their own templating, for example.
I'm in games; we even rewrite standard libraries (see EASTL) so that they are more fit for purpose.
Of course, it’s your preference. And that is fine. But I don’t think it speaks to the situation in many tech companies.
mukunda_johnson [3 hidden]5 mins ago
The shitstorms usually have a community behind them. Even if one sucks, it's supported and will be maintained to the point of "working." If someone writes their own thing, chances are they won't go the extra mile and build a community for it. Then, when it comes to maintaining it later, it might grow untenable, especially if the original "tinkerer" has moved on.
dpkonofa [3 hidden]5 mins ago
This would be great if things like open source were more supported in the "real" world. Unfortunately, you're describing exactly why a community means nothing in this situation unless the community is giving back to the project. When the "original tinkerer" moves on, everything depending on that project breaks when everything else changes around it.
Ygg2 [3 hidden]5 mins ago
> Why?
Those who do not know history are doomed to repeat it. Or re-re-reinvent Lisp.
There was this anecdote about a storm lamp or something. A new recruit comes to a camp and sees the old guard lighting lamps by turning them upside down and lighting them sideways with a long stick. But he knows better; he tells them so and they smirk. The first day he lights them the "optimal" way with a lighter. He's feeling super smug.
But the next day he finds the wick is too short to reach, so he takes the long stick...
A few months later, he's a veteran, turning the lamp upside down and using the lighter sideways, with a long stick.
And a fresh recruit says he can do it better. And the now-old guard smirks.
I'm sure I'm misremembering parts, but can't find the original for the life of me.
BSDobelix [3 hidden]5 mins ago
>We are destroying software telling new programmers: “Don’t reinvent the wheel!”
This is such a perfect point: no one would have invented the tank track if reinventing the wheel were not allowed.
marginalia_nu [3 hidden]5 mins ago
I'd generalize the entire list to "we're destroying software with rules-based thinking".
Almost everything bad in software derives from someone rigidly following rules like don't repeat yourself, don't reinvent the wheel, don't be clever, etc.
There are instances where this is good advice, but when they turn into rigid rules, bad software is the inevitable result.
k8sToGo [3 hidden]5 mins ago
It's not that it's not allowed, but in security, for example, reinventing something goes wrong most of the time. By that I mean writing your own encryption etc. without having deep expertise in security.
And do you really need to write your own library and implementation of SMTP?
Reinventing the wheel where it makes sense is still allowed. But one should think first about the reasons, in my opinion.
ivanb [3 hidden]5 mins ago
If this post had been written by a non-celebrity, it would have been immediately buried. But here we are at 771 points for some cheap maxims.
lolinder [3 hidden]5 mins ago
Cheap maxims written by someone whose immediate response to your comment was just two words: "React developer?"
It has since been flagged, but it's also highly relevant here, since it shows this isn't someone who's interested in serious thought or discussion about the merits of his arguments, just hot takes and cheap shots. Whatever he may have done in the past, I see no reason why he should be taken seriously here.
AnthonBerg [3 hidden]5 mins ago
We are destroying mutual understanding by hanging on to words†.
—
†) Please interpret my words literally and in your favor.
lucianbr [3 hidden]5 mins ago
None of the maxims actually seems to fit the verb "to destroy". We are making non-optimal software, perhaps. Perhaps even bad software. Though opinions might differ. But "destroy"? "To damage (something) to the point that it effectively ceases to exist." By adding too many, too complex features? Yeah, no. But I guess the word sounds impressive, and that's what matters.
AnthonBerg [3 hidden]5 mins ago
“Good enough for jazz” might be useful here?
jollofricepeas [3 hidden]5 mins ago
In the past, you had to practically be on a Wheaties box or major magazine cover to be considered a celebrity.
Now everyone is a "celebrity" and hubris is at an all-time high.
I’ve been in tech for 25+ years and never heard of this person until now but it’s not the first time I’ve heard similar talk.
These maxims strike me more as a bitter senior neckbeard developer complaining about the rest of his team in a passive-aggressive way at the work lunch table before COVID.
If you’re a celebrity, we don’t need your snarky complaints. We need you using your “celebrity” to make things better.
gkbrk [3 hidden]5 mins ago
You've been in tech for 25+ years and never heard of Redis?
antirez [3 hidden]5 mins ago
Living in a serverless world.
antirez [3 hidden]5 mins ago
[flagged]
sublinear [3 hidden]5 mins ago
We are destroying software by letting business pit devs against each other.
lucianbr [3 hidden]5 mins ago
Ad hominem much?
antirez [3 hidden]5 mins ago
If you go down with your comment, then be brave enough to get it.
decayiscreation [3 hidden]5 mins ago
I got one. Failed novelist returns to industry, the world has moved past him. Sour grapes. Get off my lawn!
antirez [3 hidden]5 mins ago
The world had moved past Redis already when I started it. Redis is an experiment in minimalism, and because of that it was an industry anomaly.
Wohpe is probably the best-selling novel of recent years in Italian sci-fi history. This summer a new short story of mine will be released by the most important Italian sci-fi series.
Meanwhile you need to post this from a fake account. What can you show me of your accomplishments, under your real name? I'm here to listen, with genuine interest.
openrisk [3 hidden]5 mins ago
Software is a cultural artifact. It reflects the society and economy that produces it, just as much as music, literature, urban design, or cuisine.
So you cannot "destroy" software. But you can have fast food versus slow food, you can have walkable cities or cars-only cities, you can have literate or illiterate society etc. Different cultures imply and create different lifestyles, initially subjective choices, but ultimately objectively different quality of life.
The author argues for a different software culture. For this to happen you need to create a viable "sub-culture" first, one that thrives because it accrues advantage to its practitioners. Accretion of further followers is then rapid.
nirui [3 hidden]5 mins ago
Agreed with most of the cultural artifact part, but some food is scientifically proven to be unhealthy, and there are unsafe ways to create software. So maybe there are statistically measurable ways to create better software instead of just caving in to the trend.
openrisk [3 hidden]5 mins ago
There is a certain amount of "convergence" of good practices that's happening despite the exhausting trend following. E.g. people happily(?) wrote C++ code for decades, but today there is strong pressure from the success of ecosystems like Rust and Python. Open source accelerates that competitive evolution as it improves transparency and lowers the threshold for adoption.
gregors [3 hidden]5 mins ago
>>>> We are destroying software by jumping on every new language, paradigm, and framework.
The young grow old and complain about change. The cycle of life continues.
neilv [3 hidden]5 mins ago
We are destroying software with Leetcode interviews, resume-driven development, frequent job-hopping, growth investment scams, metrics gaming, promotion-seeking, sprint theatre, bullshitting at every level of the org chart, and industry indifference.
orochimaaru [3 hidden]5 mins ago
Most of what is said here are symptoms, not causes.
Leetcode interviews: lack of continuous certification and changing toolsets too fast
Frequent job hopping: lack of pay upgrades because software is considered a cost center
I could go on but in reality it’s a disconnect between what business thinks the software is worth as opposed to what the engineer wants to do with it.
You can say software is an art, but commodity art doesn't make much money. In reality, ad-driven software has greatly inflated salaries (not complaining, but it's reality). Now it's going to be an AI bubble. Your rank-and-file business doesn't care which software bubble is happening, but unfortunately it is bound by the costs that come with it.
Have you seen the process in the defense or medical equipment industries? You probably wouldn't complain.
dinkumthinkum [3 hidden]5 mins ago
Why is LeetCode a symptom of a lack of continuous certification or changing toolsets? I ask because LC is about neither of those things. I agree LC is a symptom of something, but I think it's something else. I also don't think ad-driven software has inflated salaries; there are many, many more software companies than ad-based ones, and ad-based ones comprise at most half of FANG, which is hardly the only game in software. As for defense and medical, those are places where software is not a tertiary concern.
dijit [3 hidden]5 mins ago
Well, take the macro position.
If you didn't have degree requirements and certification bodies for:
* accountants
* engineers
* doctors
* lawyers
What do you think hiring might look like?
Do you think they would build a hiring process to validate, to the best of their ability, your aptitude in the core fundamentals - except worse than the certification and education bodies do?
I would presume so at least.
hibikir [3 hidden]5 mins ago
Having spent quite a bit of time around a couple of those groups, I find most of those degree requirements and certifications to be just ways to increase salaries, more than ways to increase quality. Many people pass state bars and are incompetent. Doctors that go through residency get lazy and kill patients, and they aren't magically superior to someone who isn't allowed to work in, say, the US, because their medical training was done in the wrong country.
Realistically, licensing boards are there to protect their members, and rarely do political things against people in the same body with unpopular opinions. You have to be catastrophic for most boards to do anything about you: Just like a police union will defend a union member that has committed gross negligence unless the evidence is public.
When you hire a doctor for something actually important, you don't look at the certification body: You look at long term reputation, which you also do in software. Only the largest of employers will leetcode everyone as a layer of fairness. In smaller employers, direct, personal references replace everything, which is what I'd go with if I needed an oncologist for a very specific kind of cancer. The baseline of aptitude from the certification body doesn't matter there at all.
rvz [3 hidden]5 mins ago
> Many people pass state bars and are incompetent. Doctors that go through residency get lazy and kill patients, and they aren't magically superior to someone who isn't allowed to work in, say, the US, because their medical training was done in the wrong country.
So we need the barrier to entry to be even lower for professions that deal with life-changing outcomes? I don't think so. In such high-risk fields, "long-term reputation" is totally dependent on hiring extremely qualified individuals.
The barrier to entry MUST be continuously raised with the bare minimum requirement of a degree. Only then the secondary requirements can be considered.
> When you hire a doctor for something actually important, you don't look at the certification body: You look at long term reputation, which you also do in software.
I don't think you can compare the two, since one deals with high risk to the patient, such as life and death, and the other in most cases does not (unless the software being written deals with safety-critical systems in a heavily regulated setting).
From what you are saying, maybe you would be OK consulting a surgeon or an oncologist that has never gone to medical school.
scarface_74 [3 hidden]5 mins ago
What makes you think certifications can't be gamed? Brain dumps have been a thing since at least 2008, and even if someone has a dozen AWS certifications, that tells you nothing about whether they could actually be productive in the real world.
Have you ever done any of the various IT certs?
criddell [3 hidden]5 mins ago
Do you think gaming is a very big problem with accountants, doctors, lawyers, and professional engineers?
scarface_74 [3 hidden]5 mins ago
Doctors and lawyers specialize. Are you going to have specialized, non-gameable certifications for all the different facets of software development?
Doctors also go to school for 8 years and then do residencies. Lawyers go to school for 7. Are you proposing that?
criddell [3 hidden]5 mins ago
I'm just pointing out that certifications don't have to be meaningless. If somebody wants to use the title "Software Engineer", then perhaps we should require them to be actual professional licensed engineers.
orochimaaru [3 hidden]5 mins ago
There is a bit of a difference between other professions and software. You wouldn't hire an orthopedic surgeon for cardiology. Now, the human body doesn't change that fast. So both are needed. In software the rate of change is much faster. So what happens is tools change, and people who want to switch streams for better opportunities have to tweak their resumes. Now the only way to validate basic proficiency comes down to leetcode-style interviews, for better or worse. It's pretty much the only common denominator between an interviewer and the candidate.
YZF [3 hidden]5 mins ago
Software may be important in defense and medical but I don't think this is reflected in how software engineering is done or how software engineers work in those industries.
proc0 [3 hidden]5 mins ago
And rewarding engineers for "impact" rather than correct, reliable, and extensible software.
moandcompany [3 hidden]5 mins ago
Rewarding "engineers" for perceived "impact" :)
ldigas [3 hidden]5 mins ago
I wonder, in comparison, what had more impact: a piece of software made in the 80s that's still used today, or an app that will be replaced in a year, with an added note of "I don't know why I'm using it at all".
scarface_74 [3 hidden]5 mins ago
If the company didn't make a continuous stream of income from the product, from the company's perspective it would be that awesome pay-to-win game with in-app purchases…
efnx [3 hidden]5 mins ago
Why is it that leadership gets these ideas all at the same time?
Henchman21 [3 hidden]5 mins ago
Because they’re all in the same tiny social circle and went to the same schools, joined the same clubs, intermarried, and just generally isolate themselves from the rest of us.
It's called "being out of touch", I believe.
dasil003 [3 hidden]5 mins ago
Because the software we are paid to write has an external purpose. As scale increases, supply chains lengthen, and roles specialize, inevitably more and more people are missing the forest for the trees—this isn't an indictment, it's an inevitable result of the pursuit of efficiency and scale at all costs. Many engineers would be happy to polish the thing that exists, perhaps adding complexity in the form of scalability, modularity or reusability that isn't actually needed, and in fact may make it harder to adapt to broader changes in the ecosystem that the company operates in. "Impact" is just a suitably hand-wavy management buzzword to be used in lots of different situations where they deem ICs not to be taking the big picture sufficiently into account.
proc0 [3 hidden]5 mins ago
> Many engineers would be happy to polish the thing that exists, perhaps adding complexity in the form of scalability, modularity or reusability that isn't actually needed, and in fact may make it harder to adapt to broader changes in the ecosystem that the company operates in
When done correctly it absolutely adds business value and should not make it harder to adapt or change, that's the point of good engineering. The problem is that you need years, if not decades, of technical experience to see this, and it's also a hard sell when there is no immediate "impact". It's basically something that happens because consumers don't know any better, so then it becomes low priority for making profit... at least until a competitor shows up that has better software and then it has a competitive edge, but that's a different matter.
dasil003 [3 hidden]5 mins ago
Sure. I'm just explaining why the term exists. Of course it is often applied by clueless managers; there's nothing you can do about that except go find a better manager. Just don't make the mistake of thinking it's all bullshit because you've only dealt with muppets on the leadership side—I see this often with journeyman engineers who have never had the privilege of a good management team, and it's very sad.
proc0 [3 hidden]5 mins ago
Yeah and I would extend it to a better company as well.
esafak [3 hidden]5 mins ago
They copy the leader. Word spreads instantly on the Internet.
scarface_74 [3 hidden]5 mins ago
A company exists to make money. If you can't describe how your work either makes the company money or saves the company money, why should they care?
proc0 [3 hidden]5 mins ago
Why are they hiring the candidate to begin with? This entire "justify your existence every quarter" attitude is one of the bigger downsides of how the economy works, because it allows people to be exploited. It's the metaphorical writing of a blank check. There are no real boundaries for when work starts and stops or when your responsibilities end, because the task is to increase business value and that is technically endless. The company keeps piling on more and more work, keeps growing and growing, and meanwhile they reward you with a small fraction of the total profit that you contributed.
I keep hearing how 10x engineers make their companies millions upon millions but they only get paid a fraction of that. How does that even make sense as a fair exchange? Not to mention it is completely unfeasible for most people to have this kind of impact... it is only possible for those in key positions, yet every engineer is tasked with this same objective to increase company value as much as possible. There's just something very wrong with that.
scarface_74 [3 hidden]5 mins ago
> Why are they hiring the candidate to begin with?
Because they feel they can extract more value from them then they are paying them.
> There are no real boundaries for when work starts and stops or when your responsibilities end, because the task is to increase business value and that is technically endless
I work 40 hours a week and they pay me the agreed upon amount. There was nowhere in our agreement the expectation of my working more than that. I also knew that they could put me on a plane anytime during the week.
> The company keeps piling on more and more work, keeps growing and growing,
That’s completely on the employee to communicate trade offs between time, cost and requirements. My time is fixed at 40 hours a week. They can choose to use my 40 hours a week to work with sales and the customer to close a deal, be a project manager, lead an implementation, be a hands on keyboard developer, or be a “cloud engineer”. It’s on them how to best use my talents for the amount of money they are paying me. But seeing that they pay my level of employee the highest of all of the ICs, they really should choose to have me working with sales and clients to close deals.
That’s not bragging. I make now what I made as a mid level employee at BigTech in 2021.
> I keep hearing how 10x engineers make their companies millions upon millions but they only get paid a fraction of that. How does that even make sense as a fair exchange?
The concept of a 10x engineer, except in very rare cases, is a myth if you think of them as just being on a keyboard every day. All of the things I listed I could do - project management, backend developer or a cloud engineer - I would say I'm only slightly better than average if that. My multiplier comes because I can work with all of those people and the "business"; they can put me on a plane or a Zoom call and I can be trusted to have the soft skills necessary, and my breadth is wide enough to know what needs to be done as part of a complex implementation and how to derive business value.
If you are making a company millions of dollars and you are only getting a fraction of that - and I doubt someone is doing that on their own without the supporting organizational infrastructure - it’s on you to leverage that to meet your priority stack.
> Not to mention it is completely unfeasible for most people to have this kind of impact... it is only possible for those in key positions, yet every engineer is tasked with this same objective to increase company value as much as possible. There's just something very wrong with that.
If you are a junior developer, there isn’t much expected of you, you are told what to do and how to do it. You aren’t expected to know the business value.
If you are a mid level developer, you are generally told the business objective and expected to know best practices of how to get there and understand trade offs on the epic/work stream level.
If you are a “senior” developer, now you are expected to understand business value, work with the stakeholders or their proxy, understand risks, navigate XYProblems on the project implementation level and deal with ambiguity.
As you move up the more “scope”, “impact” and “dealing with ambiguity” you have to be comfortable with.
“Codez real gud” only comes into play in getting from junior to mid.
proc0 [3 hidden]5 mins ago
Right, being senior and above basically means doing less engineering. I hope that's something we can agree on because otherwise we would need to discuss the semantics.
> All of the things I listed I could do - project management, backend developer or a cloud engineer - I would say I’m only slightly better than average if that
I completely acknowledge this is a valid way to run a business, but the context here is how this sort of career progression is preventing the specialization of engineers in their domain and contributing to the widespread software problems we see. Instead of investing in good engineers that specialize in their domain, companies move them away from engineering into more of an entrepreneur mindset by tasking them with adding value to the business directly, which is not something that you do as an engineer (it's nowhere in a CS degree, aside from, say, some electives).
A good metaphor here is a football/soccer team. What companies are doing is telling the goal keeper that he needs to score goals because more goals means winning. The team wants to win so everyone on the field has to score a goal. That obviously doesn't make sense even though the premise is true. You want a team composed of specialists and the more they specialize in their domain and work together the more you win. Even though there are only two or three offensive players that are scoring the goals, everyone is contributing to the success of the team if they specialize in their domain. Similarly, just because talking to clients and selling the product is directly contributing to the revenue of a business it doesn't mean that engineering at a higher level has no value.
And once again to stress the context here, companies can do whatever they want, but having engineers progress through their careers by moving AWAY from engineering is precisely why there is so much bad software out there. Letting engineers create better software should result in more profit in the long term, just probably not in the short term, and it's also hard for non-technical people to manage. So it is what it is.
scarface_74 [3 hidden]5 mins ago
> Right, being senior and above basically means doing less engineering. I hope that's something we can agree on because otherwise we would need to discuss the semantics
Engineering is not only the physical labor. Aircraft engineers and building engineers don’t spend most of their time doing hands on work.
Designing and Planning: Engineers are responsible for designing and planning systems, structures, processes, or technologies. They analyze requirements, gather data, and create detailed plans and specifications to meet project objectives. This involves considering factors such as functionality, safety, efficiency, and cost-effectiveness.
When doing construction work, who adds more value?
The general contractor (staff software engineer)?
The owners of the plumbing, electrical, and HVAC companies, assuming they have the actual skills (senior-level developers)? The owners of the plumbing companies could very well be making more than the general contractors. This is where you can specialize and the sky is the limit.
The actual certified workers (mid-level developers)? This is the level where the completely heads-down people are. No matter how good they become at being hands-on plumbers, there is a hard ceiling they are going to hit at this level.
The apprentices (juniors)?
I work in consulting. I am currently a staff architect (true - IC5) over a hypothetical project (not a real project). I know the project is going to need a cloud architect, a data architect, and a software architect. They are all specialists at their jobs and are all going to lead their "work streams". They are all IC4s.
I expect each architect to take the high level business objectives and work with the relevant technical people on both sides and lead their work along with some hands on keyboard work.
They will each have people under them who are not customer facing at all. While I know all of the domains at some level, I'm going to defer to their technical judgement as long as it meets the business objectives. I did my high-level designs before they came on to the project. Was my design work - figuring out priorities and risks, making sure it met the client's needs, discussing trade-offs, etc. - not "engineering"?
Each level down from myself (IC5) to junior engineers (IC1) deals with less scope, impact and ambiguity. There is no reason that the architects shouldn't get paid as much as I do. They bring to the table technical expertise and depth. I bring to the table delivery experience, being able to disambiguate, and breadth.
proc0 [3 hidden]5 mins ago
> Do you think “engineering” is only the physical labor? Do aircraft engineers and building engineers actually do the construction work?
No, but software is inherently different because you can leverage existing software to create more software. Every airplane has to be created individually, but software that already exists can be extended or reused by just calling functions or in the worst case copy/pasting.
> The actual certified workers (mid level developers). This is the level that the completely head down people are. No matter how good they become at being a hands on plumber , there is a hard ceiling they are going to hit at this level.
Yes, with hardware this can be the case, as there is a small number of ways to build something. With software there is no ceiling, and the proof here is AI. We might soon see general intelligence that just codes anything you want. This means software can be designed to automate virtually anything in any way, shape or form, but it requires more and more expertise.
> I did my high level designs before they came on to the project. Was my design work, figuring out priorities, risks, making sure it met the clients needs, discussing trade offs, etc not “engineering”?
I agree what you're outlining is how the industry works. Perhaps the core of the issue here is how software engineering was modeled after other kinds of engineering with physical limitations. Software is closer to mathematics (arguably it's just mathematics). You can of course still design and plan and delegate, but once the role starts dealing with the high level planning, scheduling, managing, etc., there is less of a requirement for the technical details.
I've worked with architects that didn't know the specifics of a language or design patterns, not because they're bad at engineering but because they had no more time to spend on those details. These details are crucial for good software that is reliable, robust, extensible, etc. Junior and even mid level engineers also don't know these details. Only someone that has been hands on for a long time within a domain can hone these skills, but I have seen so many good engineers become senior or tech leads and then forget these details only to then create software that needs constant fixing and eventually rewriting.
I'm a senior myself and have no choice but to engage in these activities of planning, scheduling, etc., when I can clearly see they do not require technical expertise. You just need some basic general knowledge, and they just are time consuming. My time would be better spent writing advanced code that mid-level and junior level can then expand on (which has happened before with pretty good success, accelerating development and eliminating huge categories of bugs). Instead I have to resort to mediocre solutions that can be delegated. As a result I can see all kinds of problems accumulating with the codebase. It's also really hard to convince the leadership to invest in "high level" engineering because they think that you create more impact by managing an army of low to mid-level engineers instead of leveraging properly written software. I'm convinced that it does add value in the long term, it's just a hard sell. Ultimately I guess it comes down to the type of org and the business needs, which often does not include writing software that will not break. Most companies can afford to write bad software if it means they get to scale by adding more people.
scarface_74 [3 hidden]5 mins ago
> No, but software is inherently different because you can leverage existing software to create more software.
That's true. But when I put my "software engineering", "cloud engineer", or "data engineer" (theoretically) hat on, I can only do the work of one person. No matter how good I am at any of it, I won't be producing more output than someone equally qualified at my own company. Distributing software does have zero marginal cost, more or less, and that's why we get paid more than most industries.
> but I have seen so many good engineers become senior or tech leads and then forget these details only to then create software that needs constant fixing and eventually rewriting.
This is just like both my general contractor analogy and my real-world scenario. As a "staff architect", I come up with the initial design and get stakeholder buy-in. But I defer to the SMEs - the cloud architect, the data architect and the software architect - who still eat, sleep and breathe the details of their specialty.
Just like the owners of the plumbing company, HVAC company and electrical company are the subject matter experts. The general contractor defers to them.
In the consulting industry at least, there are two ways you can get to the top, by focusing on depth or breadth. But in neither case can you do it by being staff augmentation (the plumber, electrician, or HVAC person), you still have to deal with strategy.
> You just need some basic general knowledge, and they just are time consuming
Knowledge isn't the issue; it's wisdom that only comes with experience, which I assume you have. Going back to the hypothetical large implementation: it involves someone who does know architecture, development and data. No matter how good your code is, high availability, fault tolerance, redundancy, and even throughput come from the underlying architecture. Code and hands-on implementation are usually the least challenging part of a delivery.
It's knowing how to deal with organizational issues, managing dependencies, sussing out requirements, etc.
epolanski [3 hidden]5 mins ago
Bu but...but..look at how elegant my rewrite of this part of the code that nobody uses for a product that has a handful of users is../s
There's really way too many developers that care way more about the code than the product, let alone the business.
It ends up like fine dining, where 99% of the time a Big Mac would've been tastier and made the customer happier, but wouldn't justify the effort and price.
phendrenad2 [3 hidden]5 mins ago
The users often feel the "impact" like a punch in the gut, as their perfectly-fine software has now grown a useless AI appendage, or popup ads for an upsell to the next pricing tier. But hey, got the promotion!
epolanski [3 hidden]5 mins ago
Nobody cares about the software but developers.
I'm sorry but I don't buy it.
I've been way too close, way too often, to Lisp and Haskell and many other niches (I know both Racket/Scheme and Haskell well, btw), and the people who care that much about correct, reliable, and extensible software care about the code more than they care about the products.
That's why the languages that breed creativity and stress correctness have very little, if any, killer software to show, while PHP/Java have tons of it.
proc0 [3 hidden]5 mins ago
I think you're conflating two different things. On the one hand you have how a product is made, and on the other there is the demand for it which will affect how the product is made. In the case of software these two things are unfortunately broadly disconnected for a number of reasons.
First, hardware has improved consistently, outpacing any need for optimal software. Second, the end user does not know what they really want from software until it is provided to them, so most people will easily put up with slow, laggy and buggy software until there is a competitor that can provide a better experience. In other words, if companies can make money from making bad software, they will, and this is often what happens because most people don't know any different and also hardware becomes more and more powerful which alleviates and masks the defects in software.
I think there is also another big factor, which is that businesses prefer to scale by adding more people rather than by making better software. Because they want to scale by adding human engineers, they tend to prefer low-tier software that is easy for the masses to pick up, rather than high-level specialized software that is a hard skill to acquire. This is understandable, but it is also the reason why software is so slow in advancing compared to hardware (and, by the same token, why hardware engineers require more specialization).
scarface_74 [3 hidden]5 mins ago
Were they able to trade their passion for goods and services?
leptons [3 hidden]5 mins ago
You get rewarded?
proc0 [3 hidden]5 mins ago
Yes with promotions, bonuses and overall good reviews, which are only marginally based on technical achievements (when it should be absolutely based on that).
scarface_74 [3 hidden]5 mins ago
Technical achievements that are not in line with business objectives are worthless
leptons [3 hidden]5 mins ago
That sounds amazing. I haven't spoken to my boss in about 2 years, haven't had a raise in over 4 years. I guess my reward is that I still have a job?
proc0 [3 hidden]5 mins ago
Yes, that's part of the reward, but normally you get raises, bonuses, etc. based on yearly or quarterly reviews. If you are not entry level, and you just focus on programming while getting a good pay, and you are not seen as underperforming, consider yourself lucky.
dahart [3 hidden]5 mins ago
If you want a raise or promotion, you should talk to your boss or otherwise figure out how to remind them regularly of your value. Ask them for a quarterly or monthly one-on-one, and take an interest in what they do and what the team priorities are. I don’t know about your boss, they’re not all the same, but managers tend to like to see initiative, engineers who make other engineers more productive, and engineers who have and spread an optimistic attitude. Promotions are about taking greater responsibility.
An alternative but dangerous approach is to make it known you’re looking elsewhere for work. Don’t do that if it’s relatively easy to replace you, and definitely assume the management thinks it’s easy to replace you, especially if you haven’t been talking to your boss. ;) But there is the chance that they know you’re valuable and haven’t given you a raise because you seem content and they believe they have the upper hand - which may or may not be true.
scarface_74 [3 hidden]5 mins ago
The problem is that “your boss” usually has to conform to the budget set by their manager working alongside HR.
When I left a company for increased compensation, which, funnily enough, has only happened 3 times in almost 30 years across 10 jobs, it’s been a 25%-60% raise. It’s almost impossible for any manager to push that kind of raise for anyone without a promotion.
Even at BigTech promotions usually come with lower raises than if you came in at that level.
> Don’t do that if it’s relatively easy to replace you, and definitely assume the management thinks it’s easy to replace you, especially if you haven’t been talking to your boss.
Everyone is replaceable. If you are at a company where everyone isn’t replaceable, it’s a poorly run company with a key man risk. I never saw any company outside of a one man company or professional practice where one person leaving was the end of the company.
dahart [3 hidden]5 mins ago
> The problem is that “your boss” usually has to conform to the budget set by their manager working alongside HR.
Very true! Though isn’t it also normal in that case for HR to be recommending inflation raises at least? The exception might be if you came in at a salary that’s higher than your peer group and/or high for the title range. Parent’s problem could be that - either peer group correction or not possible for manager to raise at all without a promotion by company rules. There’s lots of reasons I can imagine, but in any case I wouldn’t expect a change with status quo, right? If you haven’t been talking to your boss, continuing to not talk to your boss is unlikely to change anything.
scarface_74 [3 hidden]5 mins ago
If you haven’t gotten a raise in four years and inflation has been up 21%, your pay is actually decreasing
Mainsail [3 hidden]5 mins ago
Three years into my career and my total comp hasn’t risen once - it’s honestly exhausting at this point but I’m still learning a ton, feel as if there is solid job security, and love my team. At some point I’m going to have to make that scary jump though if it continues.
scarface_74 [3 hidden]5 mins ago
That’s fair. There have been plenty of points in my career where I chose leveling up over compensation and even chose a lower offer between two offers because it would prepare me for my n+1 job better.
ajmurmann [3 hidden]5 mins ago
Sounds like an incredibly incompetent manager
epolanski [3 hidden]5 mins ago
Nobody cares about technical achievements if they aren't making money or at least the users happier.
_dark_matter_ [3 hidden]5 mins ago
If you don't want to have any say in the work that you're doing, then sure, don't judge based on impact. I'd rather have the responsibility and trust of deciding what to work on, rather than management telling me. The only way that works is if my judgement is good, I deliver impact, and I'm paid as such.
proc0 [3 hidden]5 mins ago
There is implicit impact in engineering good systems and software. It's like hiring a plumber and telling them "you need to improve my house" rather than "you need to improve the plumbing in my house". When your objective as an engineer is so broad that you need to worry about customers and the product, then the engineering itself suffers... that said it might not suffer to the extent that it matters but that is the point here.
Bad software keeps happening because businesses can afford it, helped along by hardware improvements. It's a combination of consumers not knowing what they are missing and hardware advancements allowing bad software to exist.
scarface_74 [3 hidden]5 mins ago
> When your objective as an engineer is so broad that you need to worry about customers and the product, then the engineering itself suffers... that said it might not suffer to the extent that it matters but that is the point here
If you aren’t writing software with the customer and business in mind, why are you doing it? That’s what you are getting paid for.
proc0 [3 hidden]5 mins ago
To clarify, I don't mean having no concern for the business or customer, but rather having that as a priority and the primary way to determine contribution. Engineers should care about the engineering first, which implicitly has value if done correctly.
That said, AI is about to change all this... but ironically this justifies my position. If software can be so powerful such that you can have a general intelligence that can do almost any cognitive task... then it follows that all this time engineers can also engineer clever systems that can add a lot of value to the company if engineered correctly.
There is no ceiling of how much performance and value can be squeezed out of a software system, but this never happens because businesses are not investing in actual engineering but rather in technical entrepreneurs that can bring something to market quickly enough.
scarface_74 [3 hidden]5 mins ago
> To clarify, I don't mean having no concern for the business or customer, but rather having that as a priority and the primary way to determine contribution. Engineers should care about the engineering first, which implicitly has value if done correctly
There is no “implicit value” of software to a company. The only value of software to a company is whether it makes the company money or saves the company money. That’s it; there is no other reason for a company to pay anyone except to bring more value to the company than they cost to employ.
> If software can be so powerful such that you can have a general intelligence that can do almost any cognitive task... then it follows that all this time engineers can also engineer clever systems that can add a lot of value to the company if engineered correctly.
It’s just the opposite. If AI can do any of the coding that a software engineer can do, and I am not saying that’s possible or ever will be, what becomes even more important are the people who know how to derive business value out of AI.
> There is no ceiling of how much performance and value can be squeezed out of a software system
That may be true. But what’s the cost-benefit analysis? Should we all be programming in assembly? Should game developers write custom, bespoke game engines and optimize them for each platform?
proc0 [3 hidden]5 mins ago
> That’s it, there is no other reason for a company to pay anyone except to bring more value to the company than they cost to employ them.
The implicit part is that if you engineer a good system, then it saves money through fewer bugs and fewer breakages, and also makes money by allowing faster development and iteration.
There are plenty of examples here. I could point at how the PlayStation network just went down for 24 hours, or how UIs are often still very laggy and buggy, or I can also point at companies like Vercel that are (I assume) very valuable by providing a convenient and easy way to deploy applications... the fact that there are many SaaS out there providing convenience of development proves that this adds value. Despite this businesses are not having their engineers do this in-house because somehow they don't see the immediate ROI for their own business. I would just call that lack of vision or creativity at the business level, where you can't see the value of a well engineered system.
Businesses are free to run their company in whichever way they please, and they can create crappy software if it makes them money, but the point is that when this is industry-wide it cripples the evolution of software and this is then felt by everyone with downtimes and bad experiences, even though hardware is unbelievably fast and performant.
> Should game developers write a custom bespoke game engines and optimize them for each platform?
This is a good example actually. Most companies that want full creative control are making their own engines. The only exception here is Unreal (other, smaller engines are not used by large companies), and from what I can tell the Unreal engine is an example of great software. This is one of those exceptions where engineers are actually doing engineering and the company probably can't afford to have them do something else. Many companies could benefit from this, but it's just not as straight a line from the engineering to profit, and that's kind of the root of why there is so much bad software out there.
scarface_74 [3 hidden]5 mins ago
> I could point at how the PlayStation network just went down for 24 hours
Part of “meeting requirements” is always RTO (recovery time objective), RPO (recovery point objective), availability requirements, latency, responsiveness, etc.
proc0 [3 hidden]5 mins ago
Right, but it's an example that illustrates a broader problem with how software still has the same issues it had for decades, especially in network programming and web. In contrast hardware has made huge advancements.
esafak [3 hidden]5 mins ago
You are optimizing for your resume and pay, not the product's needs. That's what people are criticizing: what constitutes impact, how it is measured, and whether that's what's needed to make the product better.
_dark_matter_ [3 hidden]5 mins ago
In fact I'd argue it's the _exact_ opposite. I've seen countless engineers architect big complicated systems that are completely unnecessary. If anyone was measuring, they'd realize the impact of that system was: no incremental users, no incremental revenue, increased operational overhead, higher headcount, and more burden in further development. Is that what the user needs? You tell me.
chocolatkey [3 hidden]5 mins ago
What is wrong with frequent changing of jobs? It’s one of the easiest tools for increasing my compensation. The job market ideally should be so flexible you can switch to another company any time you want, no noncompetes.
Ekaros [3 hidden]5 mins ago
That is the problem: job-hopping is the only way to get better compensation. The history, the domain knowledge and the accountability are lost when the person who made a programming decision is gone.
Why care about quality or maintainability if you are gone in a year or two anyway...
bbor [3 hidden]5 mins ago
You've hit the nail on the head -- the problem is systemic, not the implied sudden lack of virtue. People job hop in our industry because even the giants with hundreds of billions in the bank are caught up in absurd quarter-by-quarter performances for their shareholders, which sets the direction of the whole industry.
trinsic2 [3 hidden]5 mins ago
Exactly. On the level the parent is describing, having options is a good thing, but the side effect of this systemic problem is that it fragments our ability to write good software, because the process is not about developing good software, it's about profit. Which enshittifies the endeavor, devaluing the quality of the end result. Slowly, over time, this enshittification is going to cause, or already is causing, problems in our technological infrastructure and maybe other areas of society.
ChrisMarshallNY [3 hidden]5 mins ago
I was just chatting with a friend of mine, this morning, about this kind of thing.
He works as a highly-skilled tech, at a major medical/scientific corporation. They have invested years of training in him, he brings them tremendous value, and they know it. He was just telling me how he used that value to negotiate a higher compensation package for himself. Not as good as if he swapped jobs, but he really has a sweet gig.
People who stay take Responsibility for the code they write. They will need to face the music if it doesn't work, even if they are not responsible for maintaining it.
They are also worth investing in specialized training, as that training will give great ROI, over time.
But keeping skilled people is something that modern management philosophy (in tech, at least) doesn't seem to care about.
Until corporations improve the quality of their managers, especially their "first-line" managers, and improve their culture, geared towards retaining top talent (which includes paying them more -but there's a lot more that needs doing), I can't, in good conscience, advise folks not to bounce.
rr808 [3 hidden]5 mins ago
Sounds like he's lucky though. Many companies are happy to let you specialize in their area of business, own the special software, and get to know all the vendors & business contacts, and then really not pay you well. You don't get to find out until 5 years in, when you have skills that aren't really transferable to a new job.
ChrisMarshallNY [3 hidden]5 mins ago
His skills are quite transferable. Many of the company’s customers would love to hire him away. The company doesn’t have a noncompete, and he’d probably quit, if they tried.
scarface_74 [3 hidden]5 mins ago
> Not as good as if he swapped jobs, but he really has a sweet gig
If your main motivation for working is to exchange your labor for the maximum amount of money possible, I don’t see how that is the positive outcome you think it is.
I personally wouldn’t leave my current job if another one for $100K more fell into my lap. But the “unlimited PTO” where the custom is to take at least 5-6 weeks off during the year not including paid holidays and it being fully remote is hard to beat.
ChrisMarshallNY [3 hidden]5 mins ago
Not sure how you got that from what I wrote.
I mean pretty much exactly what you said.
I apologize for being unclear.
eastbound [3 hidden]5 mins ago
> But keeping skilled people is something that modern management philosophy (in tech, at least) doesn't seem to care about.
I’m a founder with 10 people, and this is the first thing we think about. Except for low performers; except that youngsters need a variety of experience to be proficient at life; except when the team is not performing well (1). 25% or 30% increases for half the workforce are frequent.
(1) The biggest remark from management coaches is that giving raises lowers employee performance, which I can fully witness in my company. It’s not even good for morale. I’m just happy that people exit the company fitter and with a girlfriend, even a kid and sometimes a permanent residency, but business-wise I’ve been as good as a bad leader.
I’m reaching the sad conclusion that employees bring it upon themselves.
ChrisMarshallNY [3 hidden]5 mins ago
Paying more is not the answer (but it is also not not the answer -we need to pay well). Improving the workplace environment, into a place people want to stay, is important.
My friend could double his salary, moving almost anywhere else, but he gets a lot of perks at his work, and is treated extremely well by his managers.
They just gave him a rave review, and that did more to boost his willingness to stay than a 10% raise. He will still negotiate a better salary, but he is more likely to be satisfied with less than he would be if they treated him badly.
Treating employees with Respect can actually improve the bottom line. They may well be willing to remain in difficult situations, if they feel they are personally valued.
I know this from personal experience. When they rolled up my team, after almost 27 years, the employee with the least tenure had a decade. These were top-shelf C++ image processing engineers who could have gotten much higher salaries elsewhere.
trinsic2 [3 hidden]5 mins ago
Yeah I agree with this. It really depends on the culture of the business. If the employee feels valued, giving raises increases feeling valued.
The problem is our current form of corporate culture. Employees don't feel like they matter; their efforts are a cog in a wheel. If you get a raise in this type of culture, it only matters to the bottom line, and there is no incentive to produce because the employee is already unhappy in the first place.
Change your business culture and these problems will disappear, IMHO.
nouripenny [3 hidden]5 mins ago
Is there any alternative to raises that doesn't lower employee performance, like some kind of bonus scheme? Or do you find some unfortunate relation between money and motivation?
scarface_74 [3 hidden]5 mins ago
Truly unlimited PTO where you judge employees by their performance. I just spoke to my manager at the job I started late last year, and he said it is customary for people to take 5-6 weeks off a year.
We also have a 401K match with an immediate vest.
FridgeSeal [3 hidden]5 mins ago
I could finally get all the skiing in that I wanted to with time off like that!
scarface_74 [3 hidden]5 mins ago
How is paying people less than market value good for morale?
7bit [3 hidden]5 mins ago
> But keeping skilled people is something that modern management philosophy (in tech, at least) doesn't seem to care about.
If they cared, then job hopping would not exist. If staying at a company were more beneficial to your salary, why would you ever want to change companies, if you are otherwise happy?
omgbear [3 hidden]5 mins ago
It's harder to learn the impact of your design decisions -- seeing how software evolves to meet changing business goals, and how the design choices made play out in the long run, taught me a lot.
Coming up with a neat API that turns out to be difficult to modify in the future, or limiting in ways you didn't imagine when writing it, is a good learning experience.
Or seeing how long a system can survive growing usage -- Maybe a simple hack works better than anyone expected because you can just pay more for RAM/CPU each year rather than rebuild into a distributed fashion. Or the opposite, maybe there's some scaling factor or threshold you didn't know existed and system performance craters earlier than predicted.
wrs [3 hidden]5 mins ago
The topic is “we’re destroying software”, not “we’re destroying techniques to increase your compensation”. Individual compensation and collective quality are not somehow inherently correlated.
bluedino [3 hidden]5 mins ago
I guess you could argue something along the lines of people never staying long enough to build complex things from start to finish. New people moving in and working on the project without the proper understanding, not caring since it will be someone else's problem in a few months...
marcosdumay [3 hidden]5 mins ago
> It’s one of the easiest tools for increasing my compensation.
This is the root problem. None of the problems the GP pointed out were created by software developers.
Now, if you want to know the consequences, it causes an entire generation of people that don't really know what they are doing because they never see the long-term consequences of their actions. But again, it's not the software developers that are causing this, nor are they the ones that should fix it.
pjmlp [3 hidden]5 mins ago
Not everyone is fortunate enough to live in a region where that is easily doable, nor do all cultures see job hopping as positive.
scarface_74 [3 hidden]5 mins ago
And when you change jobs, you control the narrative. Unlike when you have to deal with promo docs and internal politics
sfpotter [3 hidden]5 mins ago
Can't have stewardship without institutional knowledge and long-term employment.
davidw [3 hidden]5 mins ago
I don't get the downvotes. This is a rational point of view for an individual. The problem is higher up, where the incentives align to make it rational. It'd be better if people could stay longer and still grow their compensation.
fullstackwife [3 hidden]5 mins ago
Computer programs are like sausages. It's better not to see them being made.
swat535 [3 hidden]5 mins ago
Except that at least sausages are enjoyable to consume, unlike today's software... unfortunately, it fails on all fronts.
vosper [3 hidden]5 mins ago
I've enjoyed a great many of the games I've played lately :)
dqft [3 hidden]5 mins ago
If you did then you would know all the tricks to Stephen's rare and beautiful thing.
onemoresoop [3 hidden]5 mins ago
You forgot "move fast and break things".
asdev [3 hidden]5 mins ago
The best Leetcoders generally add the least value business-wise.
begueradj [3 hidden]5 mins ago
I add AI to that list
shove [3 hidden]5 mins ago
It’s honestly a huge tell that it was omitted
simonw [3 hidden]5 mins ago
What does it tell you?
readyplayernull [3 hidden]5 mins ago
LoC metrics instead of QoC.
unification_fan [3 hidden]5 mins ago
Literally all of that is because capitalism incentivizes short-term profit over meaningful, society-benefiting work.
FOSS doesn't have these problems.
Ekaros [3 hidden]5 mins ago
Just how many FOSS projects are there that are dead? Just how many different ways to do things have been invented? Say, Linux desktop environments. Or parts of subsystems.
FOSS is certainly guilty too.
_Algernon_ [3 hidden]5 mins ago
How is FOSS guilty of the same? The code is there free for you to take, modify and fix, even if the project is otherwise abandoned.
unification_fan [3 hidden]5 mins ago
Fair, but then again FOSS is born from individual hackers who want to learn how to build X, or who feel like the ecosystem doesn't provide the X they would like to have.
It fosters a culture where everyone can hack something together, and where everyone is knowledgeable enough to make responsible use of technology.
Working as a for-hire developer doesn't let you experience all of that because you're building a product that someone else wants you to build. No wonder one does not give a shit about writing good software at that point! You've taken all the fun and personal fulfillment out of it!
We can build anything we put our mind to -- but most of us are too busy churning out CRUD boilerplate like factory workers. That's depressing.
bawolff [3 hidden]5 mins ago
I feel like resume-driven development is certainly a thing that happens in FOSS.
sieve [3 hidden]5 mins ago
Well, I agree with him to a great extent.
The complexity/dependency graph of a random application nowadays is absolutely insane. I don't count everything in this, including the firmware and the OS like Muratori does in his video[1], but it is close enough. The transitive dependency problem needs to be solved. And we need to do something about Bill/Guido taking away all that Andy gives.
I consider the OS (Win32 API, Linux syscalls) to be the only hard dependency for anything I write in C. I tend to avoid libc because of distribution issues. But you have no control over this layer once you switch over to Java/Python.
The only thing you can then do is stop depending on every library out there to avoid writing a couple of hundred lines of code specific to your situation. It definitely increases the maintenance burden. But dependencies are not maintenance-free either. They could have the wrong API which you have to wrap around, or will break compatibility at random times, or become abandonware/malware or have some security flaw in them (rsync had a major security vulnerability just last month).
My personal max line count for a useful project that does one thing is 5-10KLOC of Java/JS/Python. It is something I can go through in a couple of hours and can easily fix a few years down the line if I need to.
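To make the "OS as the only hard dependency" point above concrete, here is a minimal libc-free sketch for x86-64 Linux only (it assumes GCC/Clang inline asm and the stable syscall numbers 1 = write and 60 = exit; the helper name and the build line are just for illustration, not something from the comment above):
// build: c++ -static -nostdlib -fno-stack-protector -o hello hello.cpp
static long sys_call3(long n, long a, long b, long c) {
    long ret;
    // x86-64 Linux convention: syscall number in rax, arguments in rdi/rsi/rdx;
    // the kernel clobbers rcx and r11.
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a), "S"(b), "d"(c)
                      : "rcx", "r11", "memory");
    return ret;
}

extern "C" void _start() {
    const char msg[] = "hello from the syscall layer\n";
    sys_call3(1, 1, (long)msg, sizeof(msg) - 1);  // write(stdout, msg, len)
    sys_call3(60, 0, 0, 0);                       // exit(0), never returns
    __builtin_unreachable();
}
The resulting binary depends on nothing but the kernel ABI, which is exactly why distribution gets easy, and exactly the control you give up once an interpreter or VM sits between you and that layer.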
I agree that you should avoid too many dependencies. When I need dependencies, I usually try to reduce how many I need. Often, when I do need dependencies, I will include them with the program (which avoids the problem of breaking compatibility), and they should be small and should not have any dependencies of their own (other than possibly the standard library); often they will be programs that I wrote myself. Dependencies will sometimes be optional, and those that cannot be included should often be made replaceable with an alternative implementation.
rednafi [3 hidden]5 mins ago
When an ecosystem tries to go in the opposite direction, the mob shreds it to pieces, chanting, “Smart devs don’t use it,” “Only mid people like that,” and “The universe is memory unsafe, gotta rewrite.”
Then, PL creators whose language isn’t even at 1.0 light the fuse.
anon-3988 [3 hidden]5 mins ago
Turns out writing software is hard? Actually no, it's people that are hard. If we could all just agree on using whatever language X, framework Y or tool Z, then we would be done with it.
oneplane [3 hidden]5 mins ago
It's not only people that are hard; the network effects are very significant here too. The article also describes all sorts of truths in isolation, but the processes where these things happen do not exist in isolation.
If all people were identical and everyone was on the same level, things would be different. But that is not how the real world works.
stmw [3 hidden]5 mins ago
He is exactly right. Complexity is eating the world. Faster CPUs, larger memories and faster networks have brought great capability -- but removed the natural governor on software complexity.
agumonkey [3 hidden]5 mins ago
the internet era also changed things
- never ending upgrades
- unstable ui, unstable ecosystems
- fix it later
- shallow business models (subscriptions, freemium, adinfested)
- social everything (it used to be that my computer was a little escape space where and when I wanted; now I'm tethered to the web noise)
aiono [3 hidden]5 mins ago
I am a bit confused because many of the things he criticises are opposites of each other, such as adding complexity with new features versus backward compatibility, or not reinventing the wheel versus using new tools.
sirnicolaz [3 hidden]5 mins ago
A bit of incoherence between this
'We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels.'
and this
'We are destroying software pushing for rewrites of things that work.'
But generally speaking I grok it.
antirez [3 hidden]5 mins ago
The rewrite line is about "Let's rewrite this big system from X to Y", not about experimenting creating new variants.
Trasmatta [3 hidden]5 mins ago
I think that's actually pretty pragmatic. Sometimes the answer is to reinvent the wheel. Sometimes the answer is to keep improving the system that already works.
The problem, IMO, is globally applying either rule.
jvanderbot [3 hidden]5 mins ago
"We are destroying software by always thinking that the de-facto standard for XYZ is better than what we can do, tailored specifically for our use case."
I think about this a lot. A general-purpose approach makes it easy to hop between shallow solutions to many problems. So technologists love to have a few of these on hand, especially because they job hop. They're well known, and managers love to say "why not just use XYZ".
But it's obvious that a fine-tuned, hand-crafted solution (possibly built from a general one?) is going to significantly outperform a general one.
Dansvidania [3 hidden]5 mins ago
Consider the other side too: general-purpose "tools" make the engineers relatively interchangeable.
I am unable to reconcile these two seemingly contradictory takes in the article (rest of it I concur with):
"We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
"We are destroying software pushing for rewrites of things that work."
By rewrite, do you mean 1-to-1 porting the code to another language because the reason is faster/easier/scalable/maintainable/_____ (insert your valid reason here)?
swaraj [3 hidden]5 mins ago
I spent 10 hours this week upgrading our pandas/snowflake libs to the latest versions because there was apparently a critical vulnerability in the version we used (which we need to fix because a security cert we need requires us to fix these). The latest versions are not major upgrades, but they completely changed the types of params accepted. An enormous waste of time, delivering 0 value to our business.
parasti [3 hidden]5 mins ago
Security updates are probably the only type of updates that I wouldn't ever call a waste of time. It sucks when they are conflated with feature updates or arbitrary changes, but by itself I don't understand calling them a waste of time.
ecocentrik [3 hidden]5 mins ago
Most novice programmers reject the complexity created by others in favor of their own.
rcarmo [3 hidden]5 mins ago
This is timely. I just spent a few hours rewriting my custom homepage/dashboard around a bit of JavaScript that takes a JSON file, fills a few tabs with icons and provides type ahead find functionality to quickly get to the services I want.
There are dozens of homelab solutions for this (homepage, heimdall, etc.) but none of them was simple enough, framework-free or able to run completely offline (or even from cache).
Being able to code something that fits into a couple of screenfuls of text and does 99% of what I need still feels like “the right way”.
snowstormsun [3 hidden]5 mins ago
We are destroying software by letting non-technical people make technical decisions.
airstrike [3 hidden]5 mins ago
We are destroying software by thinking the web is the only platform in which we can write GUIs
layer8 [3 hidden]5 mins ago
I can only hope that the tech stack for server-served apps will be much more sane in a couple of decades (for the developers of the future; it’ll be too late for me). We are really in a very bad place now in that regard.
scarface_74 [3 hidden]5 mins ago
What do you propose? Do you remember how bad it was trying to upgrade and maintain windows programs across a large organization?
zifpanachr23 [3 hidden]5 mins ago
These are all complaints specific to a particular attitude and kind of web software development that just happens to be most prevalent within a place like SV.
I think the situation (loosely, wasting time spinning your wheels by modernizing things that are actually fine the way they are and may be made worse by adopting "hot new thing") looks worse when you see it through that lens than it actually is throughout the industry as a whole. There are plenty of opportunities for modernization and doing some of the things described in this article that actually make some sense when applied to appropriate situations.
In other words, I totally understand the vibes of this post; it's one of the reasons I don't work in the parts of the industry where this attitude is most prevalent. I would never feel the push to write a post like it, though, because the poster is, I think, being a bit dramatic. At least that's the case looking at the industry from my (I'd argue broader) vantage point of being an expert in, and working quite a bit with, "legacy" companies and technologies that maybe could stand to have a new UI implemented and some tasteful modern conventions adopted, at least as options for the end users.
Syzygies [3 hidden]5 mins ago
Yes. There are better and worse ways to lead a team, just as there are better and worse ways to prompt AI.
This post reads as a description of how the wheels come off the wagon if you don't do things well.
With the evolution of AI agents, we'll all be virtual CTOs whether we like it or not, mastering scale whether we like it or not. We need to learn to do things well.
Managerial genius will be the whole ball of wax.
meltyness [3 hidden]5 mins ago
You can't buy proprietary software. You can buy a specification. "my computer does this". If you're building software under agile, the outputs should be a hierarchical specification clearly traceable to code, the specification should remain clean, it could be simply the GUI in some rare, simple cases but that may be short lived, often the docs are an important part of the specification. Ideally, just a handful of high-level user stories that do not close.
Without a spec, the world falls into disarray and chance: next you need QA tests, and security auditors, and red teams, salespeople, solutions engineers, tech support, training, and a cascade of managers who try to translate "want" into "does" and "when" and to understand and accept the results. Architects and seniors who are both domain experts and skilled coders become the single source of truth on what the GUI is even supposed to mean. You take on varying levels of risk with contracts, hires, or expanding and contracting the previously mentioned R&D units. That's not software anymore, that's consulting. It's so expensive and unsustainable that it's only a matter of time until you're the leg that gets gnawed off, which is inevitably the result when burn and panic (or other negative factors) lead you away from turning pain points (like issues, training, or difficult usage) into ice cubes for a cocktail.
remoquete [3 hidden]5 mins ago
To the list I'd add:
We're destroying software when we think documentation does not matter.
antirez [3 hidden]5 mins ago
Well put. And even when we think that writing docs is a less important task.
JKCalhoun [3 hidden]5 mins ago
There was the point about "comments" not mattering.
remoquete [3 hidden]5 mins ago
Comments are just one bit of the docs.
contingencies [3 hidden]5 mins ago
Came to post the same.
I was going to phrase it: We are destroying software by neglecting to document it and its design decisions.
skybrian [3 hidden]5 mins ago
This seems rather alarmist because it's so focused on the bad things and ignores the improvements that come along with the churn.
With each new language and new library, there's a chance to do it better. It's quite a wasteful process, but that's different from things getting worse.
the_mitsuhiko [3 hidden]5 mins ago
You can read it as alarmist, or you can read it as a rather sobering reflection on the current state of affairs. The world will not end, nothing will explode. What I think will happen, however, is that the next generation of engineers is going to judge us by our creations and learn from them.
pvg [3 hidden]5 mins ago
I think CJ (Continuous Judging) is already the well-established, default practice.
the_mitsuhiko [3 hidden]5 mins ago
I only take issue with the "alarmist" part of it.
davedx [3 hidden]5 mins ago
Depends where you work and what on and who with.
Software engineering is far from a monoculture.
Maybe what I’ve seen change over the years is that the strategy of “pull a bunch of component systems together and wire them up to make your system” used to be more of an enterprise strategy but is now common at smaller companies too. Batteries-included frameworks are out; batteries-included dependencies in a bunch of services and docker images are in.
It’s true many people don’t invent wheels enough. But there are a lot of developers out there now…
kreyenborgi [3 hidden]5 mins ago
> We are destroying software by making systems that no longer scale down: simple things should be simple to accomplish, in any system.
I am reminded of howto guides that require downloading a scaffolding tool to set up a project structure and signing up with some service before even getting to the first line of code.
kibwen [3 hidden]5 mins ago
The older I get the less stock I put in merely pointing out flaws without offering solutions.
You might say "I don't need to be able to propose a solution in order to point out problems", and sure, but that's missing the point. Because by pointing out a problem, you are still implicitly asserting that some solution exists. And the counter to that is: no, no solution exists, and if you have no evidence in favor of your assertion that a solution exists, then I am allowed to counter with exactly as much evidence asserting that no solution exists.
Propose a solution if you want complaints to be taken seriously. More people pointing out the problems at this point contributes nothing; we all know everything is shit, what are you proposing we do about it?
asadotzler [3 hidden]5 mins ago
Defining or clarifying the specifics of the problem is a critical step in solving (or not solving) it. We don't have a good understanding of all of the factors and how they contribute to this problem so having more people take a stab at understanding the problem and sharing that is a net positive. You may think that "we all know it already" but we don't. I discover new and meaningful ways that systems and people are fucking up software just about every year and have been for 25-30 years so I take strong issue with your "we all know" when clearly we don't, and in fact very much still disagree on the details of that problem, the very things we need to understand in order to best solve the problem.
JKCalhoun [3 hidden]5 mins ago
My rather broad solution has always been: let engineers own a part of a stack. Let an engineer own the UI for an app, own the database front-end. Let an engineer own the caching mechanism, let an engineer own the framework.
You give an engineer ownership and let them learn from their own mistakes, rise to the occasion when the stakes are high. This presumes they will have the last word on changes to that sandbox that they own. If they want to rewrite it, that's their call. In the end they'll create a codebase they're happy to maintain, and we will all win.
(And I think they'll be a happier engineer too.)
lupusreal [3 hidden]5 mins ago
> by pointing out a problem, you are still implicitly asserting that some solution exists
Weird take.
Hizonner [3 hidden]5 mins ago
> Because by pointing out a problem, you are still implicitly asserting that some solution exists.
Um, no you're not.
> And the counter to that is: no, no solution exists,
If that's the case, it's probably helpful to know it.
> we all know everything is shit, what are you proposing we do about it?
Give up on whatever doomed goal you were trying to reach, instead of continuing to waste time on it?
ChrisMarshallNY [3 hidden]5 mins ago
These all resonate deeply with me:
> We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
> We are destroying software by always thinking that the de-facto standard for XYZ is better than what we can do, tailored specifically for our use case.
> We are destroying software mistaking it for a purely engineering discipline.
> We are destroying software claiming that code comments are useless.
> We are destroying software by always underestimating how hard it is to work with existing complex libraries VS creating our stuff.
This one raises the question: "What is 'fast'?" I mean, is it high-performance, or quickly-written? (I think either one is a problem, but "quickly-written" leads to bad software, and overly-optimized software can be quite hard to maintain and extend).
> We are destroying software trying to produce code as fast as possible, not as well designed as possible.
LargeWu [3 hidden]5 mins ago
I take "fast" here to mean "written to deliver business value as soon as possible".
FridgeSeal [3 hidden]5 mins ago
> Don’t reinvent the wheel!” But reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels.
>We are destroying software pushing for rewrites of things that work.
> We are destroying software by jumping on every new language, paradigm, and framework.
Whilst I largely agree with the premise of the post, some of these points feel a little bit dissonant and contradictory. We can have stability and engineering quality _and_ innovation. What I think the author is trying to target is unnecessary churn and replacements that are net-worse. Some churn is inevitable, as it’s part of learning and part of R&D.
aorloff [3 hidden]5 mins ago
A huge amount of software is not designed. In fact, simply actually designing software instead of just allowing people to go hack away at things requires a culture around it.
That culture usually starts at the top if you have it in your company, but occasionally you will just be lucky enough to have it in your team.
So before you write some software, you describe it to your team in a document and get feedback (or maybe just your manager).
If you don't do this, you don't really have a software design culture at your company. My rule was basically that if a piece of software required more than 1 Jira ticket, then create a doc for it.
icedchai [3 hidden]5 mins ago
You are correct, but this goes against what is perceived as "agile." Today, we typically "fix it in the next sprint." It results in poor quality products. And it sucks.
johannessjoberg [3 hidden]5 mins ago
> We are destroying software trying to produce code as fast as possible, not as well designed as possible.
Legacy systems are born from rushing into implementation to feel productive rather than understanding the problem. Slow is smooth. Smooth is fast.
caseyy [3 hidden]5 mins ago
Also, simple is evident, smooth, maintainable, and fast.
If asked to implement a templated dictionary type, many C++ programmers will write a templated class. If they want to be responsible, they’ll write unit tests. But the simplest way is:
template<typename Key, typename Value>
using Dictionary = std::vector<std::pair<Key, Value>>;
It is trivially correct and won’t have any edge cases which the tests may or may not catch.
Some programmers would consider this implementation beneath them. In just 800 lines, they could build the type from scratch and with only a few months’ long tail of bugs.
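For what it's worth, a minimal sketch of how that alias might actually be used; the lookup helper below is hypothetical and assumes a linear scan is acceptable, which is usually the whole point of choosing this representation for small maps:
#include <algorithm>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

template<typename Key, typename Value>
using Dictionary = std::vector<std::pair<Key, Value>>;

// Hypothetical helper: linear lookup, returns nullptr when the key is absent.
template<typename Key, typename Value>
const Value* lookup(const Dictionary<Key, Value>& dict, const Key& key) {
    auto it = std::find_if(dict.begin(), dict.end(),
                           [&](const auto& kv) { return kv.first == key; });
    return it != dict.end() ? &it->second : nullptr;
}

int main() {
    Dictionary<std::string, int> ports = {{"http", 80}, {"https", 443}};
    ports.push_back({"ssh", 22});                        // insertion is just push_back
    if (const int* p = lookup(ports, std::string("ssh")))
        std::cout << *p << "\n";                         // prints 22
}
For a handful of entries this is hard to get wrong; once lookups start to dominate, reaching for std::map or std::unordered_map is the obvious next step.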
YZF [3 hidden]5 mins ago
Though it's not the only way to create legacy systems. Sometimes they are of sound design which deteriorates over time. Or the requirements shifted such that the design is no longer adequate. Or as the article mentions, rewrites for no good reason.
When I do a git blame of something coherent that worked well in production, that I wrote years ago, almost none of my original code has survived; every line is now written by a different person. That's the entropy of software systems left unchecked. It was not really rushed; there was a ton of time spent debating design before a single line was written.
jmclnx [3 hidden]5 mins ago
I agree and you are talking to the choir here :)
Many of these have been an issue since I started programming many decades ago on minis. It seems that as the hardware gets more powerful, we find ways to stress it out more and more, making your list true to a greater extent year after year.
BTW, I like your web site's format a lot!
tqi [3 hidden]5 mins ago
> We are destroying software by no longer taking complexity into account when adding features or optimizing some dimension.
The reality is that one man's "unnecessary complexity" is another's table-stakes feature. The post isn't entirely without merit, but most of it reads like a cranky old-timer who yearns for a simpler time.
antirez [3 hidden]5 mins ago
If you read the post as: software should not evolve, you are not reading it right.
Also, I'm very enthusiastic about modern AI and definitely open to new things that make a difference. The complexity I point my finger at, in the post, is all unnecessary for software evolution. Actually, it prevents going forward and making the software better, because one has to fight with the internal complexity, or with a culture that considers rewriting the software with X or using Y to be innovation.
Take for example the web situation. We still write into forms, push "post" buttons. Now we trigger millions of lines of code but the result is basically the same as 20 years ago.
austin-cheney [3 hidden]5 mins ago
Holy fuck. I internally explode in anger every time I hear Don’t reinvent the wheel.
To me this is the most sure way to identify the adults from the children in the room. People that can actually program, strangely enough, aren’t bothered by programming.
ludston [3 hidden]5 mins ago
The ones considered adults aren't so emotionally dysregulated that they get angry about sometimes-useful idioms.
austin-cheney [3 hidden]5 mins ago
The grown ups do sometimes get angry at children that lie about their capabilities and equivocate with bullshit excuses.
braebo [3 hidden]5 mins ago
My theory is that software has gotten easier to build, the barrier to entry lowered, the demand for more advanced functionality boomed, and the incentives promoting maintainability and longevity usurped by an ever increasing demand for rapid financial growth.
talles [3 hidden]5 mins ago
> We are destroying software mistaking it for a purely engineering discipline.
This one packs a lot of wisdom.
ks2048 [3 hidden]5 mins ago
This was one of the few that didn't resonate. What does it mean?
Doches [3 hidden]5 mins ago
I think the final point illustrates this one pretty succinctly: 'what will be left will no longer give us the joy of hacking.' Personally I build my own version of almost every software tool for which I regularly (like, daily) use a UI. So for e.g. personal note-taking, continuous integration, server orchestration, even for an IDE: I could use Apple Notes, CircleCI, Chef, VSCode, but I instead build my own versions of these.
I'm not a masochist; they're often built on top of components, e.g. my IDE uses the Monaco editor. But working on these tools gives me a sense of ownership, and lets me hack them into exactly the thing I want rather than e.g. the thing that Microsoft's (talented! well-paid! numerous!) designers want me to use. Hacking on them brings me joy, and gives me a sense of ownership.
Like an idealised traditional carpenter, I make (within reason) my own tools. Is this the most rational, engineering-minded approach? I mean, obviously not. But it brings me joy -- and it also helps me get shit done.
mceachen [3 hidden]5 mins ago
If I had to guess, antirez was describing engineering managers and tech leads that have (mis)read “clean code” or similar works, and take them as commandments from on high rather than tools and approaches that may be helpful or applicable in some circumstances.
Or, more generally, the fact that most of what the software industry produces is much more in line with “art” than “engineering” (especially when looked at from a Mechanical Engineer's or Civil Engineer's perspective). We have so much implementation flexibility to achieve very similar results that it can be dizzying from the standpoint of other engineering fields.
JanneVee [3 hidden]5 mins ago
In my view it is about design that requires taste and creativity. Engineering is about function, design is about form. If I build something that solves a problem but it isn't well designed, it can mean that no one actually uses it, even if it is a good piece of engineering.
ks2048 [3 hidden]5 mins ago
Yeah, I guess that's what he meant. In my mind, though, good "engineering" would also include good design, simplicity, etc.
petermcneeley [3 hidden]5 mins ago
It means you are an engineer.
dewey [3 hidden]5 mins ago
Today I was working on wrapping some simple Python library into a gRPC service so I can use it from some other Go services. The business logic is around 20 lines.
Setting up uv, Dockerfiles, GitHub Actions, Secrets etc. took me basically the whole day. I can very much relate to "We are destroying software with complex build systems."
ongytenes [3 hidden]5 mins ago
When developers become too dependent on AI to "assist" them in coding, they're more likely to be unable to debug "their own" code as they won't have a full grasp/understanding of it.
People tend to adapt to technology by becoming lazier, putting less effort into understanding. Look at how, after calculators became common, we got generations of people who struggle to do basic math.
fujinghg [3 hidden]5 mins ago
People struggle at basic mathematics because they don't need it for daily use in society. And you forget things when you don't use them.
You'll also find that most people do the same stuff they would have done without a calculator still without a calculator. The advantage now is when they do reach for the calculator they aren't working with perhaps 2 digits precision like the slide rule that preceded them or having to tabulate large amounts of figures to maintain precision.
sillysaurusx [3 hidden]5 mins ago
But there have always been people who struggle to do basic math. It doesn’t seem true that people will be unable to debug “their own” code, even if it was generated by AI, because people already learn how to debug open source software not written by themselves.
epolanski [3 hidden]5 mins ago
> Look at how after of calculators became common, we got generations of people who struggle to do basic math.
Do you have any data behind this claim?
> When developers become too dependent on AI to "assist" them in coding, they're more likely to be unable to debug "their own" code as they won't have a full grasp/understanding of it.
Again, do you have any evidence that this happens?
layer8 [3 hidden]5 mins ago
Maybe we should restrict code generation to the dumber models so that we can still use the more lucid models for debugging. ;)
(This is an allusion to Kernighan’s lever.)
scarface_74 [3 hidden]5 mins ago
Old folks (raises hand) said the same thing about “high level languages” like C, back when people stopped learning assembly.
picafrost [3 hidden]5 mins ago
I don’t disagree. But if you want to work as an artisan you don’t work at a factory. Similarly, you can’t expect the products coming out of a factory to be at the artisan level.
What is it about software that causes us to search for aesthetic qualities in the instructions we write for a machine? Ultimately the problems we’re solving, for most of us, will be utterly meaningless or obsolete in ten years at most.
futuraperdita [3 hidden]5 mins ago
I think a lot of software engineers have been in denial that we have been working in factories, as a lot of the ZIRP-era technical structures were constructed in such a way as to make it seem like we were part of guilds. The average company cares about craft only insofar as it feeds enough product reliability and productivity.
That illusion has been lifted a little harshly for a lot of people over the past year or so. I still enjoy software-as-craft but I don't hold any false belief that my day job does.
aredestroyoursz [3 hidden]5 mins ago
What makes anyone think that anything in the universe is protected from the innate natural process of divide, conquer, exploit and hoard...
He makes very good points. But he missed one. We are destroying software (or anything else) by waiting till something goes wrong to fix it. E.g.: software security, US food standards and their relation to the health of its citizens, etc...
IshKebab [3 hidden]5 mins ago
These are just barely meaningful and dubiously accurate rants that are designed to make the author feel superior.
> We are destroying software by no longer caring about backward APIs compatibility.
Come on, who believes this shit? Plenty of people care about API backwards compatibility.
> We are destroying software pushing for rewrites of things that work.
So never rewrite something if it "works"? There are plenty of other good reasons to rewrite. You might as well say "We are destroying software by doing things that we shouldn't do!"
> We are destroying software claiming that code comments are useless.
No we aren't. The crazy "no comments" people are a fringe minority. Don't confuse "no comments" (clearly insane) with "code should be self-documenting" (clearly a good idea where possible).
Worthless list.
ninkendo [3 hidden]5 mins ago
> We are destroying software by no longer caring about backward APIs compatibility.
My take: SemVer is the worst thing to happen to software engineering, ever.
It was designed as a way to inform dependents that you have a breaking change. But all it has done is enable developers to make these breaking changes in the first place, under the protective umbrella of “I’ll just bump the major version.”
In a better universe, semver wouldn’t exist, and instead people would just understand that breaking changes must never happen, unless the breakage is obviously warranted and it’s clear that all downstreams are okay with the change (i.e. nobody’s using the broken path any more).
Instead we have a world where SemVer gives people a blank check to change their mind about what API they want, regularly and often, and for them to be comfortable that they won’t break anyone because SemVer will stop people from updating.
But you can’t just not update your dependencies. It’s not like API authors are maintaining N different versions and doing bug fixes going all the way back to 1.0. No, they just bump majors all the time and refactor all over the place, never even thinking about maintaining old versions. So if you don’t do a breaking update, you’re just delaying the inevitable, because all the fixes you may need are only going to be in the latest version. So any old major versions you’re on are by definition technical debt.
So as a consumer, you have to regularly do breaking upgrades to your dependencies and refactor your code to work with whatever whim your dependency is chasing this week. That callback function that used to work now requires a full interface just because, half the functions were renamed, and things you used to be able to do are replaced with things that only do half of what you need. This happens all the god damned time, and not just in languages like JavaScript and Python. I see it constantly in Rust as well. (Hello Axum. You deserve naming and shaming here.)
In a better universe, you’d have to think very long and very carefully about any API you offer. Anything you may change your mind on later, you better minimize. Make your surface area as small as possible. Keep opinions to a minimum. Be as flexible as you can. Don’t paint yourself into a corner. And if you really, really need to do a big refactor, you can’t just bump major versions: you have to start a new project, pick a new name (!), and find someone to maintain the old one. This is how software used to work, and I would love so much to get back to it.
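For illustration, a rough sketch of the compatibility rule that this "protective umbrella" leans on; it mirrors the common caret-style convention for versions at or above 1.0.0, not the exact behavior of any particular package manager:
#include <tuple>

struct Version { int major, minor, patch; };

// Caret-style rule: accept an update only if the major version matches and the
// (minor, patch) pair is at least what is already required.
bool compatible(Version required, Version candidate) {
    return candidate.major == required.major &&
           std::tie(candidate.minor, candidate.patch) >=
           std::tie(required.minor, required.patch);
}

// A dependency declared against 1.4.2 will pick up 1.5.0 automatically, but
// 2.0.0 is quietly left behind until someone does the breaking upgrade by hand.
That last comment is the whole mechanism: the major bump feels free to the publisher precisely because the tooling shields every consumer from it until each of them opts in.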
antirez [3 hidden]5 mins ago
I think likewise. Semver in theory was a good idea (at least in principle: I never liked the exact form, since the start), and the original idea was that the version number was a form of documentation. But what it became is very different: a justification for breaking backward compatibility "because I bumped the version".
debeloo [3 hidden]5 mins ago
>But all it has done is enable developers to make these breaking changes in the first place, under the protective umbrella of “I’ll just bump the major version.”
Which is just fine when it is a non funded free software project. No one owes you anything in that case, let alone backwards compatibility.
ninkendo [3 hidden]5 mins ago
The problem is the normalization of breaking changes that has happened as a result. Sure, you don’t owe anybody backwards compatibility. You don’t owe anybody anything. But then whole ecosystems crop up built out of constituent parts which each don’t owe anyone backwards compatibility, and the result is that writing software in these ecosystems is a toxic wasteland.
It’s not an automatic outcome of free software either. The Linux kernel is famous for “we don’t break user space, ever”, and some of Linus’s most heated rants have come from this topic. All of GNU is made of software that doesn’t break backwards compatibility. libc, all the core utilities, etc, all have maintained deprecated features basically forever. It’s all free software.
colonial [3 hidden]5 mins ago
This attitude doesn't fix anything. Refusing to ever break an interface is how you get stuck with trash like C++'s std::regex, which is quite literally slower than shelling out to PHP - and it's not like we can just go and make C+++ to fix that.
Forbidding breaking changes isn't going to magically make people produce the perfect API out of the gate. It just means that fixes either don't get implemented at all, or get implemented as bolt-on warts that sow confusion and add complexity.
ninkendo [3 hidden]5 mins ago
Of course, mistakes happen. But the degree to which they happen reflects bad engineering in the first place, and this sort of bad engineering is the very thing that SemVer encourages. Move fast and break things, if you change your mind later you can just bump the major version. This attitude is killing software. I’d rather have a broken std::regex and a working std::regex2 than have a world where C++ code just stops compiling every time I update my compiler.
karmakaze [3 hidden]5 mins ago
It's a good message with poor framing. If anything we're not destroying software; we're making more of it than ever, of reduced quality and scope. It could be adjusted to "destroying software quality".
mihaaly [3 hidden]5 mins ago
Related: "We are not dying. That's how we live."
Or others may say: "There is software, and there are things that remind people of software".
brian-armstrong [3 hidden]5 mins ago
Seems like maybe antirez has lost the joy of coding, at least for a little while.
It's easy to get caught up in what you dislike about software, but you have to make yourself try new stuff too. There's always more delight to be found. Right now, I think it's good to embrace LLM assistants. Being able to sit down and avoid all the most tedious parts and focus purely on engineering and design makes every session much more enjoyable.
Software itself is better now than it has ever been. We're slowly chipping away at our own superstitions. Just because some people are fully employed building empty VC scams and software to nowhere does not damn the rest of the practice.
antirez [3 hidden]5 mins ago
I never lost the joy of coding, since I simply do what I want and refuse to do what I don't enjoy. But unfortunately most people can't, for many reasons, and I know many who are now very disappointed with the state of software.
brian-armstrong [3 hidden]5 mins ago
Thanks for replying, and sorry for implying the wrong state of things.
I think that writing software for an employer has always kind of sucked. There's never enough time to improve things to the state they should be and you're always one weird product feature from having to completely mess up your nice abstraction.
I do feel like writing hobby software is in a great state now though. I can sit down for 30 minutes and now with Cursor/LLM assistance get a lot of code written. I'm actually kind of excited to see what new open source projects will exist in a few years with the aid of the new tools.
Arguing against 10x complexity & abstractions, with 1x coding :-)
cmgriffing [3 hidden]5 mins ago
We are destroying software by directly contradicting ourselves when we describe how we are destroying software.
antirez [3 hidden]5 mins ago
Sometimes, to write a post that makes people think about complicated processes, you need to write statements that may sound dissonant. Reality is complicated and multi-sided, and I want to ignite a reflection on the state of software that is not specific to a single point of view, but more general: not an individual's, but a community's.
portaouflop [3 hidden]5 mins ago
The less software exists the better.
edanm [3 hidden]5 mins ago
This seems like an interesting time to post this. With the rise of LLMs, Software development has changed more in the last two years than at any time I can previously recall. And I strongly suspect that's just the beginning.
I believe we'll soon be looking for "the joy of programming" in a totally different way, far more outcome-oriented than it is today. (Which is, in my book, a good thing!)
layer8 [3 hidden]5 mins ago
I mean, you’re the CEO of an AI company.
edanm [3 hidden]5 mins ago
Umm, yeah, so? What is the implication of this comment supposed to be?
SaucyWrong [3 hidden]5 mins ago
The implication is that you’re fronting. It’s fine, I’m a technical founder of an AI company. The business demands that what you say is true. But for me, and many others, the joy of programming is in doing the programming. There is not a more outcome-driven modality that can bring us joy. And we either reject the premise or are grieving that it might eventually be true.
edanm [3 hidden]5 mins ago
I've been a software dev for 27 years, professionally for 21 years.
This idea is getting the causality arrows backwards. I'm not talking up AI because I'm in AI - I'm in AI because I believe it is revolutionary. I've been involved in more fields than most software devs, I believe, from embedded programming to 3d to data to (now) AI - and the shift towards Data & AI has been an intentional transition to what I consider most important.
I have the great fortune of working in what I consider the most important field today.
> But for me, and many others, the joy of programming is in doing the programming. There is not a more outcome-driven modality that can bring us joy. And we either reject the premise or are grieving that it might eventually be true.
This is an interesting sentiment. I certainly share it to some extent, though as I've evolved over the years, I've chosen, somewhat on purpose, to focus more on outcomes than on the programming itself. Or at least, the low-level programming.
I'm actually pretty glad that I can focus on big picture nowadays - "what do I want to actually achieve" vs "how do I want to achieve it", which is still super technical btw, and let LLMs fill in the details (to the extent that they can).
Everyone can enjoy what they want, but learning how to use this year's favorite library for "get back an HTML source from a url and parse it" or "display a UI that lets a user pick a date" is not particularly interesting or challenging for me; those are details that I'd just as soon avoid. I prefer to focus on big-picture stuff like "what is this function/class/file/whatever supposed to be doing, what are the steps it should take", etc.
js4ever [3 hidden]5 mins ago
Amen! I totally agree with Antirez
stego-tech [3 hidden]5 mins ago
My own version, from the world of IT.
We continue to add complexity for the sake of complexity, rarely because of necessity. We never question adding new things, yet there's always an excuse not to streamline, not to remove, not to consolidate.
We must deliver new features, because that is progress. We must never remove old systems, because that harms us. We must support everything forever, because it's always too costly to modernize, or update, or replace.
It doesn't matter if the problem doesn't exist, what matters is that by incorporating this product, we solve the problem.
We must move everything off the mainframe and onto servers. We must move every server into virtual machines. We must move every virtual machine into AWS. Every EC2 instance into a container. Every container into Kubernetes. Into OpenShift. We must move this code into Lambda. Do not question the reasoning or its value, it must be done.
How did our budget balloon in size? Why are our expenses always going up? Why is our cloud bill so high? We must outsource, clearly. We must hire abroad. We must retain contractors and consultants. We need more people with certifications, not experience.
Why is everything broken? Why is our data leaking into the public realm? How did we get hacked? We must bring in more contractors, more consultants. That is the only answer. We don't need internal staff, we need managers who can handle vendors, who can judge KPIs, who can identify better contractors at cheaper costs year after year.
Why is the competition beating us? What vendor should we go with to one-up our competition? We must consult the Gartner report. We must talk to other CIOs. We must never talk to our staff, because they don't understand the problem.
We don't like our vendor anymore. They raised prices on us, or didn't take us to that restaurant we liked for contract negotiations, or didn't get us box seats for that event this year. They must go. What do you mean we can't just leave AWS this quarter? What do you mean we can't just migrate our infrastructure to another competitor? That proprietary product is an industry standard, so just move to another one. What are we even paying you for?
We checked all the boxes. We completed all the audits. We incorporated all the latest, greatest technologies. We did everything we were told to. So why aren't we successful? Why aren't we growing?
...ah, that's why. We didn't incorporate AI. That must be it. That will fix it all.
Then we'll be successful. Then everything will work.
meltyness [3 hidden]5 mins ago
Is adopting the manifesto a violation of the manifesto?
andrewstuart [3 hidden]5 mins ago
So, stop using dependencies, write everything yourself.
Trasmatta [3 hidden]5 mins ago
> We are destroying software pushing for rewrites of things that work.
And sometimes the reverse problem is what is destroying software: being unwilling to push for rewrites of things when they desperately need one. I think we may have over indexed on "never rewrite" as an industry.
> We are destroying software mistaking it for a purely engineering discipline.
And I'm also seeing this the other way: engineers are beginning to destroy non-software things, because they think they can blindly apply engineering principles to everything else.
DocTomoe [3 hidden]5 mins ago
We are also losing the joy of hacking together software by not adapting - or sometimes abandoning - useful development paradigms.
I just tried to develop a simple CRUD-style database UI for a mechanical hobby project. Being on the backend/systems side of the spectrum, for the UI I decided "yeah, considering I work on a Mac now, and there's no such thing as WinForms here, let's quickly throw together something small with python and tkinter". God, that was a mistake and led to both suffering and lost days during which I did not work on the main project.
How is it that in 2025, we still do not have some sort of Rapid Application Development tool for Mac? How do we still not have some sort of tool that allows us to just drag a database table into a window, and we have basic DB functionality in that window's app? Jobs demonstrated that capability in the NeXT demo, Visual Studio has been doing it for decades, so has Delphi. But on Mac?
Swift is a train wreck, tkinter is like painting a picture with yourself being blindfolded, Qt does a lot of things right, but has a funky license model I am not comfortable with - and has pain points of its own.
I eventually coughed up the 100 bucks and got a Xojo license ... now I am working in BASIC, for the first time since 1992 - but it has an interface designer. For a massive boost in usability, I have to get back into a language I wished to forget a long time ago.
And that does not spark joy.
Yes, bloat is bad. Yes, I too make fun of people who have to import some npm package to check whether a number is odd or even.
But sometimes, you are not Michelangelo, chiselling a new David. Sometimes, you just need a quick way to do something small. Actively preventing me from achieving that by refusing to go with the times is destructive.
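For what it's worth, the absolute bare-bones, read-only version of the "drag a database table into a window" idea is doable with the Python stdlib alone (sqlite3 plus tkinter's Treeview). This is only a sketch to show the gap being described, not a RAD tool; the database file and table name ("hobby.db", "parts") are invented for the example.

    import sqlite3
    import tkinter as tk
    from tkinter import ttk

    def show_table(db_path: str, table: str) -> None:
        # Read every row of the table and display it in a grid; read-only.
        conn = sqlite3.connect(db_path)
        cur = conn.execute(f"SELECT * FROM {table}")  # table name is trusted here
        columns = [desc[0] for desc in cur.description]

        root = tk.Tk()
        root.title(table)
        tree = ttk.Treeview(root, columns=columns, show="headings")
        for col in columns:
            tree.heading(col, text=col)
        for row in cur.fetchall():
            tree.insert("", tk.END, values=row)
        tree.pack(fill=tk.BOTH, expand=True)
        conn.close()
        root.mainloop()

    if __name__ == "__main__":
        show_table("hobby.db", "parts")  # hypothetical database and table

Editing, inserting and deleting rows is where the real work starts, which is exactly the part the old RAD tools generated for you.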
petepete [3 hidden]5 mins ago
Would FileMaker have worked? It sounds like it's a good fit.
DocTomoe [3 hidden]5 mins ago
Yes, but it appears that FileMaker only works with their cloud-based DB these days, and I really like my data to stay at home, in something like SQLite or a NAS-hosted MySQL - mostly so I can directly interface my self-designed scanning/photography/sorting bot with it.
semiinfinitely [3 hidden]5 mins ago
this person must have never seen a bell curve meme
50 IQ: haha software is complicated whatever
100 IQ: (this post)
150 IQ: software is complicated whatever
cedws [3 hidden]5 mins ago
Capitalism doesn't incentivise building quality software, it incentivises building as quickly as possible and covering up the flaws because "we'll fix it later" (spoiler: it doesn't get fixed later.)
Until the incentives change, the outcome won't change.
Pointless meetings and bureaucracy don't help either. Instead of giving engineers time and breathing room to build well-defined systems, organisations treat them like fungible workhorses that must meet arbitrary deadlines.
bdhcuidbebe [3 hidden]5 mins ago
”we are” not antirez.
Semi-related: start using profane variable names, because apparently it will cause Copilot to stop analyzing your code.
uses [3 hidden]5 mins ago
Yeah, 20 years ago when I started building websites I was still in my CS major. We didn't have these huge layers of libraries and frameworks. I wonder how much I would know now if I didn't have to just build everything myself. We would make all kinds of things and we did it all from scratch - upload files, log in, save stuff in databases using actual sql, entire ecommerce experiences, I would write the whole "front end" and "back end" by hand. And everything was really fast.
recursivedoubts [3 hidden]5 mins ago
lisan al gaib!
alexashka [3 hidden]5 mins ago
So much of life amounts to complaining that others don't think like you and don't value the same things you value :)
thebiglebrewski [3 hidden]5 mins ago
Interesting takes. I got a sort of "Yom Kippur Vidui" vibe from this if anyone else in the tribe is reading haha.
Read responsively
"We are destroying software by no longer taking complexity into account when adding features or optimizing some dimension.
And we are destroying software with complex build systems.
We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
And we are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
Yie-die-die-die-diiiiieee-dieeee,Yie die die die dieee dieeee
kittikitti [3 hidden]5 mins ago
“Don’t reinvent the wheel!”
I think people vastly underestimate "the wheel". The wheel was something recreated independently over and over again for thousands of years across human history. Even your favorite programming language or web framework is not comparable to "the wheel".
scotty79 [3 hidden]5 mins ago
We are destroying software by writing it. Perfect, bug-free software is the software that doesn't exist.
throwaway81523 [3 hidden]5 mins ago
destroyallsoftware.com is an excellent web site :).
rvz [3 hidden]5 mins ago
Now we continue to destroy software due to the immediate need to off-load our thinking to LLMs that pretend to 'reason' and generate code the engineer has little understanding of, which increases the risk of introducing hidden bugs and lowers software quality.
Maybe we are destroying software, but at the same time we can deliver product/services to customers.
moktonar [3 hidden]5 mins ago
We are not destroying software, money is.
foxhop [3 hidden]5 mins ago
For somebody who cares about software, why are you advertising Disqus comments? Link in bio to a public-domain commenting system for static sites.
thomastjeffery [3 hidden]5 mins ago
We are destroying software to build on top of it over and over again. What could we do instead?
When we build software, we answer three questions: "what?", "how?", and "why?". The answer to what becomes the data (and its structure). The answer to how is the functionality (and the UI/UX that exposes it). The answer to why is...more complicated.
The question why is answered in the process of design and implementation, by every decision in the development process. Each of these decisions becomes a wall of assumption: because it is the designer - not the end user - making that decision.
Very rarely can the end user move or replace walls of assumption. The only real alternative is for the user to alter the original source code such that it answers their why instead.
Collaboration is the ultimate goal. Not just collaboration between people: collaboration between answers. We often call this "compatibility" or "derivative work".
Copyright, at its very core, makes collaboration illegal by default. Want to make a CUDA implementation for AMD cards? You must not collaborate with the existing NVIDIA implementation of CUDA, because NVIDIA has a copyright monopoly. You must start over instead. This is NVIDIA's moat.
Of course, even if copyright was not in the way, it would still be challenging to build compatibility without access to source code. It's important to note that NVIDIA's greatest incentive to keep their source code private is so they can leverage the incompatibility that fills their moat. Without the monopoly granted/demanded by copyright, NVIDIA would still have a moat of "proprietary trade secrets", including the source code of their CUDA implementation.
Free software answers this by keeping copyright and turning it the other direction. A copyleft license demands source code is shared so that collaboration is guaranteed to be available. This works exclusively for software that participates, and that is effectively its own wall.
I think we would be better off without copyright. The collaboration we can guarantee through copyleft is huge, but it is clearly outweighed by the oligopoly that rules our society: an oligopoly constructed of moats whose very foundations are the incompatibility that is legally preserved through copyright.
nurettin [3 hidden]5 mins ago
We are destroying software by selling it to greedy corporates who try to monopolize all open source clients for that software on github.
namuol [3 hidden]5 mins ago
Who is “we”? The forces eroding software quality are disproportionately caused by short-sighted VCs.
revskill [3 hidden]5 mins ago
Enjoying programming is harder than you think.
gijoeyguerra [3 hidden]5 mins ago
Not I.
ctrlp [3 hidden]5 mins ago
Not a fan of screeds like this. "We" are not doing anything. Some software sucks. Some doesn't. You can choose not to follow practices that produce shitty software. The cream will rise. None of these items is necessary and anyone can simply opt out. Just don't do it that way.
nbzso [3 hidden]5 mins ago
We are destroying humanity through software.
Someone driven by greed and social validation needs.
Someone by technocratic dreams of transhumanism.
The cult of acceleration brings inherited error.
False productivity and systems for control of thinking and behavioral patterns. In a normal and truly rational world, software is a tool for intent and operation.
In a world of mass hallucination (psychosis), doom-scrolling and international capital, software decay is a logical process. Your Leaders have no incentive to think systemically when they have guaranteed immunity.
Good software is the result of a meritocratic system with accountability and transparency.
Just take a statistical view on investment in education vs investment in AI infrastructure and the picture becomes clear.
The designer fallacy. Techno-optimism. Detachment from the real problems of humanity. Desensitized obedience for career growth or social benefits.
We build software on the shoulders of giants with a sense of reality and human connection. We lost this skill.
You cannot abstract to infinity. You cannot complicate things just because and expect quality and maintainability to emerge from the ether.
bbor [3 hidden]5 mins ago
IMO this is some quintessential false nostalgia mixed with uncritical, confirmation-bias driven cynicism. When exactly are we pointing to when we express these complaints? I get the sense that the answer is invariably either "my first job out of college" or "before React", whenever this comes up.
Even more fundamentally, it's built on the ubiquitous mistake made by those who make good money in the current status quo: it doesn't name the actual cause of rushed work et al., which is obviously capitalist cost-cutting, not lazy professionals.
TacticalCoder [3 hidden]5 mins ago
> We are destroying software with complex build systems.
> We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
> We are destroying software by making systems that no longer scale down: simple things should be simple to accomplish, in any system.
That's true, and I'd say we've got proof of that in the fact that so much software is now run in containers.
I always get downvoted for saying that it's not normal we now all run things in containers but I do run my own little infra at home. It's all VMs and containers. I know the drill.
It's not normal that, to do something simple which should be "dumb", it's easier to just launch a container and then interface with the thing using new API calls that are going to be outdated at the next release. We lost something, and it's proof we gave up.
This containerization-of-all-the-things is because we produce and consume turds.
Build complexity went through the roof, so we have to isolate a specific build environment in a container file (or, worse, a specific environment tailored to accept one build already made).
Criticize Emacs as much as you want: the thing builds just fine from source, with way more lines of code (moreover in several languages) than most projects. And it has built fine for decades (at least for me). And it doesn't crash (emacs-uptime -> 8 days, 5 hours, 48 minutes, and that's nothing. It could be months, but I sometimes turn my computer off).
Nowadays you want to run this or that: you better deploy a container to deal with the build complexity, deployment complexity and interacting complexity (where you'll use, say, the soon-to-be-updated REST calls). And you just traded performance for slow-as-molasses-I-wrap-everything-in-JSON calls.
And of course because you just deployed a turd that's going to crash anyway, you have heartbeats to monitor the service and we all applaud when it automatically gets restarted in another container once it crashed: "look what a stable system we have, it's available again?" (wait what, it just crashed again, oh but no problem: we just relaunched another time)
It's sad really.
jumploops [3 hidden]5 mins ago
We destroyed software when we stopped writing machine code by hand.
kadushka [3 hidden]5 mins ago
2 years from now 99.9% of all software will be written by AI. So just put those statements into your system prompt.
hacker_homie [3 hidden]5 mins ago
My takeaway is that packages were a mistake. I have never heard anyone say they were happy to deal with packages or versioning.
Dependencies should be expensive to slow the bloat.
nickjj [3 hidden]5 mins ago
The timing of this is pretty convenient.
I just open sourced a CLI tool for income and expense tracking yesterday at https://github.com/nickjj/plutus and I'd like to think I avoided destruction for each of those bullets.
You have a choice on the code you write and who you're writing it for. If you believe these bullets, you can adhere to them, and if you're pressured at work not to, there are other opportunities out there. Don't compromise on your core beliefs. Not all software is destroyed.
I think you're confusing "different" with "better", and you're confusing someone's small, almost personal experiments implemented as proof-of-concept projects with actual improvements.
I mean, Plan 9 was designed with a radical push for distributed computing in mind. This is not "better" than UNIX's design goals, just different.
Nevertheless, Plan 9 failed to gain any traction and in practice was pronounced dead around a decade ago. In the meantime, UNIX and Unix-like OSes still dominate the computing world to this day. How does that square with your "better approaches" assertion?
The argument about the Go programming language is particularly perplexing. The design goal of Go has nothing to do with the design goal of C. Its designers were very clear that their goal was to put together a high-level programming language and tech stack designed to address Google's specific problems. Those weren't C's design requirements, were they?
https://go.dev/talks/2012/splash.article
> We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.
Go is basically Limbo in new clothing; Limbo took up the lessons of Alef's design failure.
They could have designed their C++ wannabe replacement in many other ways.
Only rules 7 and 9 are measurable and not purely subjective.
If you design for an ephemeral state, it doesn't make sense to be long lasting.
3D printing a door handle that perfectly fits my hand, my door, the space it moves in and only lasts until I move to another house can be the ultimate perfect design _for me_.
I'd see the same for prosthetic limbs that could evolve as the wearer evolves (e.g. grows up or ages) or as what they expect from it changes.
https://web.stanford.edu/class/archive/cs/cs240/cs240.1236/o...
https://en.m.wikipedia.org/wiki/Unix_philosophy
> The Unix philosophy emphasizes building simple, compact, clear, modular, and extensible code that can be easily maintained and repurposed by developers other than its creators
Nobody says "we want to build complicated, sprawling, unclear, unmodular and rigid code", so this isn't a statement that sets UNIX apart from any other design. And if we look at the competing space of non-UNIX platforms, we see that others arguably had more success implementing these principles in practice. Microsoft did COM, which is a much more aggressive approach to modularity and composable componentization than UNIX. Apple/NeXT did Objective-C and XPC, which is somewhat similar. Java did portable, documented libraries with extensibility points better than almost any other platform.
Many of the most famous principles written down in 1978 didn't work and UNIX practitioners now do the exact opposite, like:
• "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features" yet the --help for GNU grep is 74 lines long and documents 49 different flags.
• "Don't hesitate to throw away the clumsy parts and rebuild them." yet the last time a clumsy part of UNIX was thrown away and rebuilt was systemd, yielding outrage from people claiming to defend the UNIX philosophy.
About the only part of the UNIX philosophy that's actually unique and influential is "Write programs to handle text streams, because that is a universal interface". Yet even this principle is a shadow of its former self. Data is exchanged as JSON but immediately converted to/from objects, not processed as text in and of itself. Google, one of the world's most successful tech companies, bases its entire infrastructure around programs exchanging and storing binary protocol buffers. HTTP abandoned text streams in favor of binary.
Overall the UNIX philosophy has little to stand apart other than a principled rejection of typed interfaces between programs, an idea that has few defenders today.
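To make the point about JSON concrete, here is a small illustration (a toy with invented field names "level" and "msg"): the pipeline style treats the data as a text stream, while the typical modern consumer immediately rebuilds objects and works on structure instead.

    import json

    log_lines = [
        '{"level": "error", "msg": "disk full"}',
        '{"level": "info",  "msg": "started"}',
    ]

    # UNIX-pipeline style: the text stream itself is the interface.
    errors_as_text = [line for line in log_lines if '"error"' in line]

    # Typical modern style: parse to objects, filter on structure.
    errors_as_objects = [rec for rec in map(json.loads, log_lines)
                         if rec["level"] == "error"]

    print(errors_as_text)
    print(errors_as_objects)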
Implementation simplicity meant one important thing: Unix could be quickly developed and iterated. When Unix was still new, this was a boon and Unix grew rapidly, but at some point backward compatibility had to be maintained and we were left with a lot of cruft.
Unfortunately, since implementation simplicity and development speed nearly always took precedence over everything else, this cruft could be quite painful. If you look at the C standard library and traditional Unix tools, they are generally quite user hostile. The simple tools like "cat" and "wc" are simple enough to make them useful, but most of the tools have severe shortcomings, either in the interface, lack of critical features or their entire design. For example:
1. ls was never really designed to output directory data in a way that can be parsed by other programs. It is so bad that "Don't parse ls" became a famous warning for shell script writers[1].
2. find has a very weird expression language that is hard to use or remember. It also never really heard about the "do one thing well" part of Unix philosophy and decided that "be mediocre at multiple things" is a better approach. Of course, finding files with complex queries and executing complex actions as a result is not an easy task. But find makes even the simplest things harder than they should be.
A good counterexample is "fd"[2]. You want to find a file that has "foo" somewhere in its name in the current directory and display the path in a friendly manner? fd foo vs find . -name '*foo*' -printf "%P\n". Want to find all .py files and run "wc -l" on each of them? fd --extension py --exec wc -l (or "fd -e py -x wc -l" if you like it short), while find requires you to write find . -name '*.py' -exec wc -l {} \;. I keep forgetting that and have to search the manual every time. (A stdlib alternative that sidesteps both tools is sketched after this comment.)
Oh, and as a bonus, if you forget to quote your wildcards for find they may (or may not!) be expanded by the shell, and end up giving you completely unexpected results. Great foolproof design.
3. sed is yet another utility that is just too hard to learn. Most people use it mostly as a regex find-and-replace tool in pipes nowadays, but its regex syntax is quite lacking. This is not entirely sed's fault, since it predates Perl and PCRE, which set the modern standard for regular expressions that we expect to work more or less the same everywhere. But it is another example of a tool that badly violates the principles of good design.
The Unix Haters Handbook is full of many more examples, but the reality is that Unix won because other OSes could not deliver what their users needed fast enough. Unix even brought some good ideas to the mass market (like pipes) even if the implementation was messy. We now live under the shadow of its legacy, for better or worse.
But I don't think we should idolize the Unix philosophy. It is mostly a set of principles ("everything is a file", "everything is text" and "each tool should do one job", "write programs to work together") that was never strictly followed (many things in UNIX are not files, many commands do multiple jobs, most commands don't interact nicely with each other unless you just care about opaque lines). But most importantly, the Unix philosophy doesn't extend much beyond designing composable command line tools that handle line-oriented text for power users.
[1] https://mywiki.wooledge.org/ParsingLs
[2] https://github.com/sharkdp/fd
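The stdlib alternative mentioned in point 2 above: a sketch (not from the comment) that does the "count lines in every .py file" task with Python's pathlib, sidestepping both ls-parsing and find's expression language. The root directory "." is just an example.

    from pathlib import Path

    def count_lines(root: str = ".") -> dict[str, int]:
        # Roughly what `find . -name '*.py' -exec wc -l {} \;` reports.
        counts = {}
        for path in Path(root).rglob("*.py"):
            try:
                counts[str(path)] = sum(1 for _ in path.open(errors="replace"))
            except OSError:
                pass  # skip unreadable files, as wc would warn and continue
        return counts

    if __name__ == "__main__":
        for name, n in sorted(count_lines().items()):
            print(f"{n:8d} {name}")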
Why?
Maybe it's meant in an artistic sense, but under an engineering one I just don't see it.
"We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
There are very real drawbacks to relying on other people's solutions to your problems. Sometimes they are outweighed by the hassle of implementing your own solution, but in many cases they are not.
> > Why?
> a "good design" is indeed innovative in some way
Proof by repetition, I guess? You haven't answered the question in any meaningful way, just repeated the original assertion. It's still as unsupported as it ever was.
Sometimes the things we consider already solved can be solved better with nuances that maybe weren't considered before.
innovative vs useful, understandable vs honest, long lasting vs thorough, aesthetic vs unobtrusive,
What?
https://www.vitsoe.com/us/about/good-design#good-design-is-i...
> Everything is a file
To me it could be:
Something accessible via a file descriptor that can be read from and/or written to. Feel free to add some other details like permissions, as needed.
Perhaps they should allow for folders as well, since a complex piece of hardware undoubtedly needs to expose multiple files to be robust, but the underlying intention was to create a standardized way of interacting with hardware.
Sectors on disk, switches, analog devices like a speaker, i2c, and other hardware all need to be read from or written to in some form to be useful.
The most common example of something almost all programs universally interact with is BSD sockets. In Plan9, which goes out of its way to follow this everything-is-a-file philosophy, TCP/UDP connections are actually represented as files. You open a file and write something to it to create the connection, and then you read or write to other files to read the streams, and you write again to the control file to close the connection. On the server side, you similarly write to a control file to start accepting packets, and monitor a directory to check for new connections, and so on.
Note that "file" here has a pretty clear definition: anything that can be interacted with using strictly the C FILE api - open()/read()/write()/close().
Calling that a "file" is ... a humongous stretch, to put it mildly.
> Xorg
I guess it didn’t say pleasing?
“Good design is as little design as possible” ok cool but I have 30 different feature requests coming in every week from actual users, that doesn’t really help me make concrete design decisions
“Good design is aesthetic” whose aesthetic? Aesthetic is extremely cultural and arbitrary, my designers are fighting over whether a photorealistic leather texture is more aesthetic than a gradient texture background, how does that help?
“Good design makes a product useful” uh yeah okay I’ve never had a design problem that was solved by someone saying “you know what, we should just make this product useful” “oooh right how did we not think of that”.
I mean these principles all sound good and high falutin’ as abstract statements, but I’ve never found them useful or actionable in my 15+ years working as a designer.
“Good design is as little design as possible”
What you create should be a natural flow of what your clients need to do. Don't go and add a lot of options like a plane cockpit. Which usually means trying to find the common theme and adding on top of it, and also clamping down on fantasy wishes.
"Good design is aesthetic"
I'd take the definition of pleasing instead of beautiful for the term. When learning to draw, an oft-given piece of advice is to focus on and detail only a single part of the whole picture; everything else can be left out. So arguing over a single element is usually meaningless. If it's not going to be the focal point of interaction, then as long as it meshes with the whole, no one cares about the exact details.
“Good design makes a product useful”
Usability is a whole field, and you can find the whole corpus under the HCI (Human Computer Interaction) keyword. Focus on meeting this baseline, then add your creativity on top.
> I mean these principles all sound good and high falutin’ as abstract statements, but I’ve never found them useful or actionable
It's kinda like philosophy: you have to understand what it means for yourself. It's not a cake recipe to follow, but more of a framework from which to derive your own methodology.
Or do, because you're designing a plane cockpit :)
If you're working on a piece of software, how likely is it that people regularly compare it to the most effective alternative means of accomplishing the same task, and then revert course if it turns out you've actually created a more convoluted and time-consuming path to the same outcome? Oftentimes, software just gets in the way and makes life less easy than it would have been otherwise.
The opposite of these principles is often easier to reason about. For example, people attempting to make "better" versions of Hacker News seem to rarely be aware of these, and when they post to Show HN, hopefully realize that the way it is is hard to beat because it follows at least some of the principles. To make something better, you'd absolutely need to follow similar principles more effectively.
It depends; A/B testing sorta does that at the very granular level. Not so much at a high level.
You write software for a company so someone will give them money for it or so the company can save money
Everything else takes a backseat to that core mission. The first goal when writing software is to efficiently get to a point where one of those goals can be met.
It makes no sense to polish software if you are going to run out of money before it gets released, management cuts funding or you can’t find product market fit to convince investors to give you money depending on what your goal is.
Code doesn’t always need to be long lasting, you can’t predict how the market will change, how the underlying platform will change, when a better funded competitor will come in and eat your lunch, etc.
Good design doesn’t need to be “innovative”. It needs to fit within the norms of the platform or ecosystem it is part of.
I write little utilities for my parents, games for my son, a web shop for my wife. I write social spaces for myself and my friends. I write tools for myself.
I write software for strangers on the internet. I write software when I’m drunk, to test myself. Sometimes I write software simply for the joy of writing it, and it never runs again after that first “ah, finally!” moment. Aah, time well spent.
Equating “writing software” with “fulfilling business goals” is…quite depressing. Get outside! Feel the sunshine on your face! Write your own image-processing DSL, just for the sheer unbridled hell of it! Learn Nim! Why not?
(Ok, maybe skip the last one)
As someone who learned Nim as my first "serious" programming language, I do recommend to learn Nim. It is a delight to write and read.
Before I found Nim I looked at C, C++, and Python, and all of them are full of cruft - old bad decisions that they're stuck with and forced to keep in the language. And it makes them a nightmare to learn.
In C there seem to be hundreds of subtly different OS-dependent APIs for every simple thing.
C++ was designed by a mad scientist and extended to the point where even C++ developers have no idea what part of the language you should use.
Python is the messiest mess of OOP, with no documentation that is actually readable. Just to find out how to do anything in Python I need to look at sites like Stack Overflow and sift through outdated solutions for Python 2, deprecated functions, and giant third-party libraries. You don't just learn Python nowadays; you're forced to learn Python + NumPy + Pandas + Python package distribution (hell).
I had fun learning Nim, though.
Huh? Surely you don’t expect docs to answer generic questions like, “how do I flatten a nested list?”
Pandas (and Polars!) is an excellent library that serves specific needs. One of those is not doing basic file parsing. I’ve seen comments advocating its usage for tasks as simple as “read a CSV file in and get the Nth column.” The same goes for NumPy – a powerful library that’s ridiculously over-used for things that are trivially solvable by the stdlib.
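As an illustration of the kind of task meant here (the file name "data.csv" and the column index are placeholders), the stdlib csv module covers "get the Nth column" on its own:

    import csv

    def nth_column(path: str, n: int, skip_header: bool = True) -> list[str]:
        # Return the n-th (0-based) column of a CSV file as a list of strings.
        with open(path, newline="") as f:
            reader = csv.reader(f)
            if skip_header:
                next(reader, None)
            return [row[n] for row in reader if len(row) > n]

    # e.g. nth_column("data.csv", 2) -> every value in the third column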
My wife is out of town this weekend at a conference. I woke up, fixed breakfast, went outside and swam a few laps in the pool enjoying this 80-degree weather (the joys of living in Florida), hung out at the bar downstairs, came back upstairs and took a shower, and now I am heading back downstairs to hang out at one of the other bars and just shoot the shit with the bartender, who is a friend of mine, and whoever else shows up, while drinking soda (I go down to hang out, not always to drink) and listening to bad karaoke.
When my wife comes back tomorrow, we will hang out during the day and go back downstairs to the bar tomorrow to watch the Super Bowl.
We have 8 vacations planned this year, not including 5-6 trips to fly up to our hometown of Atlanta (where we lived all of our adult lives until two years ago) for things going on in our friends' lives, and to fly to my childhood home to see my parents and family.
Not bragging; most of our vacations aren't anything exotic or expensive, and I play the credit card points/sign-up bonus/churning game to offset some of the costs.
My focus on how to add business value was what allowed me to find strategic consulting jobs where most positions are still fully remote.
Or even staying inside and spending time with family
In my original comment:
whoever else shows up, while drinking soda (I go down to hang out, not always to drink) and listening to bad karaoke.
Hmm. I run a solo-founder SaaS business. I write software for my customers so that they can improve their workflows: essentially, work faster, with fewer mistakes, and make work easier. My customer pay me money if my software improves their lives and workflows. If it doesn't live up to the promise, they stop paying me money.
Most of Dieter Rams's design rules very much do apply to software that I write. I can't always afford to follow all of these rules, but I'm aiming to.
And while I don't always agree with antirez, his post resonated with me. Many good points there.
Incidentally, many of the aberrations he mentioned are side-effects of work-for-hire: if you work for a company and get a monthly salary, you are not directly connected to customers, do not know where the money comes from, and you are not constrained by time and money. In contrast to that, if you are the sole person in a business, you really care about what is the priority. You don't spend time on useless rewrites, you are super careful with dependencies (because they end up costing so much maintenance in the future), you comment your code because you do so much that you forget everything quickly, and you minimize complexity, because simpler things are easier to build, debug and maintain.
So your goal is to write software so that customers will give you money because they see that your software is valuable to them. How does that conflict with what I said? That’s the goal of every legitimate company.
I work in consulting. I work with sales, I am the first technical person a customer talks to on a new engagement and when they sign the contract, I lead the implementation and work with the customer. I know exactly where the money comes from and what the customer wants.
If a developer is not close to the customer and doesn’t have as their focus the needs of the business, they are going to find themselves easily replaced and it’s going to be hard to stand out from the crowd when looking for a job
Everybody can still write software however they like, just don't expect to earn money from it.
We had our own internal load balancer, web servers, mail servers, ftp servers to receive and send files, and home grown software.
Now I could reproduce the entire setup within a week at most, with some YAML files and hosted cloud services. All of the server architecture is "abstracted" away. One of the things he complains about.
As far as backwards compatibility, worshipping at the throne of backwards compatibility is one reason that Windows is the shit show it is. Even back in the mid-2000s there were over a dozen ways to represent a string when programming, and you had to convert back and forth between them.
Apple has been able to migrate between 5 processors during its existence by breaking backwards compatibility and even remove entire processing subsystems from ARM chips by removing 32 bit code compatibility.
This is my personal bugbear, so I’ll admit I’m biased.
Infrastructure abstractions are both great and terrible. The great part is you can often produce your desired end product much more quickly. The terrible part is you’re no longer required to have the faintest idea of how it all works.
Hardware isn’t fun if it’s not working, I get that. One of my home servers hard-locked yesterday to the point that IPMI power commands didn’t work, and also somehow, the CPUs were overheating (fans stopped spinning is all I can assume). System logs following a hard reset via power cables yielded zero information, and it seems fine now. This is not enjoyable; I much rather would’ve spent that hour of my life finishing the game of Wingspan.
But at the same time, I know a fair amount about hardware and Linux administration, and that knowledge has been borne of breaking things (or having them break on me), then fixing them; of wondering, “can I automate X?”; etc.
I’m not saying that everyone needs to run their own servers, but at the very least, I think it’s an extremely valuable skill to know how to manage a service on a Linux server. Perhaps then, the meaning of abstractions like CPU requests vs. Limits will become clear, and disk full messages will cause one to not spam logs with everything under the sun.
Windows is a shitshow because the leadership is chaotic, dragged all around all the time, never finishing anything well. They only survived because of backward compatibility! Building on the unlikely success in the 90s.
Also, why do I have to install new software every couple of months to access my bank account, secure chat, flight booking system, etc., etc., without any noticeable difference in operation and functionality? A lot of things unreasonably become incompatible with 'old' (we are talking about months, for f's sake!!) versions. That's a nuisance and an erosion of trust.
Web actually excels here because you can use service workers to manage versioning and caching so that backwards compatibility is never a concern.
Anything providing something like Linux with a polished surface and support for the tools of the rest of office IT (e.g. MS Word) was going to blow Windows away in this area.
So the success of OSX here is no surprise.
But the native way of doing it was VBScript with the Windows Script Host.
Are you talking about security updates?
Presumably a security update would mean a difference in operation somewhere. They were probably referring to the updates that just exist to add ads, promos, design changes, etc.
At least with iOS, iOS 18 supports devices introduced since 2018 and iOS 16 just got a security update late last year and that supports phones back to 2017.
I imagine a simple architecture where each application has its own crappy CPU, some memory, and some storage with frozen specs. Imagine 1960 and your toaster gains control over your washing machine. Why are they even in the same box?
Does it really make sense in 2025 to use Quicken (?) with a dialup modem that calls into my bank once a day to update my balances like I did in 1996?
> Imagine 1960 and your toaster gains control over your washing machine. Why are they even in the same box?
Imagine in 2002 your MP3 Player, your portable TV, your camera, your flashlight, your GPS receiver, your computer you use to browse the web, and your phone being the same device…
Well, the modern hot garbage Intuit forces people to use takes 5-15 seconds to save a transaction, 3-5 seconds to mark a transaction as cleared by the bank, it sometimes completely errors out with no useful message and no recourse other than "try again", has random UI glitches such as matched transactions not being removed from the list of transactions to match once matched (wtf?), and is an abject failure at actually matching those transactions without hitting "Find matches", because for whatever reason the software can't seem to figure out that the $2182.77 transaction from the bank matches the only $2182.77 transaction in the ledger. That one really gets my goat, because seriously, WTF guys?
Not to mention the random failure of the interface to show correct totals at random inopportune moments.
Oh, and it costs 5x as much on an annual basis.
I sure would take that 1996 version with some updated aesthetics and a switch to web-based transaction downloading versus the godawful steaming pile of shit we have now- every day of the week and twice on Sunday. Hands down.
This idea that we've made progress is absolutely laughable. Every single interaction is now slower to start, has built-in latency at every step of the process, and is fragile as hell to boot because the interface is running on the framework-du-jour in javascript.
Baby Jesus weeps for the workflows forced upon people nowadays.
I mean seriously, have none of you 20-somethings ever used a true native (non-Electron) application before?
What kills me about Intuit is that they _can_ make decent software: TurboTax. Obviously, I’d rather the IRS be like revenue departments in other countries, and just inform me what my taxes were at EOY, but since we live in lobbyist hell, at least Intuit isn’t making the average American deal with QuickBooks-level quality.
It’s not like the non-SaaS version of QB is any better, either. I briefly had a side business doing in-person tech support, and once had a job to build a small Windows server for a business that wanted to host QB internally for use. Due to budget constraints, this wound up being the hellscape that is “your domain controller is also your file server, application server…” Unbeknownst to me at the time, there is/was a long-standing bug with QB where it tries to open ports for its database that are typically reserved by Windows Server’s DNS service, and if it fails to get them, it just refuses to launch, with unhelpful error logs. Even once you’ve overcome that bizarre problem, getting the service to reliably function in multi-user mode is a shitshow.
> I mean seriously, have none of you 20-somethings never used a true native (non-Electron) application before?
Judging on the average web dev’s understanding of acceptable latency, no. It’s always amusing to me to watch when web devs and game devs argue here. “250 msec? That’s 15 screen redraws! What are you doing?!” Or in my personal hell, RDBMS. “The DB is slow!” “The DB executed your query in < 1 msec. Your application spent the other 1999 msec waiting on some other service you’ve somehow wrapped into a transaction.”
I think it should have been obvious from my list of complaints that I was doing something a little more involved than "checking my bank balance".
I don’t even know what you’re asking.
https://100r.co/site/computing_and_sustainability.html
Or much technology at all. If you use anything that is 1000 years old, it's probably been maintained or cared for a lot during those 1000 years
It's alarming how often the answer isn't a confident "yes".
Python for example makes breaking changes in minor releases and seems to think it's fine, even though that's especially bad for a language where you might only find out at runtime.
It's a bit like asking why the army needs tanks when horses worked well the previous war
I wouldn't blame it on security, as many of them do.
...or if it is true that these mass security issues emerge from their design, then the situation is even worse than just being lazy, ignorant bastards... or perhaps the mass security problems are related to this incompetence as well? Oi!
All of that has been solved by the web at this point.
You say Windows is a shit show, but as someone who has developed a lot on both Windows and Linux, Linux is just as much a shit show just in different ways.
And it's really nice being able to trust that binaries I built a decade ago just run on Windows.
If drivers are "standard" then low quality drivers full of bugs and security vulnerabilities proliferate and only the OEM can fix them because they're closed source, but they don't care to as long as the driver meets a minimum threshold of not losing them too many sales, and they don't care about legacy hardware at all or by then have ceased to exist, even if people are still using the hardware.
If there is no driver standard then maintaining a driver outside the kernel tree is a pain and more companies get their drivers into the kernel tree so the kernel maintainers will deal with updating them when the driver interface changes, which in turn provides other people with access to fix their heinous bugs.
It's clear that something more aggressive needs to be done on the mobile side to get the drivers into the kernel tree because the vendors there are more intransigent. Possibly something like right to repair laws that e.g. require ten years of support and funds in escrow to provide it upon bankruptcy for any device whose supporting software doesn't have published source code, providing a stronger incentive to publish the code to avoid the first party support requirement. Or greater antitrust enforcement against e.g. Qualcomm, since they're a primary offender and lack of competition is a major impediment to making it happen. If Google wanted to stop being evil for a minute they could also exert some pressure on the OEMs.
The real problem is that the kernel can't easily be relicensed to directly require it, so we're stuck with indirect methods, but that's hardly any reason to give up.
Right to repair laws as you suggest might do something to shift the incentives of vendors in these markets; I don't think they're ever going to "see the light" and suddenly decide they've been doing it wrong all these years (because measured in commercial consequences, they haven't)...
The individual vs. the group.
Where I agree with the author is the need to keep individual tinkering possible.
However, generalizing anyone's idiosyncratic tastes is impossible.
Wouldn't this need be solved by an emulator of older architectures?
There would be a performance cost, but maybe the newer processors would more than make up for it.
I have a legally-purchased copy of Return to Castle Wolfenstein here, both the Windows version and the official Linux port.
One of them works on modern Linux (with the help of Wine), one of them doesn't.
I wrote some specialist software for Linux round about 2005 to cover a then-business need, and ported it to Windows (using libgtk's Windows port distributed with GIMP at the time). The Windows port still works. Attempting to build the Linux version now would be a huge ordeal.
I would consider myself an Apple evangelist, for the most part, and even I can recognize what's been lost by Apple breaking backwards compatibility every time they need to shift direction. While the philosophy is great for making sure that things are modern and maintained, there is definitely a non-insignificant amount of value that is lost, even just historically but also in general, by the paradigm of constantly moving forward without regard for maintaining compatibility with the past.
They could have stuck with x86 I guess. But was moving to ARM really a bad idea?
They were able to remove entire sections of the processor by getting rid of 32-bit code, and save memory and storage by not having 32-bit and 64-bit code running at the same time. When 32-bit code ran it had to load the 32-bit versions of the shared libraries, and 64-bit code had to have its own versions.
No, including an interpreter like they did (Rosetta) was an alternative. The "alternative" really depends on what the goals were. For Apple, their goal is modern software and hardware that works together. That's antithetical to backwards compatibility.
>They could have stuck with x86 I guess. But was moving to ARM really a bad idea?
I don't think I ever suggested that it was or that they couldn't have...
>They were able to remove entire sections of the processor by getting rid of 32 bit code and saving memory and storage by not having 32 bit and 64 bit code running at the same time.
Yes, and, in doing so, they killed any software that wasn't created for a 64-bit system. Again, from even a purely historical perspective, the amount of software that didn't survive each of these transitions is non-negligible. Steam now has an entire library of old Mac games that can't run on modern systems anymore because of the abandonment of 32-bit support without any consideration for backwards compatibility. Yes, there are emulators and apps like Wine and CrossOver that can somewhat get these things working again, but there's also a whole subsection of software that just doesn't work anymore. Again, that's a byproduct of Apple's focus on modern codebases that are currently maintained, but it's still a general detriment that so much usable software was simply lost because of these changes when there could have been some focus on maintaining compatibility.
The downside of including an interpreter with no end of life expectations is that some companies get lazy and will never update their software to modern standards. Adobe is a prime example. They would have gladly stuck with Carbon forever if Apple hadn’t changed their minds about a 64 bit version of Carbon.
That was the same reason that Jobs made it almost impossible to port legacy text-based software to early Macs. Microsoft jumped onboard developing Mac software early on; Lotus and WordPerfect didn't.
But today you would have to have emulation software for Apple //es, 68K, PPC and 32 bit and 64 bit x86 software and 32 bit and 64 bit ARM (iOS) software all vying for resources.
Today because of relentlessly getting rid of backwards compatibility, the same base OS can run on set top boxes, monitors (yeah the latest Apple displays have iPhone 14 level hardware in them and run a version of iOS), phones, tablets, watches and AR glasses.
Someone has to maintain the old compatibility layers and patch them for vulnerabilities. How many vulnerabilities have been found in some old compatible APIs on Windows?
I don't see that as a downside; I see it as a strength. Why should everyone have to get on the library-of-the-year train, constantly rewriting code -- working code! -- to use a new API?
It's just a huge waste of time. The forced-upgrade treadmill only helps Apple, not anyone else. Users don't care what underlying system APIs an app uses. They just care that it works, and does what they need it to do. App developers could be spending time adding new features or fixing bugs, but instead they have to port to new library APIs. Lame.
> Someone has to maintain the old compatibility layers and patch them for vulnerabilities.
It's almost certainly less work to do that than to require everyone else rewrite their code. But Apple doesn't want to spend the money and time, so they force others to spend it.
They don't want all new interfaces with all new buttons and options and menus.
People get used to their workflows and they like them. They use their software to do something. The software itself is not the goal (gaming excepted).
I'm not suggesting that things should never be gussied up, or streamlined or made more efficient from a UX perspective, but so many software shops change just to change and "stay fresh".
There is room for both. But if you are a gamer, the Mac isn’t where you want to be anyway. Computers have been cheap enough for decades to have both a Windows PC and a Mac. Usually a desktop and a laptop.
I have done all of this myself with on prem servers that I could walk to. I know exactly what’s involved and it would be silly to do that these days
Also, when you have 32-bit and 64-bit code, you have to have 32-bit and 64-bit versions of the frameworks both in memory and on disk.
This is especially problematic with iOS devices that don’t have swap.
(And Apple ships a gig of high-def screenshots and other garbage, so it clearly cares so much about how much space frameworks take on disk.)
And it’s not just about disk, it’s about memory.
Not entirely, there are other reasons too
But we should respect semantic versioning. Python is a dreadful sinner in that respect.
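To make that concrete, here is a minimal TypeScript sketch (the isCompatible helper and the version numbers are made up for illustration) of what respecting semantic versioning means mechanically: a major bump signals a breaking change, while minor and patch bumps must stay safe for existing callers.

    // Minimal semver sketch: "1.2.3" -> { major: 1, minor: 2, patch: 3 }
    function parse(v: string): { major: number; minor: number; patch: number } {
      const [major, minor, patch] = v.split(".").map(Number);
      return { major, minor, patch };
    }

    // An upgrade is "safe" under semver only if the major version is unchanged
    // and the candidate is not older than the version we built against.
    function isCompatible(current: string, candidate: string): boolean {
      const a = parse(current);
      const b = parse(candidate);
      if (a.major !== b.major) return false;             // breaking change
      if (b.minor !== a.minor) return b.minor > a.minor; // additive features only
      return b.patch >= a.patch;                         // bug fixes only
    }

    console.log(isCompatible("1.4.2", "1.5.0")); // true: additive change
    console.log(isCompatible("1.4.2", "2.0.0")); // false: breaking change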
There is no perfection here, but the correct way to reason about this is to have schema-based systems where the surfaces and state machines are in high-level representations and changes can be analyzed automatically, without a human bumping the numbers.
Linux is a far bigger shitshow, at least at the platform level. Windows is a lesser shitshow at the presentation layer.
It’s more about x86. Using an x86 laptop feels archaic in 2025. Compared to my M2 MacBook Air or my work M3 MacBook Pro.
The only thing that makes macOS much better for me is the ecosystem integration.
From the user PoV, isn't that one of those things that are irrelevant? The users, even the very technical ones, neither know nor care that the computer they are using is x86 or ARM or ...
You might say that battery life is part of the UX, and sure, I get that. But while battery life on the M1/M2/M3/M4 is superior in practice, the trade-off is that the user gets a more limited set of software they can run, which is also part of the UX.
So the user gets to choose which UX trade-off they want to make.
I personally care because I travel a lot and I need a laptop. For non gamers of course Mac desktops including the new base Mac Mini is good enough.
The upcoming Nvidia Linux AI boxes are interesting.
I don’t think that’s really an x86 vs ARM things. That’s a red herring.
There’s like 3 big factors at play.
1. x86 vs ARM
2. Apple Silicon engineers vs others
3. Apple's TSMC node advantage
I think the x86 vs ARM issue is a relatively small one. At least fundamentally.
I’m not saying iPhones are any better. It came out in the Epic trial that 90% of Apple’s App Store revenue comes from games
Linus Torvalds. But what does he know, eh?
In reality pretty much no Android phone runs a stock upstream Linux. They all have horrible proprietary modified kernels.
So no, Android isn't even Linux in the narrow technical sense of the kernel it runs.
(The Android userland, of course, is an entirely different OS that has nothing whatsoever to do with Linux or any other flavor of UNIX, current or historical.)
We’re to reinvent the wheel for ourselves instead of relying on huge dependency trees, yet new languages and frameworks (the most common form of wheel reinventing) are verboten.
The only way I can think to meet all these demands is for everyone (other than you, of course) to stop writing code.
And I gotta say, a part of me wishes for that at least once a day, but it’s not a part of me I’m proud of.
In my opinion, vents are fine amongst friends. Catharsis and all that. But in public they stir up a lot of negative energy without anything productive to channel it to (any solutions, calls to action etc). That sucks.
Also, what you call "negative energy" I would often call "rightful criticism of undistilled thoughts that contain internal contradictions".
The writing style is reminiscent of this famous text (not written in jest; you have to understand that most of these statements depend on a context that is for you to provide):
"Property is theft." -- P.J. Proudhon
"Property is liberty." -- P.J. Proudhon
"Property is impossible." -- P.J. Proudhon
"Consistency is the hobgoblin of small minds." -- R.W. Emerson
Just ask yourself: Why would I want to do that?
When somebody suggests nonsense to you, just ask yourself that one simple question. The answer is always, and I mean ALWAYS, one of these three things:
* evidence
* ethos, like: laws, security protocols, or religious beliefs
* narcissism
At that point it's just a matter of extremely direct facts/proofs, or a process of elimination thereof. In the case of bad programmers the answer is almost universally some unspoken notion of comfort/anxiety, which falls under narcissism. That makes sense, because if a person cannot perform their job without, fill in the blank here, they will defend their position using bias, logical fallacies they take to heart, social manipulation, shifting emotion, and other nonstarters.
As to your point about reinventing wheels, you are making a self-serving mistake. It's not about you. It's about some artifact or product. In that regard, reinventing wheels is acceptable. Frameworks and languages are not the product, at least not your product. Frameworks and languages are merely some enabling factor.
OP indeed has mutually exclusive phrases. If we ever get to the "extremely direct facts/proofs" then things get super easy, of course.
99% of the problem when working with people is to even arrive at that stage.
I predict a "well, work somewhere else then" response which I'll just roll my eyes at. You should know that this comes at a big cost for many. And even more cannot afford it. Literally.
I did not find the article illuminating one bit. I agree with some of the criticisms wholeheartedly, mind you, but the article is just a rant.
Which comments might those be? Be concrete.
You also didn't give any examples, as I asked; you're happy just generalizing, so it's not really interesting to engage with you, I'm finding.
Software written in C continues to be riddled with elementary security holes, despite being written and reviewed by experts. If anything the push to rewrite is too weak, we have known about the dangers for decades at this point.
We aren't destroying software, it was never all that. The software of the 90s was generally janky in a way that would never be tolerated today.
https://github.com/microsoft/garnet
I think we're doing fine.
Circa 1982 or so (and some years before) IBM was shipping mainframe software written in Assembly that anyone could build or modify. They had a bug database that customers could access. Around the same era Unix was shipping with source code and all the tooling you needed to build the software and the OS itself.
So maybe compared to some of that we're doing worse.
Also from 2000: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
So I think we knew around that time what patterns were good ones... But sure, lots of software organizations didn't even use source control and failed Joel's test miserably.
EDIT: And sadly enough many orgs today fail Joel's test as well. We forgot some things that help make better software.
The source control system I had at one job around that time was a DVCS! At a different one, the source control system had its own filesystem and was generally insane. It had its own full-time maintainer, sort of like tptacek's build person.
The big difference, really, was that all this software cost a lot of money compared to now where it mostly does not.
But still, I think we can do better. That story you shared highlights a gross inefficiency and diminishing of agency that comes from dependencies.
It's fitting a messy, real-world process to the virtual, reduced (because formal) computing world, backed by fallible, messy hardware components, through an error-prone, misunderstanding-prone process of programming.
So many failure points, and no one is willing to bear the costs of reducing them.
Sometimes it is valid not to reinvent the wheel. Sometimes the wheel needs to be reinvented in order to learn. Both happen. Sometimes the decision was right. Sometimes not.
Overall as a whole we are creating things, more than we are destroying. I don't see the need to take a negative stance.
Writing a new greenfield project using 10,000 npm dependencies for an Electron-based front end is shockingly easy. But how do you keep that running for the next 15 years? Did the project need npm? Or a web browser? How do all the layers between the language of choice and the bare metal actually behave, and can you reason about that aggregate accurately?
The field has come to a point where a lot of projects are set up with too many complexities that are expedient in the short term and liabilities in the long term.
The current generation of junior devs grows up in this environment. They learn these mistakes as "the right thing to do" when they are questionable and require constant self-reflection and reevaluation. We do not do enough to propagate a hacking culture that values efficiency and simplicity in a way that leads to simple, efficient, stable, reliable and maintainable software. On a spectrum from high-quality craftsmanship to mass-produced single-use crap, software is trending too much toward the latter. It's always a spectrum, not a binary choice. But as a profession, we aren't keeping the right balance overall.
I started a job in manufacturing a few months ago, and having to think that this has to work for the next 20 years has been a completely different challenge. I don't even trust npm to be able to survive that, so web stuff has been an extra challenge. I landed on Lit web components, bringing them in via a local CDN.
I definitely use npm (or rather pnpm) because I know it will allow me to build whatever I want much faster.
How much complexity is actually required? What changed in software in the last 20 years so that the additional bloat and complexity is actually required? Hardware has become more powerful. This should make software less reliant on complicated optimizations and thus simpler. The opposite is happening. Why? What groundbreaking new features are we adding to software today that we didn't 20 years ago? User experience hasn't improved that much on average. In fact, measurements show that systems are responding more sluggishly on average.
Intrinsic complexity of the problems that software can solve hasn't really changed much as far as I can see. We add towers of accidental complexity on top that mostly aren't helpful. Those need to be questioned constantly. That isn't happening to the extent that it should. Web-based stuff is the poster child of that culture and it's hugely detrimental.
Backends handling tens or hundreds of thousands of concurrent users (or more) rather than locally deployed software on a single machine or a small server with a few tens of users?
Mobile?
Integration with other software / ecosystems?
Real-time collaboration among users rather than single-user, document-based models?
Security?
Cryptography?
Constant upgrades over the web rather than shipping CDs once a year?
I'll pass on AI for the moment as it's probably a bit too recent.
Software can be distributed onto client machines and kept up to date. That was first solved with Linux package managers more than 25 years ago.
Before mobile we had a wide range of desktop operating systems with their own warts.
TLS 1.0 was introduced in 1999. So cryptography already a concern back then.
So what is really new?
A few of them aren’t decisions any individuals have control over. Most coders aren’t jumping onto new languages and frameworks all the time; that’s an emergent group behavior, a result of there being a very large and growing number of programmers. There’s no reason to think it will ever change, nor that it’s a bad thing. And regardless, there’s no way to control it.
There are multiple reasons people write software fast rather than high quality. Because it’s a time/utility tradeoff, and time is valuable. It’s just a fact that software quality sometimes does not matter. It may not matter when learning or doing research, it may not matter for solo projects, it may not matter for one-off results, and it may not matter when software errors have low or no consequence. Often it’s a business decision, not an engineering decision; to a business, time really is money and the business wants engineering to maximize the utility/time ratio and not rabbit hole on the minutiae of craftsmanship that will not affect customers or sales.
Sometimes quality matters and time is well spent. Sometimes individuals and businesses get it wrong. But not always.
Fair point: each one of us can think about the balance and decide whether it's positive or negative. But an important exercise must be done here: totally removing AI from the complexity side.
Most of the results that neural networks gave us, given the hardware, could be recreated with a handful of lines of code. It is evident every day that small teams can rewrite training / inference engines from scratch, and so forth. So AI must be removed from the positive (if you believe it's positive, I do) output of the complexities of recent software.
So if you remove AI, since it belongs on the other side, what exactly has the "complicated software world" given us in recent times?
AI has the potential to make the situation much worse, as many laypeople confer it an air of "authority" or "correctness" that it's not really owed. If we're not careful, we'll have an AI-driven Idiocracy, where people become so moronic that nobody can do anything about the system when it takes a harmful action.
It needs to be noted that the average person's lot didn't improve until 150 years later. There's no reason why technology can't be decided by democratic means rather than shoved in our faces by people that just want to accumulate wealth and power.
I often work with people who refuse to use software unless it's so well known that they can google for stackoverflow answers or blog walkthroughs. Not because well-known software is stable or feature-filled; no, they literally are just afraid of having to solve a problem themselves. I'm not sure they could do their jobs without canned answers and walkthroughs. (and I'm not even talking about AI)
This is why I keep saying we need a professional standards body. Somebody needs to draw a line in the sand and at least pretend to be real adults. There needs to be an authoritative body that determines the acceptable way to do this job. Not just reading random people's blogs, or skimming forums for sycophants making popular opinions look like the correct ones. Not just doing whatever a person feels like, with their own personal philosophy and justification. There needs to be a minimum standard, at the very least. Ideally also design criteria, standard parts, and a baseline of tolerances to at least have the tiniest inkling if something is going to fall over as soon as someone touches it. Even more should be required for actual safety systems, or things that impact the lives of millions. And the safety-critical stuff should have to be inspected, just like buildings and cars and food and electric devices are.
The lunatics are running the asylum, so that's not happening anytime soon. It will take a long series of disasters for the government to threaten to regulate our jobs, and then we will finally get off our asses and do what we should have long ago.
I’d like to have standard professional certification because I could use it as proof of the effort I put into understanding software engineering that many ICs have not. But I think that many people have “that’ll do it” values and whatever professional context you put them in, they will do the worst possible acceptable job. The best you can do is not hire them and we try to do that already — with LeetCode, engineering interviews, and so on. That effort does work when companies make it.
(Safety critical work is, in fact, inspected and accredited like you would wish, and I have seen the ugly, ugly, terrifying results inside the sausage factory. It is not a solution for people who don't care or don't have a clue, in fact it empowers them)
Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
https://www.youtube.com/watch?v=ZSRHeXYDLko
Software technology is in decline despite appearances of progress. While hardware improvements and machine learning create an illusion of advancement, software's fundamental robustness and reliability are deteriorating. Modern software development has become unnecessarily complex, with excessive abstraction layers making simple tasks difficult. This complexity reduces programmer productivity and hinders knowledge transfer between generations. Society has grown to accept buggy, unreliable software as normal. Unless active steps are taken to simplify software systems across all levels, from operating systems to development tools, civilization faces the risk of significant technological regression similar to historical collapses.
Unfortunately my stance is that fundamentally things won't change until we get hit with some actual hardware limitations again. Most devs and people in general prefer a semblance of a working solution quickly for short-term gains rather than spending the actual time that's needed to create something of high quality that performs well and will work for the next 30 years. It's quite a sad state of affairs.
With that said I'm generally optimistic. There is a small niche community of people that does actually care about these things. Probably won't take over the world, but the light of wisdom won't be lost!
Users have now been taught that $10 is a lot to pay for an app and the result is a lot of buggy, slow software.
Those big software packages are sold to admins anyway.
Realistically only about 5% or so of my former colleagues could take on performance as a priority even if you said to them that they shouldn't do outright wasteful things and just try to minimize slowness instead of optimizing, because their entire careers have been spent optimizing only for programmer satisfaction (and no, this does not intrinsically mean "simplicity", they are orthogonal).
Disclaimer: Take everything below with a grain of salt. I think you're right that if this was an easy road to take, people would already be doing it in droves... But, I also think that most people lack the skill and wisdom to execute the below, which is perhaps a cynical view of things, but it's the one I have nonetheless.
The reason I think most software can be faster, better and cheaper is this:
1. Most software is developed with too many people; this is a massive drag on productivity and costs.
2. Developers are generally overpaid, and US developers especially so, which compounds with #1 for terrible results. This is particularly bad since most developers are really only gluing libraries together and are unable to write those libraries themselves, because they've never had to actually write their own things.
3. Most software is developed as if dependencies have no cost, when they present some of the highest cost-over-time vectors. Dependencies are technical debt more than anything else; you're borrowing against the future understanding of your system which impacts development speed, maintenance and understanding the characteristics of your final product. Not only that; many dependencies are so cumbersome that the work associated with integrating them even in the beginning is actually more costly than simply making the thing you needed.
4. Most software is built with ideas that are detrimental to understanding, development speed and maintenance: both OOP and FP are overused and treated as guiding lights in development, which leads to poor results over time. I say this as someone who has worked with "functional" languages and FP as a style for over 10 years. Just about the only useful part of the FP playbook is to consider making functions pure, because that's nice (a tiny sketch of what that means follows below). FP as a style is not as bad for understanding as classic OOP is, mind you, but it's generally terrible for performance, and even the best of the best environments for it are awful in terms of performance. FP code of the more "extreme" kind (Haskell, my dearest) is also (unfortunately) sometimes very detrimental to understanding.
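To illustrate that one point about purity, here is a tiny hedged TypeScript sketch (the names are invented) of the difference between a function that depends on hidden mutable state and a pure one that depends only on its inputs:

    // Impure: the result depends on mutable state outside the function.
    let taxRate = 0.25;
    function addTaxImpure(price: number): number {
      return price * (1 + taxRate); // answer changes if someone mutates taxRate
    }

    // Pure: same inputs always produce the same output, no hidden dependencies,
    // which makes it trivial to test, cache, and reason about.
    function addTax(price: number, rate: number): number {
      return price * (1 + rate);
    }

    taxRate = 0.5;
    console.log(addTaxImpure(100)); // 150 -- the result moved out from under us
    console.log(addTax(100, 0.25)); // 125, every time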
Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.
> Outside of really edge-case stuff like real-time, low-level systems software, optimizing performance is not that hard, and I've worked with many engineers over my long career who can do it. They just rarely have the incentive. In a few particular areas where it's critical and users are willing to pay, software can still command a premium price. Ableton Live is a good example.
This seems like a generous take to me, but I suppose it's usually better for the soul to assume the best when it comes to people. Companies with explicit performance requirements will of course self-select for people capable of actually considering performance (or die), but I don't take that to mean that the rest of the software development workforce is actually able to, because I've seen so, so many examples of the exact opposite.
It's a self-perpetuating issue: people build stuff saying "it won't/can't last 30 years" for various reasons (money, time, skill, resources, expectations, company culture, business landscape etc). So then software doesn't last 30 years for those same various reasons.
The idea that systems used to last longer is probably some survivorship bias. However, software that has survived decades was probably created with a completely different set of methodologies, resources and incentives than modern software.
Writing bad code to just get past the next sprint or release is madness.
We swap JS frameworks constantly, but when we reach a good paradigm, we'll stick with it. At some point, React might be the final framework, or rather, one of its descendants.
Develop a good vm that can either be part of the browser or can be easily launched from it and get all of the browser/OS makers on the same page.
We only have what we have because of a lack of real leadership.
When the C++11 ABI break happened it was a big pain in the ass, but once MSVC decided in 2015 that they were going to stop breaking ABI, I think it gave C++ the stability it needed to fossilize…
Spoken as a C++ fan.
I am aware that capitalism essentially dictates externalising cost as much as possible, but with software, for much the same reason capitalism loves it (a copy is cheap and can be sold at full price despite being constructed just once), those externalities can scale exponentially.
Teams in particular is an outlier as in most cases it is essentially forced on people.
It doesn't dictate externalising cost as much as possible unless you have a very short-term view.
Short-term view businesses get eaten pretty quickly in a free capitalist system.
People forget that half of capitalism's advantage is the "creative destruction" part - if businesses are allowed to fail, capitalism works well and creates net value.
I'm writing software with the assumption that it'll be used for at least 30 years there, with a lot of guard rails and transparency/observability mechanisms, because I know the next person working there will thank me.
Premature optimisation is bad, but there are now so many devs who don't do _any_ at all. They don't improve any existing code, they're not writing software that is amenable to later optimisation, and inefficient architectures and unnecessary busywork abound.
Are we surprised that years of “product first, bug fixes later, performance almost never” has left us with an ecosystem that is a disaster?
The fact that people don't stay long enough in companies or work on a long project themselves to see the fruits of their labour down the line is a point that is discussed in some of the other comments here in this thread. I agree with it as well. In general if you job hop a lot you won't see the after effects of your actions. And the industry is such that if you want to get paid, you need to move. To reiterate - it's a sad state of affairs.
That’s really the entirety of the issue, right there.
People aren’t just accepting bad software, they’re paying for it, which incentivizes a race to the bottom.
It’s like “The War on Drugs,” that looks only at the suppliers, without ever considering the “pull,” created by the consumers.
As long as there are people willing to pay for crap, there will be companies that will make and sell crap.
Not only that, companies that try to make “not-crap,” will be driven out of business.
That's easy to stop. Disallow warranty disclaimers.
The EU has something like this, the Cyber Resilience Act, and it has an exception for FOSS.
We are fixing it up by refactoring, mainly through adding abstractions.
I’m sure code with bad abstractions can scale poorly, but I’m not clear how code without abstractions can scale at all.
That's quite unrelated to abstractions. It's just poorly written code, for whatever reasons may have led there.
And I’d rather work in a codebase where the owners are constantly doing such refactors than not.
edit: if you feel the need to downvote, feel free to share why you think my question is problematic - I think that "poorly written" to describe excessive code file sizes is such a wooly phrase as to be basically useless in a discussion like this, and when examined more closely usually comes down to "poorly abstracted". But I stand to be corrected.
mechanical shit was local, electronic shit is global, not comparable at all
The first half of the 20th century excelled in mechanical destruction. Thus far, the electronic age has been much less bloody.
Since they mentioned nukes it seemed like an obvious example where local things can be catastrophic.
The theoretical risk of electronic things malfunctioning in some global way that they mentioned has never resulted in any nuclear weapons being deployed, but we've actually seen the local mechanical approach they disregard be devastating.
Imagine if a company had been able to systemize the whims of a paranoid regime, allowing them to track and spy on their citizens with impunity in secret, and the population became so inured to the idea that it became an accepted facet of their culture.
Or what if a political party dominated a state and systematized a way to identify and detect oppositional thought, stamping it out before a counterculture could ever arise. Maybe that thought was tied to a particular religious, ethnic, and/or cultural group.
What if these companies are here today, selling those products to the highest (nation-state) bidders, and the methods they're employing to keep the conceptual wheels turning at scale rely on filtering that selects for candidates who will gladly jump through abstract hoops set before them, concerned only with the how and never the why of what they're doing.
Think about what happens when two people video call each other from opposite sides of the world. How many layers of hardware and software abstraction are they passing through, to have a reliable encrypted video conversation on a handheld computer where the majority of the lag is due to the speed of light? How much of that would you like to remove?
I would venture an alternative bogeyman - "move fast and break things" AKA the drive for profits. It's perfectly possible (as illustrated above) to reliably extract great performance from a tower of abstractions, while building those abstractions in a way that empowers developers; what's usually missing is the will to spend time and money doing it.
Aside, he's treated like a celebrity in the game developer niche and I can't understand why.
Depending on what tech you use, it can be easier or harder to do as well. I'm making a game with Love2D now, which has made supporting Mac and Linux rather trivial so far, although I've run into challenges with mobile platforms and the web even though it supports them (it does work, but it takes more low-level glue code to support native phone features, and the web target doesn't seem to be well maintained; my game is currently throwing WebAssembly errors when I try to run it).
And my previous game (which is on the backburner for now) was made with Monogame, and while that technically has support for Mac and Linux (as well as mobile), I've had quite a few issues even just getting the Mac version working well, like issues with resolution, rendering 3D properly, getting shaders not to break, etc. And they haven't kept up with the latest Mac updates the past few years and have had to make a special push to try to get it caught back up. I've probably sunk a good 20+ hours trying to get it working well before putting that aside to work on the actual game again and I still might have to rearchitect things a bunch in order to get that working.
Meanwhile, Unity would probably be dirt simple to port to, for the most part, but it comes with other trade-offs, like not being open source, and pulling a stunt a couple of years ago where they yanked the rug out from under developers by changing licensing to something aggressive (which convinced other developers to port their games away from the platform), etc.
And there's Godot, which seems to be getting more support again (which is great; I even considered it for my current game, I just like coding in Love2D a bit better), but if you ever want your game on consoles you have to pay a third party to port it for you.
The guy you linked makes their own engine (and to be fair, so does Jonathan Blow, who you're critiquing), which is great, but not everyone wants to get that low level. I would rather spend more time focusing on building the games themselves, which is already hard enough, rather than spending all that time building an engine.
It was for that reason that I spent several years focused on board game design instead (as I can make a working game by drawing on index cards and using some colored plastic cubes in less than an hour), although that has its own frustrations, with significant hurdles to getting your game signed by publishers as an unknown designer (I did get one signed four years ago, and it's still not released yet), and large financial risks in manufacturing and distribution.
Edit: Also the person you linked to isn't even sure it was financially worth it to support all of those platforms, they just do it for other reasons:
"Do you make your money back? It’s hard to say definitely which could mean no. Linux and macOS sales are low so the direct rewards are also low. ...For Smith and Winston [their current game] right now I don’t think it’s been worth it financially (but it will probably make it’s money back on Launch)"
I haven't watched that talk by Blow yet so maybe he covers my concern.
I think you have to be mindful of incentives structures and constraints. There's a reason the industry went down the path that it did and if you don't address that directly you are doomed to failure. Consumers want more features, the business demands more stuff to increase its customer base, and the software developers are stuck attempting to meet demand.
On one hand you can invent everything yourself and do away with abstractions. Since I'm in the embedded space I know what this looks like. It is very "productive" in the sense that developers are slinging a lot of code. It isn't maintainable though and eventually it becomes a huge problem. First no one has enough knowledge to really implement everything to the point of it being robust and bug free. This goes against specialization. How many mechanical engineers are designing their own custom molding machine in order to make parts? Basically none, they all use mold houses. How many electrical engineers are designing their own custom PCB(A) processing machines or ICs/components? Again, basically none. It is financially impossible. Only in software I regularly see this sentiment. Granted these aren't perfect 1-to-1 analogies but hopefully it gets the idea across. On the other hand you can go down the route of abstractions. This is really what market forces have incentivized. This also has plenty of issues which are being discussed here.
One thought that I've had, admittedly not fully considered, is that perhaps F/OSS is acting negatively on software in general. When it comes to other engineering disciplines, there is a cost associated with what they do. You pay someone to make the molds, the parts from the molds, etc. It's also generally quite expensive. With software, the upfront cost of adopting yet another open source library is zero to the business. That is, there is no effective feedback mechanism of "if we adopt X, we need to pay $Y." Like I said, I haven't fully thought this through, but if the cost of software is artificially low, that would seem to indicate the business and, by extension, customers don't see the true cost and are themselves incentivized to ask for more at an artificially low price, thus leading to the issues we are currently seeing. Now don't misread me, I love open source software and have nothing but respect for its developers; I've even committed to my fair share of open source projects. As I've learned more about economics, I've been trying to view this through the lens of resource allocation, though, and it has led me to this thought.
In game development, whenever we go with highly abstract middleware, it always ends up limiting us in what we can do, at what level of performance, how much we can steer it towards our hardware targets, and similar. Moreover, when teams become so lean that they can only do high level programming and something breaks close to the metal, I’ve seen even “senior” programmers in the AAA industry flail around with no idea how to debug it, and no skills to understand the low level code.
I’ve seen gameplay programmers who don’t understand RAII and graphics programmers who don’t know how to draw a shape with OpenGL. Those are examples of core discipline knowledge lost in the games industry. Aka what we have now, we might not know anymore how to build from scratch. Or at least most software engineers in the industry wouldn’t. It cannot end well.
Building your own, in my experience, is a better idea: then you can at least always steer it, improve and evolve it, and fix it. And you don't accidentally build companies with knowledge a mile wide and an inch deep, which genuinely cannot ship innovative products (technically it is impossible).
I’m not even sure if this is true anymore. We get new features foisted on us most of the time.
I no longer think that's true. Instead, I think consumers want reliability, but more features is a way to justify subscription pricing segregation and increases.
I play games with known bugs, and on imperfect hardware, because I am unwilling to pay more. Some experiences are rare, so I tolerate some jank because there aren't enough competitors.
One example: newbies shouldn't reinvent the wheel. I think they should use the tools that are available and common in the given context. When they want to tinker, they should write their own compiler. But they shouldn't use that in production.
Another: Backward API compatibility is a business decision in most cases.
Also, I think it doesn't help to start every sentence with "We are destroying software". This sounds much more gloomy than it really is.
I strongly disagree. They should, and fail, and try again, and fail. The aim is not to reinvent the wheel, but to understand why the wheel they're trying to reinvent is so complex and why it is the way it is. This is how I learnt to understand and appreciate the machine, and it gave me great insight.
Maybe not in production at first, but they don't reinvent the wheel in their spare time either. They cobble together 200-package dependency chains to make something simple, because that's what they see and are taught. I can write what many people write with 10 libraries by just using the standard library. My code will become a bit longer, but not much. It'll be faster, more robust, easier to build, smaller, and overall better.
I can do this because I know how to invent the wheel when necessary. They should, too.
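As a small, concrete example of what I mean (the endpoint is hypothetical), here is a TypeScript sketch that does what people often pull in an HTTP client, a query-string builder, and a retry wrapper for, using only what the platform already ships (fetch and URL are built into modern browsers and Node 18+):

    async function getUser(id: string): Promise<unknown> {
      const url = new URL("https://api.example.com/users");
      url.searchParams.set("id", id); // no query-string library needed

      for (let attempt = 1; attempt <= 3; attempt++) {
        const res = await fetch(url, { headers: { Accept: "application/json" } });
        if (res.ok) return res.json(); // no JSON helper library needed
        if (attempt === 3) throw new Error(`request failed: ${res.status}`);
      }
    }

    getUser("42").then(console.log).catch(console.error);

It is longer than the one-liner a library would give you, but there is nothing to install, nothing to audit, and nothing to break in two years.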
> Another: Backward API compatibility is a business decision in most cases.
Yes, a business decision about time and money. When everybody says that you're losing money and time by providing better service, and that lower quality is OK, management will jump on it, because, monies.
> Also, I think it doesn't help to start every sentence with "We are destroying software". This sounds much more gloomy than it really is.
I think Antirez is spot on. We're destroying software. Converting it to something muddy and something for the ends of business, and just for it.
I'm all with Antirez here. Software got here because we developed software just for the sake of it, and evolved it to production-ready where needed. Not the other way around (case in point: Linux).
Often that "saving money" is just externalizing the cost onto your users. Especially in mobile development. Instead of putting in the tiny amount of effort it takes to continue support for older devices, developers just increase the minimum required OS version, telling users with older hardware to fuck off or buy a new phone.
Another example is when you don't take the time to properly optimize your code, you're offloading that cost onto the user in the form of unnecessarily higher system requirements.
This is why I believe in slow-cooked software. It works better, it's easier on the system, and everyone is happier.
Growing up in the 80s and 90s, I understand viscerally how you feel, but this take strikes me as willfully ignorant of the history of computers and the capitalist incentives that were necessary for their creation. The first computer and the internet itself were funded by the military. The PC wouldn't have existed if mainframes hadn't proved the business value that drove costs down to the point where the PC was viable. Even the foundational ideas that led to computers couldn't have existed without funding: Charles Babbage's father was a London banker.
I think a lot of what you are reacting to is the failed promise of free software and the rise of the internet, when the culture was still heavily rooted in 60s counter-culture, but it hadn't crossed the chasm to being mainstream, so it was still possible to envision a utopian future based on the best hopes of a young, humanitarian core of early software pioneers operating largely from the sheltered space of academia.
Of course no such utopian visions ever survive contact with reality. Once the internet was a thing everyone had in their pocket, it was inevitable that software would bend to capitalist forces in ways that directly oppose the vision of the early creators. As evil as we thought Microsoft was in the early 90s, in retrospect this was the calm before the storm for the worst effects of tech. I hear Oppenheimer also had some regrets about his work. On the plus side though, I am happy that I can earn enough of a living working with computers that I have time to ponder these larger questions, and perhaps use a bit of my spare time to contribute something of worth back to the world. Complaining about the big picture of software is a fruitless and frustrating endeavour, instead I am interested in how we can use our expertise and experience to support those ideals that we still believe in.
I take issue with your use of the word "utopian" in this context. It's not a lost cause to see the world from the perspective of making it better, by finding our way through this with a better mindset about the future.
And while you are taking the time to ponder these questions because you earn enough to take the time, the world is burning around you. Sorry if my tone is harsh, but these kinds of statements really rub me the wrong way. It feels like you are saying that everything that is happening is how it's supposed to be, and I am strongly against that. We have enough of that perspective; we really don't need more of it, IMHO.
This gentle motivation is good, because it allows me to look inside and be rational about my ambitions. I won't go on a blind crusade, but will try to change myself for the better.
Because, I believe in changing myself to see that change in the world.
Completely and unjustifiably false.
So basically they shouldn’t learn the prod systems beyond a shallow understanding?
Agree. That statement/sentiment though doesn't refute the point that it's destroying software.
They absolutely should, or they will never even get to understand why they are using these wheels.
Fun fact: try asking modern web developers to write a form, a simple form, without a library.
They can barely use HTML and the DOM, they have no clue about built-in validation, they have no clue about accessibility, but they can make arguments about useMemo or useWhatever in some ridiculous library they use to build... e-commerce sites and idiotic CRUD apps.
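To be fair to that point, here is a hedged sketch of what the built-ins already give you, assuming a browser and markup along the lines of <form id="signup"><input type="email" required></form> (the id is made up): the HTML attributes declare the validation, the browser reports errors accessibly, and a few lines of DOM code using the standard Constraint Validation API are enough to hook in.

    const form = document.querySelector<HTMLFormElement>("#signup");

    form?.addEventListener("submit", (event) => {
      if (!form.checkValidity()) { // runs the constraints declared in the markup
        event.preventDefault();    // stop the invalid submission
        form.reportValidity();     // browser shows its own localized messages
      }
    });

No framework, no validation library, and it degrades gracefully, because the markup alone already enforces the constraints.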
Why? We should stop telling others how to write and use their code, ASAP.
Many established technologies are a total shitstorm. If it is ok to use them, it is ok if somebody wants to use their own compiler.
When it comes down to it, whatever works best, which is usually the simplest, non-breaking option, used to win out. That decision has been moved from the value creators to the value extractors. It is impossible to extract value before value is created.
Additionally, programming is a creative skill no matter how hard they try to make it not one. Creativity means trying new things and new takes on things. People not doing that will harm us long term.
Generally speaking, because that’s very likely to end up being “pushing for rewrites of things that work”, and also a case of not “taking complexity into account when adding features”, and perhaps in some cases “jumping on a new language”, too.
This is an imagined scenario, but the likelihood of someone easily replacing a working compiler in production with something better is pretty low, especially if they're not a compiler engineer. I've watched compiler engineers replace compilers in production and it takes years to get the new one to parity. A compiler built by one person tinkering and learning almost certainly does not belong in production.
For example, my own "Almost C" compiler[1] is 1137 lines of code. 1137! Can it ever reach "parity" with gcc or even tcc? No! That's specifically not the goal.
Do I benefit strongly from having an otherworldly simpler toolchain? Hell yeah.
The key is scope, as always. Established projects have, by virtue of being community projects, too wide a scope.
[1]: https://git.sr.ht/~vdupras/duskos/tree/master/item/fs/doc/co...
If someone handed me a project that is full of self-invented stuff, for example a PHP project that invented its own templating or has its own ORM, I would run. There are Laravel, Slim, and Symfony; those are well established and it makes sense to use them. There are so many resources around those frameworks, people who posted about useful things, and packages that add functionality to them. It just doesn't make sense to reinvent the wheel for web frameworks and the thousands of packages around them.
Writing software is standing on the shoulders of giants. We should embrace that, and yes, one should learn the basics, the underlying mechanisms. But one should distinguish between tinkering around and writing software that will be in production for years and therefore worked on by different developers.
The JavaScript world shows how not to do things. Every two years I have to learn the new way of building my stuff. It is annoying and a massive waste of resources. Everyone is always reinventing the wheel and it is exhausting. I understand why it is like this, but we as developers could have made it less painful if we embraced existing code instead of wanting to write our own.
I’m in games; we even rewrite standard libraries (see EASTL) so that they are more fit for purpose.
Of course, it’s your preference. And that is fine. But I don’t think it speaks to the situation in many tech companies.
Those who do not know history are doomed to repeat it. Or re-re-reinvent Lisp.
There was this anecdote about a storm lamp or something. A new recruit comes to a camp and sees the old guard lighting lamps by turning them upside down and lighting them sideways with a long stick. But he knows better, he tells them, and they smirk. The first day, he lights them the optimal way with a lighter. He's feeling super smug.
But the next day he finds the fuse is too short to reach, so he takes the long stick...
A few months later, he's a veteran, turning the lamp upside down and using the lighter sideways, with a long stick.
And the fresh recruit says he can do it better. And the now old guard smirks.
I'm sure I'm misremembering parts, but can't find the original for the life of me.
This is such a perfect point, no one would have invented the "tank chain" if reinventing the wheel was not allowed.
There are instances where this is good advice, but when they turn into rigid rules, bad software is the inevitable result.
And do you really need to write your own library and implementation of SMTP?
Reinventing the wheel where it makes sense is still allowed. But one should think first about the reasons, in my opinion.
Since flagged, but it's also highly relevant here, since it shows this isn't someone who's interested in serious thoughts or discussion about the merits of his arguments, just hot takes and cheap shots. Whatever he may have done in the past, I see no reason why he should be taken seriously here.
—
†) Please interpret my words literally and in your favor.
Now everyone is a “celebrity”and hubris is at an all time high.
I’ve been in tech for 25+ years and never heard of this person until now but it’s not the first time I’ve heard similar talk.
These maxims strike me more as a bitter senior neckbeard developer complaining about the rest of his team in a passive-aggressive way at the work lunch table before COVID.
If you’re a celebrity, we don’t need your snarky complaints. We need you using your “celebrity” to make things better.
Wohpe is probably the novel that sold the most copies in recent years of Italian sci-fi history. This summer a new short story of mine will be released by the most important Italian sci-fi series.
Meanwhile, you need to post this from a fake account. What can you show me of your accomplishments, under your real name? I'm here to listen, with genuine interest.
So you cannot "destroy" software. But you can have fast food versus slow food, you can have walkable cities or cars-only cities, you can have literate or illiterate society etc. Different cultures imply and create different lifestyles, initially subjective choices, but ultimately objectively different quality of life.
The author argues for a different software culture. For this to happen you need to create a viable "sub-culture" first, one that thrives because it accrues advantage to its practitioners. Accretion of further followers is then rapid.
The young grow old and complain about change. The cycle of life continues.
Frequent job hopping: lack of pay upgrades because software is considered a cost center
I could go on but in reality it’s a disconnect between what business thinks the software is worth as opposed to what the engineer wants to do with it.
You can say software is an art, but commodity art doesn't make much money. In reality, ad-driven software has greatly inflated salaries (not complaining, but it's reality). Now it's going to be an AI bubble. But your rank-and-file business doesn't care which software bubble is happening; unfortunately, it is bound by the costs that come with it.
Have you seen the process that happens in the defense or medical equipment industries? You probably wouldn't complain.
If you didn't have degree requirements and certification bodies for:
* accountants
* engineers
* doctors
* lawyers
What do you think hiring might look like?
Do you think they would build a hiring process to validate, to the best of their ability, your aptitude in the core fundamentals, except worse than certification and education bodies do?
I would presume so at least.
Realistically, licensing boards are there to protect their members, and rarely do political things against people in the same body with unpopular opinions. You have to be catastrophic for most boards to do anything about you: Just like a police union will defend a union member that has committed gross negligence unless the evidence is public.
When you hire a doctor for something actually important, you don't look at the certification body: You look at long term reputation, which you also do in software. Only the largest of employers will leetcode everyone as a layer of fairness. In smaller employers, direct, personal references replace everything, which is what I'd go with if I needed an oncologist for a very specific kind of cancer. The baseline of aptitude from the certification body doesn't matter there at all.
So we need the barrier to entry to be even lower for such professions that deal with life-changing outcomes? I don't think so. In such high risk fields: "long term reputation" is totally dependent on hiring extremely qualified individuals.
The barrier to entry MUST be continuously raised with the bare minimum requirement of a degree. Only then the secondary requirements can be considered.
> When you hire a doctor for something actually important, you don't look at the certification body: You look at long term reputation, which you also do in software.
I don't think you can compare the two, since one deals with high risk to the patient, such as life and death, and the other in most cases does not. (Unless the software being written deals with safety-critical systems in a heavily regulated setting.)
From what you are saying, maybe you would be OK consulting a surgeon or an oncologist that has never gone to medical school.
Have you ever done any of the various IT certs?
Doctors also go to school for 8 years and then do residencies. Lawyers go to school for 7. Are you proposing that?
It's called "being out of touch", I believe.
When done correctly it absolutely adds business value, and it should not make it harder to adapt or change; that's the point of good engineering. The problem is that you need years, if not decades, of technical experience to see this, and it's also a hard sell when there is no immediate "impact". It's basically something that happens because consumers don't know any better, so it becomes a low priority for making profit... at least until a competitor shows up with better software and gains a competitive edge, but that's a different matter.
I keep hearing how 10x engineers make their companies millions upon millions but they only get paid a fraction of that. How does that even make sense as a fair exchange? Not to mention it is completely unfeasible for most people to have this kind of impact... it is only possible for those in key positions, yet every engineer is tasked with this same objective to increase company value as much as possible. There's just something very wrong with that.
Because they feel they can extract more value from them than they are paying them.
There are no real boundaries of when work starts and stops or when your responsibilities end because the task is to increase business value and that is technically endless
I work 40 hours a week and they pay me the agreed upon amount. There was nowhere in our agreement the expectation of my working more than that. I also knew that they could put me on a plane anytime during the week.
The company keeps piling more and more work, keeps growing and growing,
That’s completely on the employee to communicate trade offs between time, cost and requirements. My time is fixed at 40 hours a week. They can choose to use my 40 hours a week to work with sales and the customer to close a deal, be a project manager, lead an implementation, be a hands on keyboard developer, or be a “cloud engineer”. It’s on them how to best use my talents for the amount of money they are paying me. But seeing that they pay my level of employee the highest of all of the ICs, they really should choose to have me working with sales and clients to close deals.
That’s not bragging. I make now what I made as a mid level employee at BigTech in 2021.
> I keep hearing how 10x engineers make their companies millions upon millions but they only get paid a fraction of that. How does that even make sense as a fair exchange?
The concept of a 10x engineer, except in very rare cases, is a myth if you think of them as just being on a keyboard every day. All of the things I listed I could do - project management, backend developer or a cloud engineer - I would say I’m only slightly better than average if that. My multiplier comes because I can work with all of those people and the “business”, and they can put me on a plane or a Zoom call and I can be trusted to have the soft skills necessary, and my breadth is wide enough to know what needs to be done as part of a complex implementation and how to derive business value.
If you are making a company millions of dollars and you are only getting a fraction of that - and I doubt someone is doing that on their own without the supporting organizational infrastructure - it’s on you to leverage that to meet your priority stack.
> Not to mention it is completely unfeasible for most people to have this kind of impact... it is only possible for those in key positions, yet every engineer is tasked with this same objective to increase company value as much as possible. There's just something very wrong with that.
If you are a junior developer, there isn’t much expected of you, you are told what to do and how to do it. You aren’t expected to know the business value.
If you are a mid level developer, you are generally told the business objective and expected to know best practices of how to get there and understand trade offs on the epic/work stream level.
If you are a “senior” developer, now you are expected to understand business value, work with the stakeholders or their proxy, understand risks, navigate XYProblems on the project implementation level and deal with ambiguity.
As you move up the more “scope”, “impact” and “dealing with ambiguity” you have to be comfortable with.
“Codez real gud” only comes into play in getting from junior to mid.
> All of the things I listed I could do - project management, backend developer or a cloud engineer - I would say I’m only slightly better than average if that
I completely acknowledge this is a valid way to run a business, but the context here is how this sort of career progression is preventing the specialization of engineers in their domain and contributing to widespread software problems. Instead of investing in good engineers who specialize in their domain, companies move them away from engineering into more of an entrepreneur mindset by tasking them with adding value to the business directly, which is not something that you do as an engineer (it's nowhere in a CS degree, aside from, say, some electives).
A good metaphor here is a football/soccer team. What companies are doing is telling the goalkeeper that he needs to score goals, because more goals means winning. The team wants to win, so everyone on the field has to score a goal. That obviously doesn't make sense even though the premise is true. You want a team composed of specialists, and the more they specialize in their domain and work together, the more you win. Even though only two or three offensive players are scoring the goals, everyone is contributing to the success of the team by specializing in their domain. Similarly, just because talking to clients and selling the product directly contributes to the revenue of a business, it doesn't mean that engineering at a higher level has no value.
And once again to stress the context here, companies can do whatever they want, but having engineers progress through their careers by moving AWAY from engineering is precisely why there is so much bad software out there. Letting engineers create better software should result in more profit in the long term, just probably not in the short term, and it's also hard for non-technical people to manage. So it is what it is.
Engineering is not only the physical labor. Aircraft engineers and building engineers don’t spend most of their time doing hands on work.
https://www.careerexplorer.com/careers/engineer/
Designing and Planning: Engineers are responsible for designing and planning systems, structures, processes, or technologies. They analyze requirements, gather data, and create detailed plans and specifications to meet project objectives. This involves considering factors such as functionality, safety, efficiency, and cost-effectiveness.
When doing construction work, who adds more value?
The general contractor (the staff software engineer)?
The owners of the plumbing, electrical, and HVAC companies, assuming they have the actual skills (senior-level developers)? The owners of the plumbing companies could very well be making more than the general contractors. This is where you can specialize and the sky is the limit.
The actual certified workers (mid-level developers)? This is the level where the completely heads-down people are. No matter how good they become at being a hands-on plumber, there is a hard ceiling they are going to hit at this level.
The apprentices (juniors)?
I work in consulting. I am currently a staff architect (true - IC5) over a hypothetical project (not a real project). I know the project is going to need a cloud architect, a data architect, and a software architect. They are all specialists at their jobs and are all going to lead their “work streams”. They are all IC4s.
I expect each architect to take the high level business objectives and work with the relevant technical people on both sides and lead their work along with some hands on keyboard work.
They will each have people under them that are not customer facing at all. While I know all of the domains at some level, I’m going to defer to their technical judgement as long as it meets the business objectives. I did my high level designs before they came on to the project. Was my design work, figuring out priorities, risks, making sure it met the client’s needs, discussing trade-offs, etc., not “engineering”?
Each level down from myself IC5 to junior engineers (IC1) is dealing with less scope, impact and ambiguity. There is no reason that the architects shouldn’t get paid as much as I do. They bring to the table technical expertise and depth. I bring to the table delivery experience, being able to disambiguate, and breadth.
No, but software is inherently different because you can leverage existing software to create more software. Every airplane has to be created individually, but software that already exists can be extended or reused by just calling functions or, in the worst case, copy/pasting.
> The actual certified workers (mid-level developers)? This is the level where the completely heads-down people are. No matter how good they become at being a hands-on plumber, there is a hard ceiling they are going to hit at this level.
Yes, with hardware this can be the case, as there is a small number of ways to build something. With software there is no ceiling, and the proof here is AI. We might soon see general intelligence that just codes anything you want. This means software can be designed to automate virtually anything, in any way, shape or form, but it requires more and more expertise.
> I did my high level designs before they came on to the project. Was my design work, figuring out priorities, risks, making sure it met the client’s needs, discussing trade-offs, etc., not “engineering”?
I agree what you're outlining is how the industry works. Perhaps the core of the issue here is how software engineering was modeled after other kinds of engineering with physical limitations. Software is closer to mathematics (arguably it's just mathematics). You can of course still design and plan and delegate, but once the role starts dealing with the high level planning, scheduling, managing, etc., there is less of a requirement for the technical details.
I've worked with architects that didn't know the specifics of a language or design patterns, not because they're bad at engineering but because they had no more time to spend on those details. These details are crucial for good software that is reliable, robust, extensible, etc. Junior and even mid level engineers also don't know these details. Only someone that has been hands on for a long time within a domain can hone these skills, but I have seen so many good engineers become senior or tech leads and then forget these details only to then create software that needs constant fixing and eventually rewriting.
I'm a senior myself and have no choice but to engage in these activities of planning, scheduling, etc., when I can clearly see they do not require technical expertise. You just need some basic general knowledge, and they just are time consuming. My time would be better spent writing advanced code that mid-level and junior level can then expand on (which has happened before with pretty good success, accelerating development and eliminating huge categories of bugs). Instead I have to resort to mediocre solutions that can be delegated. As a result I can see all kinds of problems accumulating with the codebase. It's also really hard to convince the leadership to invest in "high level" engineering because they think that you create more impact by managing an army of low to mid-level engineers instead of leveraging properly written software. I'm convinced that it does add value in the long term, it's just a hard sell. Ultimately I guess it comes down to the type of org and the business needs, which often does not include writing software that will not break. Most companies can afford to write bad software if it means they get to scale by adding more people.
That’s true. But when I put my “software engineering”, “cloud engineer”, or “data engineer” (theoretically) hat on, I can only do the work of one person. No matter how good I am at any of it, I won’t be producing more output than someone equally qualified at my own company. Distributing software has more or less zero marginal cost, and that’s why we get paid more than most industries.
> but I have seen so many good engineers become senior or tech leads and then forget these details only to then create software that needs constant fixing and eventually rewriting.
This is just like both my general contractor analogy and my real world scenario. As a “staff architect”, I come up with the initial design and get stakeholder buy in. But I defer to the SMEs: the cloud architect, the data architect and the software architect, who still eat, sleep and breathe the details in their specialty.
Just like the owners of the plumbing company, HVAC company and electrical company are the subject matter experts. The general contractor defers to them.
In the consulting industry at least, there are two ways you can get to the top, by focusing on depth or breadth. But in neither case can you do it by being staff augmentation (the plumber, electrician, or HVAC person), you still have to deal with strategy.
> You just need some basic general knowledge, and they just are time consuming
Knowledge isn’t the issue; it’s wisdom that only comes with experience, which I assume you have. Going back to the hypothetical large implementation: it involves someone who does know architecture, development and data. No matter how good your code is, high availability, fault tolerance, redundancy, and even throughput come from the underlying architecture. Code and hands-on implementation are usually the least challenging part of a delivery.
It’s knowing how to deal with organizational issues, manage dependencies, suss out requirements, etc.
There are really way too many developers who care way more about the code than the product, let alone the business.
It ends up like fine dining, where 99% of the time a Big Mac would've been tastier and made the customer happier, but wouldn't justify the effort and price.
I'm sorry but I don't buy it.
I've been way too close, way too often, to Lisp and Haskell and many other niches (I know both Racket/Scheme and Haskell well, btw): the people that care that much about correct and reliable and extensible software care about the code more than they care about the products.
That's why the languages that breed creativity and stress correctness have very little, if any, killer software to show, while PHP/Java have tons of it.
First, hardware has improved consistently, outpacing any need for optimal software. Second, the end user does not know what they really want from software until it is provided to them, so most people will easily put up with slow, laggy and buggy software until there is a competitor that can provide a better experience. In other words, if companies can make money from making bad software, they will, and this is often what happens because most people don't know any different and also hardware becomes more and more powerful which alleviates and masks the defects in software.
I think there is also another big factor, which is that businesses prefer to scale by adding more people, rather than by making better software. Because they want to scale along with human engineers, they tend to prefer low-tier software that is easy for the masses to pick up, rather than the high-level specialized software that is a hard skill to acquire. This is understandable, but it is also the reason why software is so slow to advance compared to hardware (and by the same token, hardware engineers require more specialization).
An alternative but dangerous approach is to make it known you’re looking elsewhere for work. Don’t do that if it’s relatively easy to replace you, and definitely assume the management thinks it’s easy to replace you, especially if you haven’t been talking to your boss. ;) But there is the chance that they know you’re valuable and haven’t given you a raise because you seem content and they believe they have the upper hand - which may or may not be true.
When I left a company for increased compensation, which funny enough has only been 3x in almost 30 years across 10 jobs, it’s been between a 25%-60% raise. It’s almost impossible for any manager to push that kind of raise for anyone without a promotion.
Even at BigTech promotions usually come with lower raises than if you came in at that level.
> Don’t do that if it’s relatively easy to replace you, and definitely assume the management thinks it’s easy to replace you, especially if you haven’t been talking to your boss.
Everyone is replaceable. If you are at a company where everyone isn’t replaceable, it’s a poorly run company with a key man risk. I never saw any company outside of a one man company or professional practice where one person leaving was the end of the company.
Very true! Though isn’t it also normal in that case for HR to be recommending inflation raises at least? The exception might be if you came in at a salary that’s higher than your peer group and/or high for the title range. Parent’s problem could be that - either peer group correction or not possible for manager to raise at all without a promotion by company rules. There’s lots of reasons I can imagine, but in any case I wouldn’t expect a change with status quo, right? If you haven’t been talking to your boss, continuing to not talk to your boss is unlikely to change anything.
Bad software keeps happening because businesses can afford it and because hardware keeps improving. It's a combination of consumers not knowing what they are missing and hardware advancements allowing bad software to exist.
If you aren’t writing software with the customer and business in mind, why are you doing it? That’s what you are getting paid for.
That said, AI is about to change all this... but ironically this justifies my position. If software can be so powerful that you can have a general intelligence that can do almost any cognitive task... then it follows that all this time engineers could also have been engineering clever systems that add a lot of value to the company, if engineered correctly.
There is no ceiling on how much performance and value can be squeezed out of a software system, but this never happens because businesses are not investing in actual engineering but rather in technical entrepreneurs who can bring something to market quickly enough.
There is no “implicit value” of software to a company. The only value of software to a company is whether it makes the company money or saves the company money. That’s it; there is no other reason for a company to pay anyone except to bring more value to the company than they cost to employ.
> If software can be so powerful that you can have a general intelligence that can do almost any cognitive task... then it follows that all this time engineers could also have been engineering clever systems that add a lot of value to the company, if engineered correctly.
It’s just the opposite. If AI can do any of the coding that a software engineer can do (and I am not saying that’s possible or ever will be), what becomes even more important are the people who know how to derive business value out of AI.
> There is no ceiling on how much performance and value can be squeezed out of a software system
That may be true. But what’s the cost-benefit analysis? Should we all be programming in assembly? Should game developers write custom, bespoke game engines and optimize them for each platform?
The implicit part is that if you engineer a good system, then it saves money with fewer bugs and breaks less, and also makes money by allowing faster development and iterations.
There are plenty of examples here. I could point at how the PlayStation network just went down for 24 hours, or how UIs are often still very laggy and buggy, or I can also point at companies like Vercel that are (I assume) very valuable by providing a convenient and easy way to deploy applications... the fact that there are many SaaS out there providing convenience of development proves that this adds value. Despite this businesses are not having their engineers do this in-house because somehow they don't see the immediate ROI for their own business. I would just call that lack of vision or creativity at the business level, where you can't see the value of a well engineered system.
Businesses are free to run their company in whichever way they please, and they can create crappy software if it makes them money, but the point is that when this is industry-wide it cripples the evolution of software and this is then felt by everyone with downtimes and bad experiences, even though hardware is unbelievably fast and performant.
> Should game developers write custom, bespoke game engines and optimize them for each platform?
This is a good example actually. Most companies that want full creative control are making their own engines. The only exception here is Unreal (other smaller engines are not used by large companies), and from what I can tell the Unreal engine is an example of great software. This is one of those exceptions where engineers are actually doing engineering and the company probably can't afford to have them do something else. Many companies could benefit from this, but it's just not as straight a line from engineering to profit, and that's kind of the root of why there is so much bad software out there.
Part of “meeting requirements” is always RTO, RPO, the availability requirements, latency, responsiveness, etc.
Why care about quality or maintainability if you are gone in a year or two anyway...
He works as a highly-skilled tech, at a major medical/scientific corporation. They have invested years of training in him, he brings them tremendous value, and they know it. He was just telling me how he used that value to negotiate a higher compensation package for himself. Not as good as if he swapped jobs, but he really has a sweet gig.
People who stay take Responsibility for the code they write. They will need to face the music if it doesn't work, even if they are not responsible for maintaining it.
They are also worth investing in specialized training, as that training will give great ROI, over time.
But keeping skilled people is something that modern management philosophy (in tech, at least) doesn't seem to care about.
Until corporations improve the quality of their managers; especially their "first-line" managers, and improve their culture, geared towards retaining top talent (which includes paying them more -but there's a lot more that needs doing), I can't, with good conscience, advise folks not to bounce.
If your main motivation for working is to exchange your labor for the maximum amount of money possible, I don’t see how that is the positive outcome you think it is.
I personally wouldn’t leave my current job if another one for $100K more fell into my lap. But the “unlimited PTO” where the custom is to take at least 5-6 weeks off during the year not including paid holidays and it being fully remote is hard to beat.
I mean pretty much exactly what you said.
I apologize for being unclear.
I’m the founder of a 10-person company and this is the first thing we think about. Except for low performers; except that youngsters need a variety of experience to be proficient at life; except when the team is not performing well (1). 25% or 30% increases for half the workforce are frequent.
(1) The biggest remark from management coaches is that giving raises lowers employee performance, which I can fully witness in my company. It’s not even good for morale. I’m just happy that people exit the company fitter and with a girlfriend, even a kid and sometimes a permanent residency, but business-wise I’ve done no better than a bad leader.
I’m reaching the sad conclusion that employees bring it upon themselves.
My friend could double his salary, moving almost anywhere else, but he gets a lot of perks at his work, and is treated extremely well by his managers.
They just gave him a rave review, and that did more to boost his willingness to stay than a 10% raise would. He will still negotiate a better salary, but he is more likely to be satisfied with less than he might have been if they had treated him badly.
Treating employees with Respect can actually improve the bottom line. They may well be willing to remain in difficult situations, if they feel they are personally valued.
I know this from personal experience. When they rolled up my team, after almost 27 years, the employee with the least tenure had a decade. These were top-shelf C++ image processing engineers who could have gotten much higher salaries elsewhere.
The problem is our current form of corporate culture. Employees don't feel like they matter; their efforts are a cog in a wheel. If you get a raise in this type of culture, it only matters to the bottom line, and there is no incentive to produce because the employee is already unhappy in the first place.
Change your business culture and these problems will disappear, IMHO.
We also have a 401K match with an immediate vest.
If they cared, then job hopping would not exist. If staying at a company were more beneficial to your salary, why would you ever want to change companies, if you are otherwise happy?
Coming up with a neat API that turns out to be difficult to modify in the future, or limiting in ways you didn't imagine when writing it, is a good learning experience.
Or seeing how long a system can survive growing usage -- Maybe a simple hack works better than anyone expected because you can just pay more for RAM/CPU each year rather than rebuild into a distributed fashion. Or the opposite, maybe there's some scaling factor or threshold you didn't know existed and system performance craters earlier than predicted.
This is the root problem. None of the problems the GP pointed out were created by software developers.
Now, if you want to know the consequences, it causes an entire generation of people that don't really know what they are doing because they never see the long-term consequences of their actions. But again, it's not the software developers that are causing this, nor are they the ones that should fix it.
FOSS doesn't have these problems.
FOSS is certainly guilty too.
It fosters a culture where everyone can hack something together, and where everyone is knowledgeable enough to make responsible use of technology.
Working as a for-hire developer doesn't let you experience all of that because you're building a product that someone else wants you to build. No wonder one does not give a shit about writing good software at that point! You've taken all the fun and personal fulfillment out of it!
We can build anything we put our mind to -- but most of us are too busy churning out CRUD boilerplate like factory workers. That's depressing.
The complexity/dependency graph of a random application nowadays is absolutely insane. I don't count everything in this, including the firmware and the OS, like Muratori does in his video [1], but it is close enough. The transitive dependency problem needs to be solved. And we need to do something about Bill/Guido taking away all that Andy gives.
I consider the OS (Win32 API, Linux syscalls) to be the only hard dependency for anything I write in C. Tend to avoid libc because of distribution issues. But you have no control over this layer once you switch over to Java/Python.
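A minimal sketch of what that looks like on x86-64 Linux: writing to stdout through the raw syscall interface, with no libc in the picture. The helper name is made up, and it is obviously architecture- and OS-specific (gcc/clang inline asm):

    // write(fd, buf, len) via the raw x86-64 Linux syscall instruction.
    // SYS_write is syscall number 1; arguments go in rdi, rsi, rdx.
    static long sys_write(int fd, const void *buf, unsigned long len) {
        long ret;
        __asm__ volatile("syscall"
                         : "=a"(ret)                            /* result comes back in rax */
                         : "a"(1), "D"(fd), "S"(buf), "d"(len)  /* rax=1, rdi, rsi, rdx */
                         : "rcx", "r11", "memory");             /* clobbered by syscall */
        return ret;
    }

    int main(void) {
        const char msg[] = "no libc was harmed\n";
        sys_write(1, msg, sizeof(msg) - 1);
        return 0;
    }

On Windows the equivalent hard layer would be the Win32 API rather than raw syscalls, since the syscall numbers there are not stable across builds.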
The only thing you can then do is stop depending on every library out there to avoid writing a couple of hundred lines of code specific to your situation. It definitely increases the maintenance burden. But dependencies are not maintenance-free either. They could have the wrong API which you have to wrap around, or will break compatibility at random times, or become abandonware/malware or have some security flaw in them (rsync had a major security vulnerability just last month).
My personal max line count for a useful project that does one thing is 5-10KLOC of Java/JS/Python. It is something I can go through in a couple of hours and can easily fix a few years down the line if I need to.
[1] The Thirty Million Line Problem (2015) https://youtu.be/kZRE7HIO3vk
Then, PL creators whose language isn’t even at 1.0 light the fuse.
If all people were identical and everyone was on the same level, things would be different. But that is not how the real world works.
'We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels.'
and this
'We are destroying software pushing for rewrites of things that work.'
But generally speaking I grok it.
The problem, IMO, is globally applying either rule.
I think about this a lot. A general-purpose approach makes it easy to hop between shallow solutions to many problems. So technologists love to have a few of these on hand, especially because they job hop. They're well known, and managers love to say "why not just use XYZ".
But it's obvious that a fine-tuned, hand-crafted solution (possibly built from a general one?) is going to significantly outperform a general one.
I am unable to reconcile these two seemingly contradictory takes in the article (rest of it I concur with):
"We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
"We are destroying software pushing for rewrites of things that work."
By rewrite do you mean 1-to-1 porting the code to another language because reason is: faster/easier/scalable/maintainable/_____ (insert your valid reason here)?
There are dozens of homelab solutions for this (homepage, heimdall, etc.) but none of them was simple enough, framework-free or able to run completely offline (or even from cache).
Being able to code something that fits into a couple of screenfuls of text and does 99% of what I need still feels like “the right way”.
I think the situation (loosely, wasting time spinning your wheels by modernizing things that are actually fine the way they are and may be made worse by adopting "hot new thing") looks worse when you see it through that lens than it actually is throughout the industry as a whole. There are plenty of opportunities for modernization and doing some of the things described in this article that actually make some sense when applied to appropriate situations.
In other words, I totally understand the vibes of this post; it's one of the reasons I don't work in the parts of the industry where this attitude is most prevalent. I would never feel the push to write a post like it, though, because the poster is, I think, being a bit dramatic. At least that's the case looking at the industry from my (I'd argue broader) vantage point of being an expert in and working quite a bit with "legacy" companies and technologies that maybe could stand to have a new UI implemented and some tasteful modern conventions adopted, at least as options for the end users.
This post reads as a description of how the wheels come off the wagon if you don't do things well.
With the evolution of AI agents, we'll all be virtual CTOs whether we like it or not, mastering scale whether we like it or not. We need to learn to do things well.
Managerial genius will be the whole ball of wax.
Without a spec, the world falls into disarray and chance; next you need QA tests, and security auditors, and red teams, salespeople, solutions engineers, tech support, training, and a cascade of managers who try and translate “want” into “does” and “when” and to understand and accept the results. Architects and seniors who are both domain experts and skilled coders as the single truth on what the GUI is even supposed to mean. Taking on varying levels of risk with contracts, hires or expanding and contracting the previously mentioned R&D units. That’s not software anymore, that’s consulting. It’s so expensive and unsustainable that it’s only a matter of time until you’re the leg that gets gnawed off, which is inevitably the result when burn and panic (or other negative factors) lead you away from turning pain points (like issues, training, or difficult usage) into ice cubes for a cocktail.
We're destroying software when we think documentation does not matter.
I was going to phrase it: We are destroying software by neglecting to document it and its design decisions.
With each new language and new library, there's a chance to do it better. It's quite a wasteful process, but that's different from things getting worse.
Software engineering is far from a monoculture.
Maybe what I’ve seen change over the years is the strategy of “pull a bunch of component systems together and wire them up to make your system” used to be more of an enterprise strategy but is now common at smaller companies too. Batteries included frameworks are out, batteries included dependencies in a bunch of services and docker images are in.
It’s true many people don’t invent wheels enough. But there are a lot of developers out there now…
I am reminded of howto guides that require downloading a scaffolding tool to set up a project structure and signing up with some service before even getting to the first line of code.
You might say "I don't need to be able to propose a solution in order to point out problems", and sure, but that's missing the point. Because by pointing out a problem, you are still implicitly asserting that some solution exists. And the counter to that is: no, no solution exists, and if you have no evidence in favor of your assertion that a solution exists, then I am allowed to counter with exactly as much evidence asserting that no solution exists.
Propose a solution if you want complaints to be taken seriously. More people pointing out the problems at this point contributes nothing; we all know everything is shit, what are you proposing we do about it?
You give an engineer ownership and let them learn from their own mistakes, rise to the occasion when the stakes are high. This presumes they will have the last word on changes to that sandbox that they own. If they want to rewrite it — that’s their call. In the end they’ll create a codebase they’re happy to maintain, and we will all win.
(And I think they'll be a happier engineer too.)
Weird take.
Um, no you're not.
> And the counter to that is: no, no solution exists,
If that's the case, it's probably helpful to know it.
> we all know everything is shit, what are you proposing we do about it?
Give up on whatever doomed goal you were trying to reach, instead of continuing to waste time on it?
> We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
> We are destroying software by always thinking that the de-facto standard for XYZ is better than what we can do, tailored specifically for our use case.
> We are destroying software mistaking it for a purely engineering discipline.
> We are destroying software claiming that code comments are useless.
> We are destroying software by always underestimating how hard it is to work with existing complex libraries VS creating our stuff.
> We are destroying software trying to produce code as fast as possible, not as well designed as possible.
This one raises the question: "What is 'fast'?". I mean, is it high-performance, or quickly-written? (I think either one is a problem, but "quickly-written" leads to bad software, and overly-optimized software can be quite hard to maintain and extend).
> We are destroying software by jumping on every new language, paradigm, and framework.
Whilst I largely agree with the premise of the post, some of these points feel a little bit dissonant and contradictory. We can have stability and engineering quality _and_ innovation. What I think the author is trying to target is unnecessary churn and replacements that are net-worse. Some churn is inevitable, as it’s part of learning and part of R&D.
That culture usually starts at the top if you have it in your company, but occasionally you will just be lucky enough to have it in your team.
So before you write some software, you describe it to your team in a document and get feedback (or maybe just your manager).
If you don't do this, you don't really have a software design culture at your company. My rule was basically that if a piece of software required more than 1 Jira ticket, then create a doc for it.
Legacy systems are born from rushing into implementation to feel productive rather than understanding the problem. Slow is smooth. Smooth is fast.
If asked to implement a templated dictionary type, many C++ programmers will write a templated class. If they want to be responsible, they’ll write unit tests. But the simplest way is:
template<typename Key, typename Value>
using Dictionary = std::vector<std::pair<Key, Value>>;
It is trivially correct and won’t have any edge cases which the tests may or may not catch.
Some programmers would consider this implementation beneath them. In just 800 lines, they could build the type from scratch and with only a few months’ long tail of bugs.
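For what it's worth, here is roughly what using that alias looks like; the lookup() helper and the sample data are made up, but the point stands that there is very little here to get wrong:

    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <utility>
    #include <vector>

    template <typename Key, typename Value>
    using Dictionary = std::vector<std::pair<Key, Value>>;

    // Linear search: fine for small maps, and there is no hashing,
    // rebalancing, or iterator-invalidation subtlety to test.
    template <typename Key, typename Value>
    const Value* lookup(const Dictionary<Key, Value>& dict, const Key& key) {
        auto it = std::find_if(dict.begin(), dict.end(),
                               [&](const auto& kv) { return kv.first == key; });
        return it == dict.end() ? nullptr : &it->second;
    }

    int main() {
        Dictionary<std::string, int> ages = {{"ada", 36}, {"alan", 41}};
        if (const int* age = lookup(ages, std::string("ada")))
            std::printf("ada is %d\n", *age);
        return 0;
    }

A real codebase might eventually want std::map or a sorted vector with std::lower_bound, but for a handful of entries this is hard to beat.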
When I do a git blame of something coherent that worked well in production, that I wrote years ago, almost none of my original code has survived; every line is now written by a different person. That's the entropy of software systems left unchecked. It was not really rushed; there was a ton of time spent debating design before a single line was written.
Many of these have been an issue since I started programming many decades ago on minis. It seems that as the hardware gets more powerful, we find ways to stress it more and more, making your list true to a greater extent year after year.
BTW, I like your web site's format a lot!
Yet: https://x.com/antirez/status/1885994950875046164
The reality is that one man's "unnecessary complexity" is another's table-stakes feature. The post isn't entirely without merit, but most of it reads like a cranky old-timer who yearns for a simpler time.
Also, I'm very enthusiastic about modern AI and definitely open to new things that make a difference. The complexity I point my finger at in the post is all unnecessary for software evolution. Actually, it prevents going forward and making it better, because one has to fight with the internal complexity, or with a culture that considers rewriting the software with X or using Y to be innovation.
Take, for example, the web situation. We still write into forms and push "post" buttons. Now we trigger millions of lines of code, but the result is basically the same as 20 years ago.
To me this is the surest way to tell the adults from the children in the room. People that can actually program, strangely enough, aren’t bothered by programming.
This one packs a lot of wisdom.
I'm not a masochist; they're often built on top of components, e.g. my IDE uses the Monaco editor. But working on these tools gives me a sense of ownership, and lets me hack them into exactly the thing I want rather than e.g. the thing that Microsoft's (talented! well-paid! numerous!) designers want me to use. Hacking on them brings me joy.
Like an idealised traditional carpenter, I make (within reason) my own tools. Is this the most rational, engineering-minded approach? I mean, obviously not. But it brings me joy -- and it also helps me get shit done.
Or, more generally, the fact that most of what the software industry produces is much more in line with “art” than “engineering” (especially when looked at from the standpoint of a Mechanical Engineer or Civil Engineer). We have so much implementation flexibility to achieve very similar results that it can be dizzying from the standpoint of other engineering fields.
Setting up uv, Dockerfiles, GitHub Actions, Secrets etc. took me basically the whole day. I can very much relate to "We are destroying software with complex build systems."
People tend to adapt to technology by becoming more lazy, putting less effort into understanding. Look at how, after calculators became common, we got generations of people who struggle to do basic math.
You'll also find that most people still do, without a calculator, the same stuff they would have done without a calculator. The advantage now is that when they do reach for the calculator, they aren't working with perhaps 2 digits of precision like the slide rule that preceded it, or having to tabulate large amounts of figures to maintain precision.
Do you have any data behind this claim?
> When developers become too dependent on AI to "assist" them in coding, they're more likely to be unable to debug "their own" code as they won't have a full grasp/understanding of it.
Again, do you have any evidence that this happens?
(This is an allusion to Kernighan’s lever.)
What is it about software that causes us to search for aesthetic qualities in the instructions we write for a machine? Ultimately the problems we’re solving, for most of us, will be utterly meaningless or obsolete in ten years at most.
That illusion has been lifted a little harshly for a lot of people over the past year or so. I still enjoy software-as-craft but I don't hold any false belief that my day job does.
He makes very good points. But he missed one. We are destroying software (or anything else) by waiting till something goes wrong to fix it. E.g.: software security, US food standards and their relation to the health of its citizens, etc.
> We are destroying software by no longer caring about backward APIs compatibility.
Come on, who believes this shit? Plenty of people care about API backwards compatibility.
> We are destroying software pushing for rewrites of things that work.
So never rewrite something if it "works"? There are plenty of other good reasons to rewrite. You might as well say "We are destroying software by doing things that we shouldn't do!"
> We are destroying software claiming that code comments are useless.
No we aren't. The crazy "no comments" people are a fringe minority. Don't confuse "no comments" (clearly insane) with "code should be self-documenting" (clearly a good idea where possible).
Worthless list.
My take: SemVer is the worst thing to happen to software engineering, ever.
It was designed as a way to inform dependents that you have a breaking change. But all it has done is enable developers to make these breaking changes in the first place, under the protective umbrella of “I’ll just bump the major version.”
In a better universe, semver wouldn’t exist, and instead people would just understand that breaking changes must never happen, unless the breakage is obviously warranted and it’s clear that all downstreams are okay with the change (i.e., nobody’s using the broken path any more).
Instead we have a world where SemVer gives people a blank check to change their mind about what API they want, regularly and often, and for them to be comfortable that they won’t break anyone because SemVer will stop people from updating.
But you can’t just not update your dependencies. It’s not like API authors are maintaining N different versions and doing bug fixes going all the way back to 1.0. No, they just bump majors all the time and refactor all over the place, never even thinking about maintaining old versions. So if you don’t do a breaking update, you’re just delaying the inevitable, because all the fixes you may need are only going to be in the latest version. So any old major versions you’re on are by definition technical debt.
So as a consumer, you have to regularly do breaking upgrades to your dependencies and refactor your code to work with whatever whim your dependency is chasing this week. That callback function that used to work now requires a full interface just because; half the functions were renamed; and things you used to be able to do are replaced with things that only do half of what you need. This happens all the god damned time, and not just in languages like JavaScript and Python. I see it constantly in Rust as well. (Hello Axum. You deserve naming and shaming here.)
In a better universe, you’d have to think very long and very carefully about any API you offer. Anything you may change your mind on later, you better minimize. Make your surface area as small as possible. Keep opinions to a minimum. Be as flexible as you can. Don’t paint yourself into a corner. And if you really, really need to do a big refactor, you can’t just bump major versions: you have to start a new project, pick a new name (!), and find someone to maintain the old one. This is how software used to work, and I would love so much to get back to it.
Which is just fine when it is an unfunded free software project. No one owes you anything in that case, let alone backwards compatibility.
It’s not an automatic outcome of free software either. The Linux kernel is famous for “we don’t break user space, ever”, and some of Linus’s most heated rants have come from this topic. All of GNU is made of software that doesn’t break backwards compatibility. libc, all the core utilities, etc, all have maintained deprecated features basically forever. It’s all free software.
Forbidding breaking changes isn't going to magically make people produce the perfect API out of the gate. It just means that fixes either don't get implemented at all, or get implemented as bolt-on warts that sow confusion and add complexity.
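Concretely, the "bolt-on wart" pattern usually looks something like this (names here are hypothetical, not from any real library): the old entry point lives on forever as a deprecated shim over the new one, and the API surface only ever grows.

    #include <string>
    #include <string_view>

    // v2: the signature the maintainer wishes they had shipped in 1.0.
    inline bool parse_view(std::string_view input) {
        return !input.empty();  // stand-in for the real parsing logic
    }

    // v1: shipped in 1.0 and kept so existing callers never break;
    // now just a thin wrapper, flagged for anyone who still uses it.
    [[deprecated("use parse_view() instead")]]
    inline bool parse(const std::string& input) {
        return parse_view(input);
    }

    int main() {
        return parse_view("1.2.3") ? 0 : 1;
    }

Whether that shim is responsible stewardship or accumulated confusion is basically the disagreement in this subthread.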
Or, as others may say: "There is software, and there are things that remind people of software".
It's easy to get caught up in what you dislike about software, but you have to make yourself try new stuff too. There's always more delight to be found. Right now, I think it's good to embrace LLM assistants. Being able to sit down and avoid all the most tedious parts and focus purely on engineering and design makes every session much more enjoyable.
Software itself is better now than it has ever been. We're slowly chipping away at our own superstitions. Just because some people are fully employed building empty VC scams and software to nowhere does not damn the rest of the practice.
I think that writing software for an employer has always kind of sucked. There's never enough time to improve things to the state they should be and you're always one weird product feature from having to completely mess up your nice abstraction.
I do feel like writing hobby software is in a great state now though. I can sit down for 30 minutes and now with Cursor/LLM assistance get a lot of code written. I'm actually kind of excited to see what new open source projects will exist in a few years with the aid of the new tools.
Isn't this kind of ironic?
Arguing against 10x complexity & abstractions, with 1x coding :-)
I believe we'll soon be looking for "the joy of programming" in a totally different way, far more outcome-oriented than it is today. (Which is, in my book, a good thing!)
This idea is getting the causality arrows backwards. I'm not talking up AI because I'm in AI - I'm in AI because I believe it is revolutionary. I've been involved in more fields than most software devs, I believe, from embedded programming to 3d to data to (now) AI - and the shift towards Data & AI has been an intentional transition to what I consider most important.
I have the great fortune of working in what I consider the most important field today.
> But for me, and many others, the joy of programming is in doing the programming. There is not a more outcome-driven modality that can bring us joy. And we either reject the premise or are grieving that it might eventually be true.
This is an interesting sentiment. I certainly share it to some extent, though as I've evolved over the years, I've chosen, somewhat on purpose, to focus more on outcomes than on the programming itself. Or at least, the low-level programming.
I'm actually pretty glad that I can focus on big picture nowadays - "what do I want to actually achieve" vs "how do I want to achieve it", which is still super technical btw, and let LLMs fill in the details (to the extent that they can).
Everyone can enjoy what they want, but learning how to use this year's favorite library for "get back an HTML source from a URL and parse it" or "display a UI that lets a user pick a date" is not particularly interesting or challenging for me; those are details that I'd just as soon avoid. I prefer to focus on big picture stuff like "what is this function/class/file/whatever supposed to be doing, what are the steps it should take", etc.
We continue to add complexity for the sake of complexity, rarely because of necessity. We never question adding new things, yet there's always an excuse not to streamline, not to remove, not to consolidate.
We must deliver new features, because that is progress. We must never remove old systems, because that harms us. We must support everything forever, because it's always too costly to modernize, or update, or replace.
It doesn't matter if the problem doesn't exist, what matters is that by incorporating this product, we solve the problem.
We must move everything off the mainframe and onto servers. We must move every server into virtual machines. We must move every virtual machine into AWS. Every EC2 instance into a container. Every container into Kubernetes. Into OpenShift. We must move this code into Lambda. Do not question the reasoning or its value, it must be done.
How did our budget balloon in size? Why are our expenses always going up? Why is our cloud bill so high? We must outsource, clearly. We must hire abroad. We must retain contractors and consultants. We need more people with certifications, not experience.
Why is everything broken? Why is our data leaking into the public realm? How did we get hacked? We must bring in more contractors, more consultants. That is the only answer. We don't need internal staff, we need managers who can handle vendors, who can judge KPIs, who can identify better contractors at cheaper costs year after year.
Why is the competition beating us? What vendor should we go with to one-up our competition? We must consult the Gartner report. We must talk to other CIOs. We must never talk to our staff, because they don't understand the problem.
We don't like our vendor anymore. They raised prices on us, or didn't take us to that restaurant we liked for contract negotiations, or didn't get us box seats for that event this year. They must go. What do you mean we can't just leave AWS this quarter? What do you mean we can't just migrate our infrastructure to another competitor? That proprietary product is an industry standard, so just move to another one. What are we even paying you for?
We checked all the boxes. We completed all the audits. We incorporated all the latest, greatest technologies. We did everything we were told to. So why aren't we successful? Why aren't we growing?
...ah, that's why. We didn't incorporate AI. That must be it. That will fix it all.
Then we'll be successful. Then everything will work.
And sometimes the reverse problem is what is destroying software: being unwilling to push for rewrites of things when they desperately need one. I think we may have over-indexed on "never rewrite" as an industry.
> We are destroying software mistaking it for a purely engineering discipline.
And I'm also seeing this the other way: engineers are beginning to destroy non software things, because they think they can blindly apply engineering principles to everything else.
I just tried to develop a simple CRUD-style database UI for a mechanical hobby project. Being on the backend/systems side of the spectrum, for the UI I decided "yeah, considering I work on a Mac now, and there's no such thing as WinForms here, let's quickly throw together something small with python and tkinter". God, that was a mistake and led to both suffering and lost days during which I did not work on the main project.
How is it that in 2025, we still do not have some sort of Rapid Application Development tool for Mac? How do we still not have some sort of tool that allows us to just drag a database table into a window, and we have basic DB functionality in that window's app? Jobs demonstrated that capability in the NeXT demo, Visual Studio has been doing it for decades, so has Delphi. But on Mac?
Swift is a train wreck, tkinter is like painting a picture with yourself being blindfolded, Qt does a lot of things right, but has a funky license model I am not comfortable with - and has pain points of its own.
I eventually coughed up the 100 bucks and got a Xojo license ... now I am working in BASIC, for the first time since 1992 - but it has an interface designer. For a massive boost in usability, I have to get back into a language I wished to forget a long time ago.
And that does not spark joy.
Yes, bloat is bad. Yes, I too make fun of people who have to import some npm package to check whether a number is odd or even.
But sometimes, you are not Michelangelo, chiselling a new David. Sometimes, you just need a quick way to do something small. Actively preventing me from achieving that by refusing to go with the times is destructive.
50 IQ: haha software is complicated whatever
100 IQ: (this post)
150 IQ: software is complicated whatever
Until the incentives change, the outcome won't change.
Pointless meetings and bureaucracy also don't help. Instead of giving engineers time and breathing room to build well-defined systems, organisations treat them like fungible workhorses that must meet arbitrary deadlines.
Semi-related: Start using profane variable names because apparently it will cause Copilot to stop analyzing your code.
Read responsively
"We are destroying software by no longer taking complexity into account when adding features or optimizing some dimension.
And we are destroying software with complex build systems.
We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
And we are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels."
Yie-die-die-die-diiiiieee-dieeee,Yie die die die dieee dieeee
I think people vastly underestimate "the wheel". The wheel was something recreated independently over and over again for thousands of years across human history. Even your favorite programming language or web framework is not comparable to "the wheel".
When we build software, we answer three questions: "what?", "how?", and "why?". The answer to what becomes the data (and its structure). The answer to how is the functionality (and the UI/UX that exposes it). The answer to why is...more complicated.
The question why is answered in the process of design and implementation, by every decision in the development process. Each of these decisions becomes a wall of assumption: because it is the designer - not the end user - making that decision.
Very rarely can the end user move or replace walls of assumption. The only real alternative is for the user to alter the original source code such that it answers their why instead.
Collaboration is the ultimate goal. Not just collaboration between people: collaboration between answers. We often call this "compatibility" or "derivative work".
Copyright, at its very core, makes collaboration illegal by default. Want to make a CUDA implementation for AMD cards? You must not collaborate with the existing NVIDIA implementation of CUDA, because NVIDIA has a copyright monopoly. You must start over instead. This is NVIDIA's moat.
Of course, even if copyright was not in the way, it would still be challenging to build compatibility without access to source code. It's important to note that NVIDIA's greatest incentive to keep their source code private is so they can leverage the incompatibility that fills their moat. Without the monopoly granted/demanded by copyright, NVIDIA would still have a moat of "proprietary trade secrets", including the source code of their CUDA implementation.
Free software answers this by keeping copyright and turning it the other direction. A copyleft license demands source code is shared so that collaboration is guaranteed to be available. This works exclusively for software that participates, and that is effectively its own wall.
I think we would be better off without copyright. The collaboration we can guarantee through copyleft is huge, but it is clearly outweighed by the oligopoly that rules our society: an oligopoly constructed of moats whose very foundations are the incompatibility that is legally preserved through copyright.
In a world of mass hallucination (psychosis), doom-scrolling and international capital, software decay is a logical process. Your Leaders have no incentive to think systemically when they have guaranteed immunity.
Good software is the result of a meritocratic system with accountability and transparency.
Just take a statistical view on investment in education vs investment in AI infrastructure and the picture becomes clear.
The designer fallacy. Techno-optimism. Detachment from the real problems of humanity. Desensitized obedience for career growth or social benefits.
We build software on the shoulders of giants with a sense of reality and human connection. We lost this skill.
You cannot abstract to infinity. You cannot complicate things just because and expect quality and maintainability to emerge from the ether.
Even more fundamentally, it's built on the ubiquitous mistake made by those who make good money in the current status quo: it doesn't name the actual cause of rushed work et al., which is obviously capitalist cost-cutting, not lazy professionals.
> We are destroying software with an absurd chain of dependencies, making everything bloated and fragile.
> We are destroying software by making systems that no longer scale down: simple things should be simple to accomplish, in any system.
That's true, and I'd say we've got proof of that in the fact that so much software is now run in containers.
I always get downvoted for saying that it's not normal we now all run things in containers but I do run my own little infra at home. It's all VMs and containers. I know the drill.
It's not normal that, to do something simple which should be "dumb", it's easier to just launch a container and then interface with the thing using new API calls that are going to be outdated at the next release. We lost something, and it's proof we gave up.
This containerization-of-all-the-things is because we produce and consume turds.
Build complexity went through the roof, so we have to isolate a specific build environment in a container file (or, worse, a specific environment tailored to accept one build already made).
Criticize Emacs as much as you want: the thing builds just fine from source with way more lines of code (moreover, in several languages) than most projects. And it has built fine for decades (at least for me). And it doesn't crash (emacs-uptime -> 8 days, 5 hours, 48 minutes, and that's nothing; it could be months, but I sometimes turn my computer off).
Nowadays you want to run this or that: you better deploy a container to deal with the build complexity, deployment complexity and interacting complexity (where you'll use, say, the soon-to-be-updated REST calls). And you just traded performance for slow-as-molasses-I-wrap-everything-in-JSON calls.
And of course because you just deployed a turd that's going to crash anyway, you have heartbeats to monitor the service and we all applaud when it automatically gets restarted in another container once it crashed: "look what a stable system we have, it's available again?" (wait what, it just crashed again, oh but no problem: we just relaunched another time)
It's sad really.
I just open sourced a CLI tool for income and expense tracking yesterday at https://github.com/nickjj/plutus and I'd like to think I avoided destruction for each of those bullets.
You have a choice on the code you write and who you're writing it for. If you believe these bullets, you can adhere to them and if you're pressured at work to not, there are other opportunities out there. Don't compromise on your core beliefs. Not all software is destroyed.