hamstergene 1 day ago [-]
Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put into words, that once the words are said they mean to the reader what the speaker meant, and that the only difficulty could come from not knowing the words or from ambiguity. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred well via words; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields surprisingly correct insights, surprisingly often, into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise for themselves.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
hibikir 17 hours ago [-]
A non-trivial part of the big difference between the juniors who seem talented and "get it", and those who don't, is precisely their ability to form accurate-enough world models quickly. You can tell who is grasping the "physics" of software and applying them, and who is just writing down recipes without trying to understand the nature of any of the steps.
It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
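A toy sketch of that translation (Python standing in for both worlds; monads aside, the core move is that the mutable var becomes an accumulator passed explicitly from step to step):

```python
from functools import reduce

# imperative world: a mutable var threaded through a loop
def running_max_imperative(xs):
    best = float("-inf")
    out = []
    for x in xs:
        best = max(best, x)  # the "var" is updated in place
        out.append(best)
    return out

# functional world: the same computation as a fold; state is never
# mutated, it is the accumulator handed from one step to the next
def running_max_functional(xs):
    def step(acc, x):
        history, best = acc
        new_best = max(best, x)
        return history + [new_best], new_best
    history, _ = reduce(step, xs, ([], float("-inf")))
    return history
```

Recognizing that the loop variable and the fold accumulator are the same "bone" is what makes the translation feel easy.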
ruszki 15 hours ago [-]
Even as a junior I was the kind who tried to understand the nature of the steps. I failed many times, but I learned from those failures all the time. I remember my mutable public static variables and my terrible small JavaScript apps. But every time I did something like that, I tried to understand it. I knew that I had failed. Sometimes it took me a year or more (like when I first encountered React about a decade ago and immediately understood why some of my earlier apps had failed architecturally).
However, I've seen developers who were in this field for decades, and they still followed just recipes without understanding them.
So I'm not entirely sure the distinction is this clear. But of course, it depends on how we define "senior". A senior can be a developer who tries to understand the underlying reasons and has coded for a while. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first to-do is to make the data flow as "functional" as possible, and only then do the real refactoring. It helps a lot to prevent bugs.
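A sketch of what that first refactoring step can look like (hypothetical order data, not from any real codebase): straighten the data flow into pure stages before touching the real structure.

```python
# before: mutation scattered through one loop
def paid_totals_before(orders):
    result = {}
    for o in orders:
        if o["status"] == "paid":
            if o["customer"] not in result:
                result[o["customer"]] = 0
            result[o["customer"]] += o["amount"]
    return result

# after: each stage is a pure transformation of the previous one,
# so the data flow reads top to bottom
def paid_totals_after(orders):
    paid = [o for o in orders if o["status"] == "paid"]
    customers = {o["customer"] for o in paid}
    return {
        c: sum(o["amount"] for o in paid if o["customer"] == c)
        for c in customers
    }
```

Both produce the same result; the second version just makes each step of the flow visible, which is usually where the bugs hide.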
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World-level things, at least compared to Haskell, for example.
cwsx 10 hours ago [-]
I had to learn Prolog for a university paper and I have to agree; out of the dozen-ish languages I've had to learn, something just didn't "click" with Prolog.
There's no real value in this comment, I'm just happy to share a moment over the brain-fuck that is Prolog (ironically, Brainfuck made a whole lot more sense).
9dev 13 hours ago [-]
I wouldn't really try to equate arbitrary job titles awarded based on tenure with actual expertise; titles aren't consistently applied across the industry, and they are often awarded on conditions other than actual merit.
abustamam 11 hours ago [-]
There are a lot of very young developers with fewer years of experience than me who have tons more expertise than me.
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
dasil003 6 hours ago [-]
I vividly remember the moment this clicked for me. I had spent the better part of a decade being interested in programming and essentially learning recipes. It wasn't until I was a couple of years into a CS degree and starting to work professionally as a web developer that I finally had an epiphany about what software actually was, and the degrees of freedom it actually has. It's very hard to put into words because it was an internal phenomenon, but I can describe it as a more visceral understanding of what is meant by "the map is not the territory" and "all models are wrong, but some are useful". It's like: you can build anything in software; it's up to you to decide how to do it and make it relevant for a real-world use case.
Of course I was still super junior and had so much to learn, but from that point I could at least interrogate any pattern or best practice to understand why it existed and where it should or should not be applied.
genghisjahn 3 hours ago [-]
I've had conversations with people who wanted to learn how to code. I found that teaching someone how to code is a tedious experience. It's just a bunch of memorization and bafflement at how quickly someone else can do things at the keyboard. I've since come to realize that wanting to learn to code is NOT a good starting place. It's best to have a vision. What's the problem you want to solve? If writing software is a way to solve that problem... well, NOW we have something to learn around. We have a vision. We have a goal. And learning the syntax and CS concepts is no longer an end in itself; it's just an obstacle to get through to accomplish the vision. Bring enough of these visions to completion, and you'll find you've cleared a LOT of obstacles and, wow, you've gained a lot of software knowledge.
kharak 12 hours ago [-]
I've always had an excellent faculty for building models of abstractions and grasped the "physics" of a subject rather quickly, be it economics, biology, certain mathematical subjects, and more.
Then I met software and computer science abstractions. They all seemed so arbitrary to me that I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I have not developed a "physics"-level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. I'm wondering if anyone here has had a similar experience and was able to resolve it.
dragochat 9 hours ago [-]
your "physics" grounding is exactly why it feels so odd - software is by its nature anti-physicalist
math and logic are closer to a basis for software abstraction - but they were scary to business people, so a "fake language" was invented atop them - you have "objects" that don't actually exist as objects (they are just a type-based dispatch/selection mechanism for functions), and "classes" that are firstly "producers of things and holders of common implementation" and only secondarily work to "group together classes of objects"
jeltz 9 hours ago [-]
I feel that is a bit of a false history. OOP was invented by people trying to simulate physical systems (e.g. Stroustrup, the Simula people, and their contemporaries), not business people. Arguably it was popularized later by business people and enterprise Java developers, but that happened way later.
I do not think OOP ever really worked out well, as evidenced by it no longer being as popular and by people having almost entirely abandoned "Cat > Animal > Object" inheritance hierarchies.
9rx 3 hours ago [-]
This is also a bit of a false history. OOP was squarely invented with Smalltalk. The term was literally conceived for Smalltalk to describe its unique (at the time) programming model. While objects most certainly predate Smalltalk, it was Smalltalk that first started exploring how objects could be oriented.
OOP didn't really take off either, but mostly because it is hard to optimize and impossible to type.
spdionis 11 hours ago [-]
I have the opposite experience. Goes to show the difference between people.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with completely abstract information and processes, and when they clash with the real world I have a harder time reconciling them.
unsettledturtle 6 hours ago [-]
Chemistry in particular is taught very poorly in US middle and high schools. If anything, the curriculum actively hinders building that internal understanding.
"Chemical bonds fill the electron shells, which is why we have CO2. But don't worry about why carbon monoxide exists."
"Here's a formula to figure out the angle between atoms in a molecule. But it doesn't apply to H2O, because handwavy reasons. Just memorize this number instead."
Students don't gain an understanding of the subject, because the curriculum doesn't even try to teach it.
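For what it's worth, the "number to memorize" is recoverable: the ideal tetrahedral angle falls out of plain geometry, and water's measured ~104.5° is a deviation from it (lone-pair repulsion pushing the O-H bonds together), not an arbitrary constant. A quick check:

```python
import math

# ideal tetrahedral bond angle: the angle between lines from the center
# of a regular tetrahedron to two of its vertices, arccos(-1/3)
tetrahedral = math.degrees(math.acos(-1 / 3))
print(round(tetrahedral, 2))  # 109.47

# water's measured H-O-H angle is about 104.5 degrees: smaller than the
# ideal because the two lone pairs on oxygen repel the bonding pairs
water = 104.5
print(round(tetrahedral - water, 2))  # 4.97 degrees of "handwavy reasons"
```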
ryandrake 5 hours ago [-]
This was kind of infuriating about high school chemistry. We were taught that so much simply is the way it is, and that's that. Gold and mercury differ by one proton, so why is one a dense, yellowish metal and the other liquid at room temperature? Carbon and nitrogen sit right next to each other on the periodic table, so why are their chemical properties so different? Why are there so few elements that are ferromagnetic? We dove relatively deep into chemical bonds and isotopes, but glossed over fundamental things like why compounds with similar structures had seemingly random, unrelated properties.
jghn 8 hours ago [-]
This happened at an old employer of mine. We started to go down the FP road, veering off the standard OOP of the day. About 25% of the people picked it up immediately, about 50% got it well enough, and 25% just thought it was arcane wizardry.
Between that latter group and the bottom portion of the middle, it sparked a big culture war, eventually leading to leadership declaring that FP was arcane wizardry and should be eradicated.
carlmr 9 hours ago [-]
>teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease.
Beyond OO -> Functional, this applies everywhere else in computer science. If you understand the fundamentals, no new framework, language, or paradigm can shock you. The similarities are clear once you have a fitting world model.
_glass 6 hours ago [-]
there's actually a really good book that bridged it well for me when I was doing my bachelor's, A Little Java, A Few Patterns. it's from the authors of the famous Lisp books for grokking FP.
jmbwell 9 hours ago [-]
Indeed. Understand the principles and you can work with just about any tool.
d_sem 15 hours ago [-]
This resonates. Tips on how to build this skill?
ruszki 14 hours ago [-]
Fail, and try to understand why. Don't be quick with the answer. Sometimes it takes years. But it's crucial to want to improve, and recognize when the answer is in front of you.
Read about why programming languages have the structures they have. Challenge them; they are full of mistakes. One infamous example is the "final" keyword in Java. Or, for example, Python's list comprehensions. There are better solutions to these. Be annoyed by them, and search for those solutions. Also read about why the mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.
The same goes for "principles" and rules of thumb. Read about the reasons behind them, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. Not just at Hello World level; really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The good thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks on edge cases, and those are well documented.
Also, Effective C++ and Effective Modern C++ were eye-openers for me more than a decade ago. I can recommend them for these purposes. They definitely helped me lose my "junior" flavor. As far as I remember, they explain the reasons quite well.
necovek 10 hours ago [-]
I am curious about that nit on list comprehensions in Python: what do you mean, why are they a "mistake" of language design?
ruszki 6 hours ago [-]
So when they designed it, it wasn't that bad for simple cases. However, with more complex nested lists there isn't a clear data flow; it jumps from one place to another. The first term is especially problematic. It's not beneficial at all for modern IDE-based development. So in the end, this would be a better list comprehension in this sense:
`[state_dict.values() for mat to mat * 2 for row for p to p / 2]`
Or similar, where data flow is 1->2->f(2)->3->4->f(4). Where right now it is this lovely mess with one more repeating term:
`[p / 2 for mat in state_dict.values() for row in (mat * 2) for p in row]`
Where the flow is f(4)->2->1->3->f(2)->4->3
This is obviously not just a Python list comprehension problem. The simple for… in… loop has a similar problem; it's only better because the first term, `p / 2`, at least sits at the end.
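The proposed syntax isn't real Python, but a similar top-to-bottom data flow is available today by staging generator expressions (toy data and names, just for illustration):

```python
# a toy "state_dict": one 2x2 matrix as nested lists
state_dict = {"w": [[1.0, 2.0], [3.0, 4.0]]}

# flattened comprehension: the output term comes first, then the
# sources, so reading order and data flow disagree
flat = [p / 2 for mat in state_dict.values() for row in mat for p in row]

# staged generators: each line is one step of the flow, top to bottom
mats = state_dict.values()
rows = (row for mat in mats for row in mat)
vals = (p for row in rows for p in row)
halved = [p / 2 for p in vals]

assert flat == halved == [0.5, 1.0, 1.5, 2.0]
```

The staging costs a few intermediate names, but each stage can be inspected on its own, which is exactly the IDE-friendliness the flattened form lacks.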
zahlman 4 hours ago [-]
I'm struggling to even understand what you have in mind, because HN doesn't do Markdown formatting and asterisks are interpreted for emphasis across lines. But I've never really thought there was a problem with the syntax. To me it reads naturally, left to right: "A list ([) of the results from calculating whatever, (for) each of the (name) values that are (in) the (names) container". With multiple clauses, they're in the same order as the corresponding imperative code, which also makes sense. (Perhaps if "for" were spelled "where", it might not...)
korijn 14 hours ago [-]
Put yourself in a position where it is your problem/responsibility, where you cannot depend on another to do it for you. You'll be learning every day.
dalmo3 10 hours ago [-]
If your hair is on fire, you don't ask how hot.
lionkor 14 hours ago [-]
Not who you replied to, but: practice. Deliberate practice; not just writing the same apps over and over, but challenging yourself with new projects. Build things from scratch, from documentation or standards alone. Force yourself to understand all the little details for one specific problem.
gooseyard 22 hours ago [-]
By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf) which I haven't been able to stop thinking about.
I've been doing this for coming up on thirty years now, mostly at one large company, and I spent a significant number of hours every week fielding questions from people who are newer at it who are having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to translate your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on here makes much clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.
hathawsh 22 hours ago [-]
Isn't that interesting? The job of exploring a theory or model to such an extent that it can be expressed in computer code always seems to fall on the shoulders of a software developer. Other people can write specifications and requirements all day long, but until a software developer has tackled the problem, the theory probably hasn't been explored well enough yet to express clearly in computer code. It feels like software developers are scientists who study their customers' knowledge domains.
Twisol 21 hours ago [-]
> It feels like software developers are scientists who study their customers' knowledge domains.
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Terr_ 18 hours ago [-]
> so stifled when an e.g. product manager
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
eithed 10 hours ago [-]
This might be an indicator that the PM isn't doing their job; the PM should be able to answer your questions regarding what the business wants (= the people you're trying to serve). Developers, by the nature of interacting with the domain, do become experts in it, but really it should be up to the PM what the domain should be doing business-wise.
Jensson 10 hours ago [-]
If that is what a PM needs to be, then there aren't enough good PMs to warrant a PM role for most products, so just have software engineers do that work in most cases.
Edit: The main role of a PM is to decide which features to build, not how those features should be built or how they should work. Someone has to decide what to build, and that is the PM, but most PMs are not very good at figuring out the best way for those features to work, so it's better if the programmers can talk to users directly there. Of course, a PM could do that work if they are skilled at it, but most PMs won't be.
eithed 9 hours ago [-]
> not [...] how they should work
So that we're on the same page, what I think should be PM responsibilities:
If I have a user story: "As a customer I want to purchase a product so that I can receive it at my address" - PM defines this user story as they have insight to decide if such feature is needed.
The PM should then define acceptance criteria: "Given the customer is logged in, When they view the Product page, Then an 'Add product to basket' button should appear"; "Given the 'Add product to basket' button, When the customer clicks on it, Then a Product information modal should appear"; etc. The PM should know what users actually want, i.e. whether modals should appear or not, and whether the feature should be available to logged-in users only, or not.
How this will work shouldn't matter to PM; these are AC they've defined.
Of course, the process of defining AC should involve developers (and QA), because the AC should be exhaustive enough to deliver the given feature.
imperfect_blue 4 hours ago [-]
The problem, in my experience, is that most PMs don't add anything when it comes to drawing up the acceptance criteria.
In your example of order placement, the PM has no special knowledge of what makes a good customer order flow. Developers are usually far better at coming up with those, by dint of experience and technical knowledge of the current codebase, and can make the appropriate speed/polish trade-offs.
PMs act as an imperfect proxy for what the customer wants, making judgments off nothing more than their own taste. And though there are many great PMs, the taste of a PM is on average worse than that of developers and designers.
IMO the main business reason they exist is for organization accountability and ownership, despite the often negative value they bring.
LandR 13 hours ago [-]
This is why at my current place we are not supposed to do any dev without an SME on the call. We do the development and share the screen and get immediate feedback as we are working in real time! It's great.
BobbyTables2 21 hours ago [-]
Agree 100%.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
kstenerud 16 hours ago [-]
In theory, it's the same as in practice.
In practice, it isn't.
tsunamifury 14 hours ago [-]
Sorry this is just the interior trapped nonsense that engineers find themselves in. Please touch grass
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind blowing moments where they think they are the center of the universe.
Paracompact 13 hours ago [-]
I agree so much with the both of you, to the point it's difficult to avoid cognitive dissonance one way or the other.
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
0xpgm 12 hours ago [-]
We keep seeing things like cryptic error messages shown to end users simply because of the disconnect between the programmer and the end user.
If the programmer gets to intimately understand the user's experience software would be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
tonyedgecombe 5 hours ago [-]
>We keep seeing things like cryptic error messages shown to end users simply because of the disconnect between the programmer and the end user.
A lot of the error messages I'd write were for me, especially those errors I never expected to see.
The typical feedback I'd get from end users is "your software doesn't work". If they can send me a screenshot of the error I'm halfway to solving the problem.
hathawsh 3 hours ago [-]
I actually agree with this. Product designers and product managers are often essential and sometimes they do up to 99% of the work of figuring out how something should work. To accomplish that, they often do things well outside the role of a software developer. On the other hand, in my experience, only someone with a software development mindset seems to be able to complete the last 1% (or 10%, or whatever) that reveals and resolves certain kinds of logic issues.
necovek 12 hours ago [-]
You seem to be assuming a certain org structure with very clear, specialized roles. Many teams do not have this, and engineers are already Product Engineers. It sometimes even makes sense (whenever engineers dogfood their product, startups, or if it is a product targeting other engineers) and is not just a budget/capacity issue.
Similarly, by siloing the world model in one or two heads, you prevent team dynamics from contributing to a better solution. E.g. a product manager/designer might think the right solution to a privacy need is an "offline mode" without communicating the underlying need; engineering might then decide to build it with an eventual-consistency model (sync-when-reconnected) because that is easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from everyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
movpasd 10 hours ago [-]
Regarding the tension between symbolic representation and Naur "theory", I'd actually say they come from two different traditions, each providing two different theses. When writing them out I think it becomes a bit clearer how they interact and that they're not actually contradictory.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
nasretdinov 11 hours ago [-]
It's interesting that, the way you describe it, the world model itself is _not_ just a collection of words in our minds. I have a small theory of my own that "thoughts" in our brains aren't actually words at all (otherwise animals that don't talk wouldn't be able to make complex decisions). The words we "hear" in our heads, and which we perceive as our thoughts, are just a rough translation of those thoughts into words; they aren't the thoughts themselves. That is also why it's sometimes really hard to put complex (but correct) thoughts into words, and especially hard to adequately compare complex ideas during a regular conversation: on the surface a lot of ideas (especially in software engineering) "sound" good but are actually terrible. And yet there's no better way to communicate ideas than to put them into words, which is probably part of what makes good software engineering extremely difficult.
Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: https://feelingof.com/
gooseyard 21 hours ago [-]
hey thanks!!
lukebuehler 14 hours ago [-]
I keep saying this is the single most important article to consider when talking about AI-assisted software building. Everyone should read it. The question should always be: is a human building a theory of the software, or does only the AI understand it? If it's the latter, it is certainly slop.
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
psychoslave 16 hours ago [-]
>their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem
Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem in one perspective might be conceived as a non problem in an other, and be unrepresentable in an other.
LooseMarmoset 20 hours ago [-]
I think that this is actually a good thing. If everyone had the same internal world model, we would have very little innovation.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
bruce511 19 hours ago [-]
I agree. It's pretty easy to train based on facts, and even experiences. And learners can often take things in unexpected directions.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree that it needs to be done. But I can not really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
frgturpwd 12 hours ago [-]
Yeah, you can't get it out in "one session of conversation", but you definitely can under a different... context.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
> An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities.
"Transmissionism" is a term I've seen used to describe this.
The way I usually frame this is: if all expertise could eventually be distilled into verbal form, then years of experience would cease to matter, as it could all be replaced with a series of textbooks. Which we obviously know is not possible.
gfody 1 days ago [-]
this is why I only communicate in poetry
complexity is
not what you believe it is
please try listening
randysalami 1 days ago [-]
So cool. One reading is “complexity is not what you believe it is”. Another is “complexity is”… “not what you believe it is”. Seems similar but the difference is subtle. Even the “please try listening” line changes in both versions. One is confrontational, the other is empathetic.
entropicdrifter 23 hours ago [-]
Agreed. "complexity is" as a full sentence followed by "not what you believe it is" has a fundamentally different meaning.
Very cool
kstenerud 16 hours ago [-]
There is complexity
that can only be moved around,
not eliminated.
minikomi 9 hours ago [-]
Sometimes it's better
To keep it all in a clump
Than spread it about
SoftTalker 24 hours ago [-]
Reminded me of a colleague who wrote his email replies as haiku. It got old pretty quickly.
dragontamer 23 hours ago [-]
Like an old colleague
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
k__ 13 hours ago [-]
I'd say, on average, it's 50% what you say and 50% communication issues.
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
forlorn_mammoth 24 hours ago [-]
good thing LLMs solve this problem by assuming everything can be put into words and then convincing the world this is true.
My guy LeCun believes in deterministic systems describing reality even more than LLMs. He is literally a symbolic logic die hard.
mschulkind 21 hours ago [-]
This is surprisingly close to a personal theory I've been working on. I've been describing how to use AI to people as engaging the world model in their head, organization, or software.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
dogcomplex 23 hours ago [-]
Correct. One just has to realize that the cost of communication (and the context/memory lost along the way to train that understanding) is often just far higher than anyone has patience for. To fully understand the expert, they must become the expert. (or at least a hell of a lot closer than they were)
This is also why average people with little time to commit find it hard to realize the importance and depth of AI. It's a full-on university education to explore those depths.
crabbone 11 hours ago [-]
Another part of the equation is practice.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this all ethics thing out (took courses in university, read some books...)
I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Meanwhile, secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activity that has only the descriptive side, but no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify it, a novice programmer who has listened to every story told by a senior, memorized and internalized them, but still can't touch-type will be worse at the everyday tasks pertaining to their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more but less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute. There are cues we learn to pick up by reading product documentation which will tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, whether the company making the product will go out of business soon, or whether they will try to bait-and-switch, etc.
When children learn to do addition, it's not enough to describe to them the method (start counting with first summand, count the number of times of the second summand, the last count is the result), they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think about ourselves as being able to perform a task as soon as we understand the mechanism.
ChrisMarshallNY 19 hours ago [-]
Just wanted to say thanks for this.
Great thread.
coip 22 hours ago [-]
“Cursive knowledge”, as an old boss told me. Was incredibly ironic when he leaned into my misunderstanding.
zsoltkacsandi 1 days ago [-]
That’s very well put.
danieltanfh95 19 hours ago [-]
yep, as I was exploring in https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-... , the expert pays the "communication tax" to dumb down concepts that the listener can understand. There is a gap between domain understanding and what is being conveyed that is similar for human-llm interactions as well.
themafia 15 hours ago [-]
> AI can blow you out of the water at knowing more facts
Yea, but I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
jongjong 21 hours ago [-]
Great points. Words allow one to communicate an approximation of part of what one knows.
Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.
Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out the discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation; there is a lot of self-censorship, so people tend to maintain very different world models even though the surface-level ideas they communicate appear to be similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
whattheheckheck 1 days ago [-]
Well, here's an engineering problem: figure out how to mentor 10x the number of juniors.
the_snooze 6 hours ago [-]
You don't. You accept that social bandwidth is a real-world constraint that you work with, not magic away. That's real engineering.
necovek 9 hours ago [-]
Mentor 3 who need to mentor/"pair with" 3 each? ;-)
accidentallfact 10 hours ago [-]
I'm going to get downvoted to hell for this, but you described the exact reason why education is a waste of time.
necovek 9 hours ago [-]
I'll bite: is education not about starting with theoretical summary of the knowledge in the domain, and then applying it in practice and really feeling it work, be challenging, or not work?
The best educators I had had exactly that approach: you sometimes start with theory, but other times with challenges which make you feel the difficulty, and understand the value of the theory you are co-developing with the educator (they just have the benefit of knowing exactly where we'll end up, but when time allows, they do let you take a wrong turn too). Even if you start with theory, diving into a challenge where you are allowed not to apply the learnings should quickly tell you why the theoretical side makes sense.
As with everything in life, great educators are few but once you have them, you can apply the same approach yourself even if the educator is unable to steer you the right way.
If you never received this type of education, then what you received could arguably be called a waste of time.
toshikatsu-oga 9 hours ago [-]
[dead]
makbar890 1 days ago [-]
[dead]
CreepGin 1 days ago [-]
[dead]
cpursley 12 hours ago [-]
This sounds like a whole lot of copium from devs who don't want to bother with the effort of just writing stuff down, ie good documentation practices...
Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just wright shit down even if the first draft isn't great. Write it down, check it into the codebase.
necovek 9 hours ago [-]
I believe you are responding to a concern you are facing in your career with bad documentation (I would guess bad code too), but projecting that onto an unrelated topic: I believe both could be independently true or not.
cpursley 9 hours ago [-]
*write stuff. Siri dictation can’t be overhauled soon enough.
lnenad 1 days ago [-]
As a /senior/ developer I really dislike blanket statements. I've seen the same number of failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways to have eager/very open seniors drive systems into hard to get out corners. But then there are people that claim PHP5 is all you need.
bilekas 1 days ago [-]
I came to say something similar actually.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
ericmcer 1 days ago [-]
That doesn't sound as good in meetings. The person who can cut scope and get everyone to the "we did it" back patting phase makes everyone feel warm and cozy.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
hnlmorg 1 days ago [-]
This is where good leadership in the dev team is needed.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt, then use Jira (etc.) to your advantage and talk about the number of tickets you can close this sprint thanks to this engineering initiative.
If the development team and product teams goals are largely aligned then the problem with engineering initiatives is just how you explain them to the product team.
hilariously 1 days ago [-]
For a large enough problem you need a combination of enough skill (to do the job), enough foresight (to know what will likely go wrong and how much error budget you need), and skin in the game (so you don't just cut whatever sounds good, but instead keep what is truly needed). If you don't have all three of these, you usually are just talking out of your ass.
tetha 13 hours ago [-]
> There are times when this is good, there are times when actively trying introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another one.
Sometimes minimalism is the way, and you need to wonder whether the pain, workload, or lacking capabilities and features are actually problematic. Or sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner, and it enables learning and accumulating information about what we actually need.
Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full job on its own.
And sometimes adding something bigger is the way, if the effort of maintaining it is less than the effort and pain incurred by not having it. Or if we can ramp up the effort incrementally, while reaping benefits along the way. This can often be validated by doing a small thing first.
What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking CRUD code together in a web server most likely won't be enough in a year or two for the average development job.
necovek 9 hours ago [-]
I think this is more a matter of perspective, rather than original meaning.
I read the above as "avoid development that increases complexity needlessly" — and often, there is a desire to overcomplicate something that can be much simpler because the understanding is lacking.
"As much as they can" does not mean trying not to do any work, but trying to simplify the work to just what achieves the desired outcomes, and no more. This frequently means doing the improvement today.
notatoad 16 hours ago [-]
both of these things are equally important. every change will annoy somebody. every change breaks somebody's workflow.
preventing the unnecessary changes can help you get the political capital in your org to push through the changes that really need to happen.
empath75 24 hours ago [-]
I am an avoider and also a serial trend-hopper. You can do both!
lnenad 1 days ago [-]
Exactly.
hirako2000 1 days ago [-]
A sort of survivor bias. A VP ordered to use elastic search, because it worked well at his company before. Turned out it worked well for us. Listen to the VP to make technical decisions. And use elastic search.
giancarlostoro 1 days ago [-]
Reminds me of when the ELK stack was called just ELK (idek what it is now). We had a server we put it on, and after making the additional dashboards my manager wanted, we learned the limits of ES/ELK. It needs a ridiculous amount of memory, because it will shove everything into memory. Same thing when I learned that MongoDB indexing puts every item in memory as well, which is a yikes: why would you not want to index?
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
hilariously 1 days ago [-]
There's no high-performance database that won't take all of your memory (at least up to the size of the data) if you let it.
That's because it's much, MUCH faster to do it that way. Though if you can accept certain latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
giancarlostoro 1 days ago [-]
MySQL doesn't eat up all 8GB of my system when I need to query a table with indexed values; MongoDB seems to eat it all up.
vscode-rest 1 days ago [-]
You paid a hundred bucks for that eight GB of RAM; do you really want it to just sit there unused?
giancarlostoro 1 days ago [-]
No, but my manager was wondering why our website was slowing to a crawl.
vscode-rest 1 days ago [-]
Is the DB on the same host as the web server?
hilariously 1 days ago [-]
It is more likely they did not leave enough overhead for the host operating system, which is a classic issue.
giancarlostoro 1 days ago [-]
I don't really remember; to be fair, this was nearly 10 years ago now. Upon some googling now, I do see a way to limit just how much Mongo sucks up for data + index. I am curious whether it would have been a smoother experience, if this configuration was even available then.
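For reference, current MongoDB does expose such a cap for its WiredTiger storage engine. A minimal mongod.conf sketch (the 2 GB figure is purely illustrative, not a recommendation):

```yaml
# mongod.conf: cap the WiredTiger internal cache (holds data + index pages).
# By default MongoDB takes roughly 50% of (RAM - 1 GB); lowering the cap
# trades cache hit rate for a smaller footprint. The OS page cache still
# provides a second layer of caching for compressed on-disk blocks.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2
```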
fleroviumna 14 hours ago [-]
[dead]
hilariously 1 days ago [-]
If the data is < RAM size, and you read that data again and it comes off disk again, that's the slowest it can possibly be. There's a reason most databases implement a buffer cache (it actually makes writes insanely faster as well). But yeah, of all the ones I have tinkered with, MySQL is generally not a very good operational database.
Yokohiii 24 hours ago [-]
Production-grade multi-tenant databases want to run *solely* on RAM.
> why would you not want to index?
Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.
In the sense of the blog post: A senior with decent DB experience would have told you. ;)
tardedmeme 19 hours ago [-]
Everything "wants to" run solely in RAM, but we don't have infinite RAM, so a "production grade" database should also be able to fetch data from disk unless this is an explicit tradeoff. MariaDB and PostgreSQL do not require all indices to be stored in RAM. Obviously they can be accessed more quickly if they are in RAM but they are designed under the assumption they will often be stored on disk. It sounds like MongoDB is not, and given the reputation of MongoDB, this is as likely to be incompetence as it is to be a willing tradeoff.
Yokohiii 18 hours ago [-]
Every serious database that is designed to handle moderate to high traffic will expect you to have enough RAM to fit all data and indices. Relational DBs do a solid job when that's not the case, but it also sabotages the efficiency you could get from them. It will work for some time; if that's enough for you, that's fine.
I am not experienced with MongoDB, so I don't know whether the previous comment's reports were the user's fault or MongoDB's. But one thing is clear to me: complaining it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect it to just magically work. DBs are complicated beasts; you have to know how to deal with them.
tardedmeme 16 hours ago [-]
You certainly don't need to hold all data in RAM to serve "moderate" traffic. A modern hard drive can seek about 80 times per second, an optimized RAID array even more, and an SSD tens of thousands of times. To me a light load means up to about a request every second, a moderate load maybe 20 requests per second, and a heavy load hundreds or thousands of requests per second. Pessimistically, each (read) request takes 5-10 random reads to service, and almost every system is read-mostly.
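That back-of-envelope math can be sketched directly (the HDD and SSD seek rates and the 10-seeks-per-request figure are the pessimistic assumptions from above; the RAID number is my own placeholder):

```python
# Rough requests/sec a storage device can service if each request
# costs ~10 random reads (pessimistic assumption).
SEEKS_PER_REQUEST = 10

seeks_per_sec = {
    "single HDD": 80,      # ~80 random seeks/sec
    "RAID array": 320,     # illustrative multi-spindle figure
    "SATA SSD": 50_000,    # tens of thousands of random reads/sec
}

for device, rate in seeks_per_sec.items():
    print(f"{device}: ~{rate / SEEKS_PER_REQUEST:g} requests/sec")
```

Even the single spinning disk covers the "light load" tier, and the SSD clears the "heavy" one with room to spare.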
I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scaling until your app is on a trajectory to become one. Putting your database on an SSD already lets you handle several thousand concurrent users with ease.
Yokohiii 12 hours ago [-]
RDBMS are typically pretty good at keeping frequently requested data in RAM. This disguises the latency of disk access, and performance will heavily depend on access patterns. If you serve 1TB of data from a DB with 8GB of RAM and that is sufficient for your use cases, I won't stop you. If you expect low, predictable latency (<1ms) even on a 98/2 r/w system, then it's not worth the headache.
Of course everything depends on use case and constraints; I'm highlighting the extremes here. The initial confusion was why DBs require so much RAM. Traditional DBs are optimized around RAM; that's where they perform best. You can abuse that, but it's not the best they can be in terms of latency, predictability, and stability.
giancarlostoro 8 hours ago [-]
Potentially a mix of both, though MongoDB was still very young when we were using it. Places like Google were championing it, or rather places that can afford to burn a ton of RAM.
giancarlostoro 8 hours ago [-]
You mean NoSQL, which is slightly different and nuanced. This was in a shop that was mostly SQL, with the exception of me, the one junior developer using MongoDB and Elastic. Mind you, we got a lot of things done, and I learned a lot more about Mongo than I would like.
In all fairness this was my first job a few years ago as a developer, I deep dove MongoDB but I was also one of the only devs using it at this place.
My previous experience with MongoDB had been in college and more limited.
Izkata 22 hours ago [-]
For anything Lucene-based (Elasticsearch, Solr) this was a problem: some of the indexed data had to be transformed for another type of query to be efficient, and the transformed data was put into the Java heap and never released. I think the indexed data for searching was read straight from disk and was fine, but analysis queries needed the transformed version?
At some point they added the per-field docValues configuration option to do the transformation during indexing and store it to disk instead, so none of it has to be held in the heap. Instead you're supposed to rely on the OS disk cache, which handles eviction automatically, so you can run with significantly less memory, but still get performance improvements by adding memory, without any further configuration changes.
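For illustration, the per-field switch in a Solr schema looks roughly like this (the field name is hypothetical; Elasticsearch exposes the same Lucene feature as doc_values in its mappings):

```xml
<!-- schema.xml: with docValues enabled, sorting/faceting on this field
     reads a column-oriented structure from disk (via the OS page cache)
     instead of un-inverting the index into the Java heap at query time. -->
<field name="category" type="string" indexed="true" stored="false" docValues="true"/>
```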
quantified 1 days ago [-]
Pick the right use case. It has a super awkward, horrible UI for things like log analysis. Use Scalyr instead.
rdiddly 14 hours ago [-]
Congrats on being the third top-level comment at this hour, and the first one who seems to have read more than just the headline.
sisve 1 days ago [-]
Agree, context matters. As a senior developer you need to understand complexity, risk, upsides and downsides. Understand the business side.
Whether you are a startup or a big company that is already a cash cow makes a difference when changing a core feature of the product, etc. Context, context, context.
Ferret7446 1 days ago [-]
One of the side effects of the LLM boom is that it made it a lot easier to tell people that context is important
Aperocky 21 hours ago [-]
I think this is contrarian; I found the author's point clear in context. Obviously sometimes actions are warranted, but the balance today is skewed toward making everything more complex than it needs to be.
This does not mean we don't develop new products and services; it just means that when we do, we find the path of least overall entropy. It also applies to operations and tech debt reduction.
premature optimization is root of all evil
lwhi 1 days ago [-]
I think you may be missing the message the OP is trying to communicate.
The qualities were highlighted because they can all lead to better stability.
lnenad 1 days ago [-]
Why can't innovation bring better stability?
nine_k 1 days ago [-]
Innovation is change, and change is the opposite of stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
mpyne 1 days ago [-]
Being able to navigate change can provide stability in the long term though, at least as opposed to being resistant to change.
nine_k 1 days ago [-]
Yes, all stability in real life is metastability; it needs constant effort to maintain. A worthy innovation can lower this effort, or lower the risk of a catastrophic failure.
Complete stability is death.
mnsc 15 hours ago [-]
Resistance to change is very different from reluctance to change.
lnenad 1 days ago [-]
What are we talking about? Philosophically, yes. Factually, no. In the context of a system, innovation could be switching from one form that renders in 1 second to another that renders in 50ms. Stability isn't part of that equation.
nine_k 1 days ago [-]
Is this switching risk-free? Consider all these ancient computer devices that run high-stakes equipment for years and decades without change. An RPi could replace an ancient PDP-11, cost a fraction, consume a fraction of energy, be faster, etc. But it also may introduce new and unknown failure modes.
nly 22 hours ago [-]
The important thing is to raise the question and have the discussion. By asking the question, you're not precluding the experiment.
overgard 21 hours ago [-]
I mean blanket statements are bad and you don't want to be the last company running on Java 6, but all the same, it's equally bad to be the guys using the latest javascript build pipeline that came out three months ago and is undocumented.
zahlman 4 hours ago [-]
> I really dislike blanket statements.
... All of them?
someone654 1 days ago [-]
Was thinking the same thing, but then I re-read the section and noticed this:
> Yes, yes, of course this is simplistic.
It's an example, put to the extreme, to clearly communicate the idea. As with all things, the golden mean applies, which, as I understand it, is what the article argues for:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
jcgrillo 22 hours ago [-]
It's a tricky balance, and there's a nonlinearity in that it really depends on what technical risk you've already taken on. Like.. clever ideas are like children. A handful are fine, lovely even! But if you have more than you can adequately keep track of or properly nurture that's no good. So best to focus attention on the small number of clever ideas that actually matter for your business--the ones that differentiate you from all the other companies doing broadly the same thing as you.
slashdave 23 hours ago [-]
I mean, sure, reduced complexity is great, but... what about performance?
dickywad 1 days ago [-]
[dead]
ChrisMarshallNY 10 hours ago [-]
> They want to avoid development as much as they can.
One of my favorite .sigs was:
I hate code, and want as little of it as possible in my software.
I don't remember where I saw it, but it was a while ago. It's possible the author has an HN account.
One of the things that happens to "avoiders," is that they get attacked for being "negative." It can get career-ending, when the management chain is the "Move fast and break things" type.
I just stopped offering suggestions, after encountering that crap a few times, and learned to just quietly make preparations for when the wheels fall off.
I have spent my entire adult life, shipping, and shipping means lots of "not-shiny," boring stuff. But it gets onto shelves, and into end-users' hands. I was originally trained in hardware development, where mistakes can't be fixed with an OTA update. It taught me to "play the tape through," and make sure that I do a good job on every part of the project; which includes a lot of anticipating problems, and designing mitigations and prevention.
hirako2000 1 days ago [-]
Most proof of concepts I've seen get traction turned into production.
A rewrite?
I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
The article touches on responsibility and accountability. There is none for the risk taker, by definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than you sell it for.
The loop on the right: there are companies (two of them very popular these days) that took it to an extreme. You ship something fast, and since it only scales linearly you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how is that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
____tom____ 1 days ago [-]
> recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
Old quote: "There is nothing so permanent as a temporary hack."
This is why you need sufficiently senior engineering leadership (both IC leadership and management). If you have engineers who meekly do whatever a non-technical stakeholder asks then you have a vacuum of responsibility, and sooner or later things will blow up catastrophically and whoever was least adept at CYA will get blamed.
On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
Yokohiii 23 hours ago [-]
Yea, I think even a lot of decent devs are afraid to just say "no" to things. They don't even bargain to find a balanced solution that can be reasonably done in terms of architecture and time to production.
Yokohiii 23 hours ago [-]
I guess it's company culture? I had a job and we initially had quick solutions that went messy. We set a hard policy that every "quick and dirty" feature will have a follow up story that gets pulled into the following 1-2 sprints. Often it turned out that the feature didn't live up to expectations and we just disabled or deleted it, the other times we reviewed it and refactored it properly.
We were a highly autonomous team though and hardly had cadence complaints. But mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
allknowingfrog 1 days ago [-]
This problem definitely predates AI coding agents, though it may be exacerbated by them. The article essentially concludes with the ancient advice of "plan to throw one away". Well sure, I also read Mythical Man Month, but how do I convince the decision-makers?
scotty79 13 hours ago [-]
I think AI makes writing second (or third, or fourth) implementation way easier. So it may actually happen more often with the AI.
At this point the Zig implementation of Bun seems like one written to be thrown away. And it happened only thanks to AI.
onion2k 1 days ago [-]
> I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.
I work with a bunch of teams that complain that they're mired in tech debt all the time, and complain that it's a huge risk and it slows them down. Except I can see our incidents log and there aren't many incidents and none that can be attributed to running risky code in prod, I have our risk register that has no 'this code is old and rubbish and has past-EOL dependencies on it', and no team has ever managed to articulate how or even how much the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.
I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like their own work. And obviously the leadership team were pissed off about that, and now there's very little trust left.
There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
allknowingfrog 1 days ago [-]
Because the goal isn't "keep this exact version of the app alive and running". The prototype is never the whole application. If your only metric is incidents, then yeah, don't ever touch the code again.
You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
onion2k 24 hours ago [-]
> The prototype is never the whole application.
In a world where people (stakeholders, Product, and dev teams alike) want the prototype to be the full set of MVP features, this is not true.
hirako2000 14 hours ago [-]
[dead]
mlhpdx 23 hours ago [-]
Regarding the viability of rewrites of successful PoCs: Does the current environment change the math? How difficult would it be to overcome the inertia/hesitation/perception of slow, painful projects that may no longer be so?
__MatrixMan__ 1 days ago [-]
That's why you gotta write them in a language nobody else on the team has heard of.
sublimefire 22 hours ago [-]
A mention of a “rewrite” triggered me. Whoever does rewrites is effectively out of ideas about what to do next. It is an opportunity cost, and the team/company chooses what is more important; the rewrite is never at the top. So even promising or expecting such a thing is silly.
IMO it is a bit arrogant to assume it is more important to engineer a better version of a thing than to make money quicker and cut corners. In essence, it is better to have the problem of scaling a new product because it got traction than the problem of selling more copies of an already scalable thing.
drzaiusx11 20 hours ago [-]
I do "rewrites" for my day job all day every day; with as of late the goal being rewriting critical services to get past scaling plateaus.
Rewrites require an existential-level threat to pursue and should never be taken lightly. They must solve a real, verifiable need, backed by real-world data. Rewrites for rewrites' sake, or for some lofty or nebulous goal of "better" or "more maintainable" code, are doomed to fail and a waste of resources.
I've seen the worst of it, from your average monoliths with no separation of concerns to 1000s of lines of self-modifying assembly in dead architectures with no code comments containing critical business logic, etc.
The main rule is to not to bite off more than you can chew, which if I'm being honest you really only learn from fucking up or watching others fuck it up.
t-writescode 20 hours ago [-]
They said a Proof of Concept goes to prod. That’s not “rewrite the whole service that’s been built for months”. That’s “I vomited a neat thing over the weekend” -> now it’s in prod.
Hackathon and overnight oncall fixes ABSOLUTELY should be rewritten or production-hardened, but they very often are not.
empath75 24 hours ago [-]
After my first proof of concept went into production by surprise, I stopped building proof of concepts and started building MVPs.
That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
nullorempty 1 days ago [-]
What I found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your linked in profile, they don't look at you as a possible source of knowledge and expertise.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
asdfman123 1 days ago [-]
This is my frustration at my current job. There's so much silliness and no one cares about avoiding it.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have to re-validate the whole system.
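The cached fuzzy-match idea could be sketched roughly like this (an illustrative sketch only: the cache contents, the `0.9` cutoff, and the function name are made up, not from the actual system; in practice the cache would be prepopulated offline, e.g. by an AI pass):

```python
import difflib
from typing import Optional

# Hypothetical cache of known-good URLs, prepopulated ahead of time
# (e.g. by an offline AI pass), so no model call is needed at runtime.
KNOWN_GOOD = {
    "https://example.com/docs",
    "https://example.com/pricing",
}

def validate_url(url: str, cutoff: float = 0.9) -> Optional[str]:
    """Return a known-good URL: exact cache hit first, fuzzy match second."""
    if url in KNOWN_GOOD:
        return url
    # Fall back to a fuzzy match against the cache; None if nothing is close.
    matches = difflib.get_close_matches(url, KNOWN_GOOD, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

The point of the design is that the runtime path has no external dependency: if the model that built the cache is turned down, validation keeps working on the last snapshot.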
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
mdavid626 11 hours ago [-]
I feel you. Similar experience on my side. I think it might've been like this before, but AI coding tools made it worse. Everybody thinks they can do it better - when there is a problem, the coding agent can just fix it. Why bother building relationships with senior devs or with anybody?
Looking deeper into it: these people don't understand the underlying foundations anymore. Just keep building fast, without building proper mental models (that would take time).
lionkor 12 hours ago [-]
You need to advocate for yourself, because nobody else will, unless your manager is really good at his job.
Our work is largely very difficult to understand to outsiders, we need to write docs and have meetings to show what we have done. It's part of the job, and yes, if you don't do that, it doesn't matter how fantastic the software is that you wrote (sadly).
blastro 16 hours ago [-]
you've healed me - resonates
floro 10 hours ago [-]
As a junior I will share my perspective from the other side.
Companies have outlandish hiring practices. They want juniors who already know everything. That's why, in the eyes of a junior, admitting that you don't know something is seen as showing weakness to the company. Also, not knowing things will actively keep you from getting promoted.
I'm sure it's not like that everywhere but it's juniors playing the corpo game.
jake-coworker 4 hours ago [-]
One of my favorite senior developer stances is "I should be annoyed by how many questions you're asking me"
randusername 7 hours ago [-]
Plausible alternative explanations:
- Juniors are discouraged from asking for mentorship because they are under pressure to appear competent
- Juniors have internalized from bad experiences that seniors are not to be disturbed
- Juniors grew up in a world where nobody modeled mentorship as a possibility for them; a CS major probably learned async, online, parasocially, without much 1:1 face-to-face interaction
- Juniors don't know what they don't know just yet-- and it doesn't always work well for someone to try and teach them explicitly-- but once they figure this out they'll be more interested in reaching out
aspbee555 24 hours ago [-]
> So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
Seriously. It kills me to have so much knowledge and expertise that few people appear to care about, if not downright hate me for wanting to pass it on to others. It appears institutional knowledge does not have any value these days.
Shocka1 23 hours ago [-]
Wish I had you at my first engineering job at IBM. A couple senior devs there (not all) would get pissed when juniors tried asking them questions. Not only did it take a bit of courage to ask someone who had been there 20 years about something, but it was a 50/50 chance they were going to be an asshole to ya lol. Was a good learning experience for me - I go out of my way to mentor now.
JambalayaJimbo 23 hours ago [-]
All the senior developers I have worked with are absolutely allergic to coming into the office, working closely with junior developers, and in general talking to people.
Whereas juniors are eager to chat, have lunch with you , and share what they’re working on, the seniors are guarded and solitary.
Maybe that’s just my workplace though!
And yes, the office is important.
mgkimsal 18 hours ago [-]
In the senior realm here - would love to chat with folks over lunch, brainstorm, assist, mentor, guide, etc. Can't do that AND be expected to deliver code at a 'full time' expected pace. What I would be delivering is... some code, some guidance, some assistance, etc. I've seen inside enough places to know that many senior folks end up being guarded and solitary because the deadlines aren't ever set to accommodate that sort of work. You're a 'Senior Developer(tm)' and the measuring stick is... lines of code.
Orgs get what they measure for. If your team values that sort of interactivity and support, it will... observe it, measure it, and hire for that sort of person. I've seen groups evolve towards that, and they've been great, but it doesn't seem to be a default - most groups/orgs have to work towards it and keep working at it.
SchemaLoad 17 hours ago [-]
The last two jobs I've had ended up with teams spread across multiple offices and time zones. I don't hate the idea of coming in to the office, but every time I do I end up only talking with people from other cities on calls anyway.
That said, I completely agree. I learned most of what I know from being in the same room with senior developers and asking questions. Something that just isn't happening these days.
macintux 1 days ago [-]
I took a job in another state in large part because one of the interviewers was a highly skilled sysadmin that I wanted to learn from (I had basically backed myself into system administration as a career at my first job, a startup, so I didn't have a lot of people to lean on to learn my trade).
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
agumonkey 1 days ago [-]
Are the juniors you ran into psychologically obsessed with being self-reliant? Or too proud of their own ideas?
I also believe that some of seniors experience is flesh-level resilience. I'm no smarter than when I joined the industry, I just got used to being in the trenches, how to handle my own psychology, how all the easy-looking things are not and how the horrible ones aren't either.. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
Yokohiii 23 hours ago [-]
> Are juniors you ran into psychologically obsessed by being self-reliant ? or too proud of their own ideas ?
Honestly I have the feeling that this is often insecurity. It's easy to feel uncomfortable if you think you don't follow along.
Another issue is that juniors usually experience culture shock at their first jobs. So they more or less isolate and do things the way they learned them.
dnnddidiej 11 hours ago [-]
I'm not even confident I can mentor a junior well. Part of that is probably that mentoring is a separate skill (like management is), and so you need to get good at that, plus research the "many worlds" of their future paths rather than share your war stories. If that makes sense.
drzaiusx11 18 hours ago [-]
I'm sorry this has been your experience. There are folks out there open to learning from us seniors.
I've been a mentor off and on for the last few decades, and I've been really lucky to have some strong mentees. Some I've followed for a better part of a decade and are crushing it out there. All I can really say is that they're out there, sorry I don't have any more helpful to say around how to find them etc. I'll mull on that for a bit..
gib444 1 days ago [-]
Exactly my experience. You describe it more diplomatically than I do hah.
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts and none of history, politics, communication etc
I think a lot of it is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen
asdfman123 1 days ago [-]
I like to play an online strategy game, openfront.io. The way to win is to take out someone who is gaining power before they get too powerful.
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
judahmeek 3 hours ago [-]
I think it's odd that your company has a fairly decent promotion metric (those who seek to spread knowledge through docs & meetings get promoted) and you seem to want nothing to do with it while also complaining that your coworkers don't respect your opinion.
I kind of get it, as you have expressed that promotion is not your goal. However, organizational influence comes through promotion at your org, and only those with influence at your org can change that.
What do you think would be a better system, that decoupled promotions from influence & enabled you to provide your experienced opinion without getting into management?
dyauspitr 1 days ago [-]
I don’t think it’s the arrogance of youth. It’s just that this generation and honestly a big cohort of millennials are not used to gleaning information from people. A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.
randusername 7 hours ago [-]
Yes, and this isn't necessarily a moral failing.
It is a problem as old as human civilization that the old overlook that society itself changes and instead lament the willfulness of the young in abandoning the old ways.
It isn't like young people grew up surrounded by examples of mentorship and arrogantly chose otherwise. In the internet age 1-on-1 face-to-face instruction is rare. I feel really fortunate that I caught the tail end of it.
anthonypasq 5 hours ago [-]
It's simply true that the average person you talk to is going to be ...average. Or you could listen to John Carmack on a 5 hour podcast. This warps your perception of what the people around you can offer you.
I think younger people have maybe thrown the baby out with the bathwater, and you need some discernment about whose advice you can value and trust. But I've just been in many situations in my life where I've asked for advice and it's just been total shit.
"Wisdom of the elders" is overrated when society changes so rapidly, and not all the adults you know are the insightful village shaman.
I recall asking my grandfather what it was like to live through the JFK assassination and just receiving something to the effect of "oh yeah that was crazy and bad, I remember seeing it on the news." Follow-up questions produced no further insight. So you come to the conclusion: why bother with that when you can just read a book about the topic?
Johanx64 24 hours ago [-]
> A stunning number of people have been raised/educated solely by the internet. That’s the source for knowledge, not other people.
On the internet you can learn from, and sometimes interact with, the best of the best, so the bar for what constitutes an "expert" is raised much higher.
vogelke 4 hours ago [-]
I hope you're kidding. I've seen lots of 'Net people claim to be experts, and I wouldn't trust most of them to feed my cat.
drzaiusx11 17 hours ago [-]
To be quite honest I learned exactly this way myself, however nowhere near recently by any stretch of imagination; I learned through Usenet, bulletin board systems, IRC, and a heavy dose of (bordering on obsession) reading any and all technical manuals I could get my hands on from the local used book store.
I still vividly remember reading a z80 instruction set manual on a rainy day during summer vacation by a lake as a kid (maybe 14?)--writing my own assembly by hand in the margins for fun. TBH I probably still have that exact manual in storage somewhere. Had a green stripe down the front edge/binding iirc.
Back then I easily met folks like myself out there on the net, including many kids younger and smarter than me. It was awesome.
I do hope that some form of that 'net lives on in spirit somehow, given that the Internet I knew has largely fallen to corporate interests.
Now that I have my own kids, it's been painful to watch them have such an utterly different experience than I did.
Their Internet is based entirely on consumption and dark patterns designed to capture their attention, while providing nothing (to them) in return besides a dopamine addiction and body dysmorphia.
Johanx64 1 days ago [-]
For all I know maybe you are an expert, but as a general rule of thumb - people are sick of "experts" eager to share their "expertise".
It's simply the case that the supply of "experts" wanting to share "expertise" vastly eclipses the demand by several orders of magnitude.
I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.
So if people don't perceive you as an "expert" and don't go to you for answers, you simply do not register as one. Or they have a rather high bar that requires observable, undeniable artifacts (and I don't mean credentials, I mean software), and competition is rather fierce. There's simply an overproduction of people who think they are "experts", so you have to show unmistakable symptoms of being one to register.
rramadass 17 hours ago [-]
This is the key sticking point.
"It takes two to tango" i.e. junior developers must first put in some effort and then proactively seek out seniors with expertise.
It may be a cliché, but it's a truism nevertheless: juniors are simply not interested in putting in the necessary time/effort to gain knowledge systematically. They want everything quick, easy, and handed to them on a platter.
I think the main reason for this is that there is just too much out there to learn, and everything is propagandized as the most important and most indispensable. This swamps the juniors; they feel lost and try to keep up with everything, which is a fool's errand.
Juniors need to keep the following in mind;
1) Change their learning mindset as follows: browse a lot, read a subset, and study an even smaller subset.
2) Always focus on the essentials and not on the frills. This is determined by your specific goals/needs.
3) Be okay with not knowing everything. Do not base your self-worth on others' evaluation of you.
4) Do not compete with others. Do the best you can and always improve on your yesterday's self. As the adage goes "drops of water falling, if they fall continuously, can bore through iron and stone".
5) Be confident in your own intelligence. As Sherlock Holmes said "what one man can invent another can discover". What might seem impenetrable in the beginning will over time become clearer and easier when studied regularly.
6) Everything is dependent on Self-Effort modulated by Timing, Context, Means Employed and finally Random Chance (i.e. lady luck). Manage the last by factoring in its payoffs as part of your self-effort itself (i.e. hedging). Focusing on the above five parameters before starting on anything will guarantee success.
7) You can always short-circuit your studies and gain knowledge quickly by asking seniors with expertise to teach you. Your attitude and way of approach is very important here i.e. you must be sincere and committed.
dimaor 24 hours ago [-]
you have HN, there is always someone here, my friend :).
CharlieDigital 20 hours ago [-]
It's funny, I've been literally trying to convey these exact sentiments to my team over the last few days down to the:
> Need to build a whole new feature to test it? Have you tried putting a button in the existing UI and seeing if people click it?
Pretty much word for word.
It feels like engineers are collectively feeling the pain now that product has decided that engaging its mental faculties is no longer necessary: just build it and figure out the user persona and utility later... if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
nitwit005 23 hours ago [-]
This misses the basic problem of incentives. What "the company" wants doesn't matter, it's what the people making particular decisions want.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their jobs are on the line.
sublimefire 22 hours ago [-]
A typical example would be researchers, who are evaluated based on papers and the new stuff they put out into the wide blue world. But if you are on the product side this makes little sense, because you need to match “features” to the requirements expressed by the customers, and you will tell the researchers to stop pushing.
throwway120385 1 days ago [-]
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500 person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity" it's "add complexity when it makes sense" and even that question has a lot of nuance because sometimes the business just needs to survive for another couple of months.
hilariously 1 days ago [-]
Right, prioritization and transparency allow you to change the variables that people should be using to solve a problem (and if they don't, they are not good at the job). If you have two hours before a storm comes, you will be asking "will it take on enough water that I can't bail it out?" instead of thinking about your architecture.
The problem I see is management playing games with not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
Izkata 5 hours ago [-]
> Imagine you get asked to build something ambitious, and you say:
> “Sure, I’ll have the Speed version ready in 3 days. Then the Scale version in about 6 weeks.”
> They get what they want, speed and momentum. You get what you want, observation and design.
Except that 6 weeks is now blocking the next thing and you'll be pushed to drop it. So this doesn't really solve anything.
I was kind of hoping at the end they'd suggest getting the non-developers involved more so they can experience the pain points they're creating. Not entirely sure how that would work though.
hosh 1 days ago [-]
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.
There are other properties such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is one of the things I consider a differentiator between a senior and a staff+ developer.
bonesss 1 days ago [-]
“Complexity” understood as the immediate first impression a junior gets looking at some arbitrary facet is always “too much” and always bad.
“Complexity” understood as what’s gonna make development on this system fly easy and fast for the next 10 man-years de facto means side steps when naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
ahussain 11 hours ago [-]
Quality and speed are not diametrically opposed. A great engineer does well on both axes by building the minimal thing needed now in a way that is easy to extend in the future.
I have also seen projects go badly because the eng was trying to be perfect upfront. Whereas quickly getting to an MVP and then iterating tends to go better.
skydhash 22 hours ago [-]
> Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me.
Well said. Kent Beck’s Tidy First explores the slow process that can be summarized by this excerpt from his substack [0]:
“Valuable” lives on 2 axes:
Features—what the code does now.
Futures—what we can get the code to do once we learn the lessons of this set of features.
While there might be a time component to getting features out, it’s rarely urgent enough to forget about staying flexible and keeping a somewhat constant velocity.
TRADEOFFS! I think this is IT. Non-programmers imagine there aren't tradeoffs. As a programmer, one should eventually realise that every possible aspect of design is a tradeoff.
lwhi 1 days ago [-]
Many of these factors are directly influenced by complexity.
hosh 1 days ago [-]
They all influence each other to one extent or another.
And, the Cynefine Framework defines “complexity” a bit differently than the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefine labels as “complicated”.
The Cynefine complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefine complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefine is understanding that each of the domains has its own way of solving things and that often times, people use solutions and methods from one domain to solve problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
junto 23 hours ago [-]
Totally agree on this.
The use of “complexity” in terms of systems theory in comparison to “complicated”, is often misunderstood.
I also agree that it’s a really good framework for evaluating problems and then making decisions on potential solutions because each has its own set of approaches.
Small nit pick: it’s “Cynefin”, not “Cynefine”. The word is Welsh (Cymraeg). Roughly pronounced ke-ne-fin.
> "Software systems tend to be complicated, not complex, until you start getting into distributed systems."
These days, so much software is "distributed systems".
hosh 19 hours ago [-]
I don’t know at what threshold a complicated system becomes complex.
For example, at a certain level of scale, Kubernetes starts having emergent behavior.
On the other hand, it doesn’t take much to produce a complex system. The Boids simulation is a complex adaptive system in the form of a flock, yet each member of the flock concurrently follows only three basic rules.
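The three Boids rules can be sketched in a few lines of Python (an illustrative sketch only: the weights and the separation radius below are made-up values, not from any canonical implementation):

```python
# Rough sketch of the three classic Boids rules: cohesion, alignment,
# separation. Each boid updates its velocity only from its flockmates;
# flocking emerges from these local rules, with no global coordinator.
def boid_step(positions, velocities, i,
              sep_w=0.05, ali_w=0.05, coh_w=0.01, sep_radius=1.0):
    """Return the updated (vx, vy) for boid i based on the other boids."""
    px, py = positions[i]
    vx, vy = velocities[i]
    others = [j for j in range(len(positions)) if j != i]
    n = len(others)

    # 1. Cohesion: steer toward the center of mass of the other boids.
    cx = sum(positions[j][0] for j in others) / n
    cy = sum(positions[j][1] for j in others) / n
    vx += (cx - px) * coh_w
    vy += (cy - py) * coh_w

    # 2. Alignment: steer toward the average heading of the other boids.
    ax = sum(velocities[j][0] for j in others) / n
    ay = sum(velocities[j][1] for j in others) / n
    vx += (ax - velocities[i][0]) * ali_w
    vy += (ay - velocities[i][1]) * ali_w

    # 3. Separation: steer away from boids closer than sep_radius.
    for j in others:
        dx, dy = px - positions[j][0], py - positions[j][1]
        if (dx * dx + dy * dy) ** 0.5 < sep_radius:
            vx += dx * sep_w
            vy += dy * sep_w

    return vx, vy
```

Running this per boid per tick is the whole simulation; the emergent flock behavior is nowhere in the code, which is the point being made above.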
lwhi 24 hours ago [-]
Isn't Cynefin a framework designed to sell consultancy services?
I think complexity is a byword for 'unintentionally complicated' here.
nomel 1 days ago [-]
You missed one of the most important ones: usability
hosh 1 days ago [-]
I was not trying to be exhaustive. I am sure you can come up with more characteristics.
goosejuice 1 days ago [-]
> The avoider, the reducer, the recycler.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
bob1029 1 days ago [-]
The best strategy is to frame your argument from the perspective of the customer:
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
Are much less likely to succeed in the business contexts that I have experienced so far.
goosejuice 1 days ago [-]
We may be looking at this differently based on our own experience, fwiw. I also should have said added complexity, or the lack of it (from poor planning).
That can work too, e.g. when demonstrating the pain a customer will experience when something complex is poorly designed (like some b2b workflow), but it's less visceral than telling your internal stakeholders all the extra work they'll have to do if it's rushed. Even the best of your peers are a bit selfish. The business side has a lot of incentives around quick turnarounds so it's easy to overlook the downside.
Imagine such a scenario. You're in healthcare and working on a feature that will add new data model for some kind of clinical information.
You could say:
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Yeah that very well may prevent W from churning, though hopefully you think about how it will affect other clients too.
Or, you could say
"If we get this data model wrong, and the value set is ambiguous, you (product/sales/cs) will have to reach out to every single customer and clarify what they meant by x/y/z if we wish to migrate it with any degree of accuracy in the future."
That's drawn from experience but I'm sure there are a lot of parallels to that in other industries for any kind of data. Migrating data is a pain in the ass for everyone, but often it can be the people pushing for a quick solution that suffer the most when that goes wrong.
This kind of stuff is why commission structures should consider churn / residuals. Bad incentives make for hastily made decisions.
Yokohiii 23 hours ago [-]
> you will earn some trust
Building trust is yet another quality of a good senior. By that I don't mean being buddies with the CEO, but earning trust from everyone by making good decisions, making good arguments, and delivering as promised. Even giving a junior a warning and letting them fall flat is a good trust-building exercise.
imp0cat 16 hours ago [-]
> You want to bake me a whole birthday cake? Just put a candle on my sandwich.
I think these people also need to learn that, in the eyes of the customer, a sandwich with a candle is in no way comparable to a birthday cake.
empath75 24 hours ago [-]
My experience with avoiders and reducers and recyclers is that they want to avoid _my_ idea and do _their_ idea instead.
goosejuice 21 hours ago [-]
Seems rather tangential
romaniv 19 hours ago [-]
>We could call this the ‘Speed’ version of the system. It’s not meant to be understandable, the goal is getting things good enough to take it to the market for feedback.
AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something that's high-level and geared towards building the mental model of the real system into something akin to copy-editing a random piece of software without any coherent mental model involved. Moreover, prompting allows you to gloss over some essential complexity of the task without getting any sense of the scope of the effort actually involved. In other words, people end up failing to make necessary decisions and simultaneously get bogged down with unnecessary ones.
In short, fast feedback loops are only useful if there is actual feedback involved.
Prilog 6 hours ago [-]
The “AI accelerates uncertainty reduction but increases system complexity” framing is probably one of the better ways I’ve seen this explained.
What’s missing from a lot of the “AI replaces developers” discussion is that generating code is only a tiny part of operating production software. The hard part starts after deployment: debugging, understanding cascading failures, maintaining consistency, and safely evolving systems over time.
Feels like the next wave of tooling won’t just be “AI that writes code faster”, but AI that helps absorb and reduce operational complexity after the code exists.
mgaunard 1 days ago [-]
Even with AI, there is a clear difference between juniors and seniors.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
hun3 19 hours ago [-]
Avoiding problems outside your business problem domain is what lets you guide the AI more effectively towards building the right thing.
mgaunard 13 hours ago [-]
Making things fit for purpose is not avoiding problems in my book.
If anything, quite often it's introducing more problems, because we know we'll run into them and they need to be addressed.
AI is sometimes quite lazy and refuses to solve the hard problems (sometimes making funny excuses like it would take weeks) until you make it explicit that it's important they are dealt with.
jake-coworker 4 hours ago [-]
I don't love the title - the article is about differing perspectives/incentives in a company, not about senior developers' inability to communicate
That said, I think everyone will agree both extremes are failure modes - moving too fast and not having things work right, or building things too correctly and never building something people want.
IMO the two stereotypes in the article generally hold true, and the job of each in a healthy company is to present the trade-offs, and collect enough data to experimentally validate. And when you disagree, let the decision maker (CEO) decide, but disagree and commit
t43562 1 days ago [-]
I found that the proposers of features "want everything" because they don't know what is critical - they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
panny 20 hours ago [-]
It's the XY problem. The customers tell sales they want Y, rather than stating their problem X which they think Y will solve. Sales runs breathlessly to the dev team and demands we implement Y. Now scale this up to 10 customers or 100 customers. They all have the same X but come up with independent Ys.
You see the problem immediately. Sales/marketing didn't do their job sussing out what X is and wastes dev time with Ys. And worst of all, write once, support forever. Each one off Y has to be maintained for the special snowflake customer that uses it. None of the Ys actually work well for all the customers with problem X so you end up drowning in "technical debt" spent to create them all.
If your marketing department leads the company, I've discovered the best option is to just quit. Go find a job at an engineering company.
j_w 8 hours ago [-]
This is why the first thing you should do as a dev when somebody tells you that they want Y feature is to ask why.
Non-developers have no clue WHAT they want, but they know WHY they want it. The why is much more important to know, because the requestor has no clue how software works and imagines bad solutions.
t43562 6 hours ago [-]
In some cases it's just because they think that any "missing thing" might be the one that causes customers to reject the offering. So they "must" have everything. It's the lack of knowledge that's the problem and they don't feel that they're going to get more than one chance/feedback cycle to learn from.
The product managers proposing things have their reputation tied up in them so a major feature is a chance at fame for them.
It's as if everyone gets "a turn" at using the development department once in a while and they want to make the utmost of it, knowing that the instant their feature is "finished" the spotlight will be gone from them for months.
j_w 5 hours ago [-]
> might be the one that causes customers to reject the offering
This is a valid why for a feature though. Clients want it.
Now, the concern here is do the clients want it, or does sales/product THINK the client wants it?
xyzelement 23 hours ago [-]
The article covers that under the imperative of discovery. Learn what works quickly because you may not know what the core part is otherwise.
There's ways to navigate it.
p0w3n3d 14 hours ago [-]
“I found this new tool and it’s pretty cool ...”
yup
“This company <company totally unlike the one we’re in> does things this way, so …”
agreed
“Here, look at this HackerNews post that says this is best practice, we should probably …”
sir/m'lady, we're at war from now on. This is the only reason I come here. Of course I don't take everything uncritically, but the number of experts on this forum is damn high, and this is the only forum in the last 10 years that has helped me grow so much
npodbielski 12 hours ago [-]
At war with whom? About what?
lionkor 12 hours ago [-]
I really dislike the "ah this is my favorite senior" language. The author would have done well to simply leave this "rating" of different kinds of people out, and it would not harm the article. In fact, it would improve it.
People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
6 hours ago [-]
luodaint 12 hours ago [-]
The article is all about technical communication — diagrams, architectural discussions, code snippets. The more difficult piece to communicate is product sense: which user feedback indicates a genuine trend, when a feature request is a workaround for an underlying issue vs. the issue itself.
It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. It takes hundreds of customer discussions, invalid hypotheses, and years of experience building judgment about whether this is the right solution at the right time.
The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
dnnddidiej 11 hours ago [-]
This is an excellent article. Thought-provoking, and I'll remember the 2 loops from here on.
> What if we had one system just for speed?
Like a beta? It would take incredible discipline from the business and customers not to treat that as production software and demand 99.99% uptime and zero bugs.
rudnevr 10 hours ago [-]
I'm trying to avoid a snarky comment like "oh of course it's a senior dev's fault again", so I'll tell a story.
When I started around 20 years ago, my junior dev experience was pretty harsh - I was taught, not always in a correct or respectful manner, to do this and not to do that. Overall though, it was absolutely useful and formative. Senior engineers are rarely abusers, they communicate real issues, better or worse, and it was on me to figure out why and how to work the right way. Also we were raised in a pretty receptive attitude to the "old" technology - from Tcl and Smalltalk to Ada, Perl, etc. It was admired classics rather than just old shit.
Surprisingly, this didn't translate too well when I found myself in a mentoring position. Starting from maybe 2015, the situation changed. The newer generation of devs felt much more entitled to social games, higher salaries, and opinions rather than having authentic engineering interest, and my mentoring experience reflected that.
No amount of structured communication would change that; even the cold pressure of production failures and very specific negative feedback from management normally doesn't work. They're also more lenient about prod screw-ups, and often use the "everyone can make a mistake" excuse to excuse even more mistakes.
The thing is, most of them don't want to hear for any reason.
Like many of my peers, I learned humility and accepted it as is, only using my advantage in expertise when it comes directly to my area of responsibility, and to avoid the hassle imposed by my eager younger teammates; for example, I usually parse prod logs and settings with the command line while the younger guys are trying to push through loki/grafana query limitations.
I'm fine and safe, and my job is no less secure, I guess, because someone has to fix bugs etc. The companies less so, but as long as they don't care why would I.
It will be interesting to see this generation wiped out by the next one. I guess they won't be in very good shape, because the foundation they built upon (namely quickly changing libraries and language supersets like React/TypeScript/some JVM flavour and, I hope, Kafka) will be replaced by the next tech fashion.
chrisweekly 1 days ago [-]
I may be missing something, but the "left" and "right" loops strike me as slightly different words for the same exact thing.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
DougWebb 1 days ago [-]
The company is providing existing services to existing users for payment.
The company is offering potential new services to current and potential users in the market and getting feedback on how valuable those new services might be.
dirtbag__dad 19 hours ago [-]
> I don’t like senior developers who like trying new technology. I like ones that avoid more complexity.
I guess the author has never worked on a dog shit system with no tests at all and constant downtime.
I have worked with “complexity averse” engineers who would rather fix the edges over and over again, than roll up their sleeves and just get the job done.
I just don’t believe that using new tools is at odds with avoiding complexity.
Sometimes you have to take it to the chin, and get to use the new shiny thing along the way to move much faster.
throwaway74628 4 hours ago [-]
Any can can be kicked down the road in the name of pragmatism, sure, but IME the kind of tech debt you’re describing often comes about from lack of fear and respect for complexity and the damage it can do.
Before you can reduce complexity, you must first manage it by breaking the problem into smaller parts through categories, boundaries, and contracts; so many software engineering best practices concern this aspect.
That is to say, the rolling up of sleeves usually involves first adding tests for existing behavior and then doing more or less what's described. The tools involved are incidental to this; "catch as catch can".
sfink 14 hours ago [-]
The safest answer a sales person can give is "yes".
The safest answer an engineer can give is "no".
overgard 21 hours ago [-]
Hits home for me; although a lot of times adding complexity is not about your opinion as a senior developer but rather what the business wants. I've definitely worked jobs where I helped create microservice kubernetes nightmares, and while this was partially my fault for wanting to play with shiny things, a lot of this was just "this is what the business wants and you have the expertise to do it", and I'd kinda shrug and go OK. I worked one job (small business) where an executive once leveled with me that the reason they wanted the complexity is because it looked good to investors, not because it was an actual need.
FWIW though the idea about a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "have a space where we can build things fast with horrible practices" isn't some AI era innovation, it's what smart companies have done for decades.
BiraIgnacio 1 days ago [-]
One could say that in order to be a senior developer in any area, more-than-good communication skills are required.
nathanielks 1 days ago [-]
Unfortunately that's not the case. There are many senior and above level engineers out there who are unskilled communicators but very technically skilled.
lwhi 1 days ago [-]
In which case maybe they're best suited to not leading a team.
CobrastanJorji 1 days ago [-]
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
ahussain 11 hours ago [-]
I’m curious about this scale vs speed distinction.
Every codebase includes parts that are more experimental, and parts that are more core. My sense is that AI can help on both of these fronts (I.e building rapid prototypes on the fringes and hardening the core with better test coverage).
buster 9 hours ago [-]
> Forget maintaining stability, AI is a downright destabilizer. It worsens understandability, fixability, debuggability, teachability, guaranteability, all the bloody bilities.
This is just an assumption and the whole article falls flat if this turns out to be wrong.
In my limited (as everyone else's) experience, working with agentic AI needs good documentation, good specification (spec driven, you know it's all the hype nowadays). Those alone lead to much improvement. Now take into account that probably your senior dev also has more time to think about the big picture, to improve all those little things that were a nuisance in the past but now are a mere "Claude, fix that" in a worktree away.. I would not bet on the assumption of this article.
heisenbit 15 hours ago [-]
While I agree that adding code contributes to complexity and is problematic, there is lots of code in existing code bases that is overly complex due to outdated requirements or less-than-perfect human coders. The current flood of AI-driven security fixes demonstrates that AI can be pretty good at detecting security edge cases. It is not inconceivable to use it to also reduce code complexity.
devhouse 21 hours ago [-]
Good read. The big elephant in the room though: you likely won't purely hand-code the Stable version for much longer. So where's that split? Prototype vs. Prod? Feature Flags? Canary? 2 codebase nightmare? All of this already exists.
The message that hits for me is that of AI being a destabilizer while simultaneously being an accelerator. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
narag 8 hours ago [-]
Why junior bloggers fail to make me read their articles
lioeters 5 hours ago [-]
"Why experts fail to X", written by a non-expert.
khalic 8 hours ago [-]
hot take number 9712956028926...
21 hours ago [-]
kevdoran 19 hours ago [-]
I feel like I was totally on board until the conclusion about one fast system and one stable one. It's not really possible in practice, once a customer starts paying for something, even a vibe coded app by a sales person, it's now a stable system.
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
danhorner 1 days ago [-]
I tripped over the double-entendre of the teaser quote and then found it ironic that the author is a copy writer.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
jrumbut 23 hours ago [-]
I think it's possible that this idea would work as a communication/branding strategy for senior developers, though I don't think it's strictly true.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
jinkuan 1 days ago [-]
The polarization between the speed concern and the scale concern on a team is interesting.
Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast but developers are more important than ever in making sure the non-functional aspects are taken care of
abhisek 19 hours ago [-]
I partly agree. Agents are not going to replace senior devs. Exactly for the internal context and the decision making that comes with it.
But senior devs are also expected to have a compounding effect even pre-AI. Writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project and many more. All of these would compound the dev team.
I think the same will happen with agents working on a org specific paved path set by senior devs.
himata4113 19 hours ago [-]
They will replace (and already have replaced) low-performing senior developers, because a single high-performance senior developer can do a lot more than they used to.
I have personally noticed this a lot: multiple people can work on the same problem, but the more senior developers get way more mileage out of AI compared to those who are early in their careers.
Another difference I've noticed is how many agents one can keep running without losing awareness.
It generally just raised the bar on what management will expect from developers, which will result in a shrinking workforce. The only ones who will benefit are AI companies and upper management, since fewer employees means less management, so lower management will get screwed too.
simplyluke 19 hours ago [-]
> will result in a shrinking workforce
Jevons paradox is already rearing its head, I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of multiple-times more productivity, is the logical choice to fire half, or set way more ambitious goals? One assumes engineers are hired because their outputs are worth more than their cost. If outputs, at least for those capable of wielding new tools, are higher, so is the value of that employee to you.
The universal thing I'm hearing from friends at small-mid-size tech companies, and experiencing myself, is that there is way more work and demand for it from senior leaders than they're capable of with their current teams.
There are only so many things to work on; planning and orchestration become the bottleneck.
strix_varius 7 hours ago [-]
> Why? Because they hunt a singular monster in professional software development: complexity.
I love this sentence.
invalidSyntax 18 hours ago [-]
It sounds like a perfect idea on paper until you notice that junior devs will not be able to learn about stable code. Unless AI gets good enough to write stable code, or good enough that no human has to look at the code, the next generation will face a bigger problem than now. Well, it's AI that started it, so let's make AI take responsibility... Oh, they can't. Now what?
ionwake 9 hours ago [-]
I'll never forget being fired from an aerospace company for designing a new system that was basically a linear diagram, compared to the highly complex nightmare web of mystery my boss had designed for our current system, which simply didn't work.
I was given a chance to redesign it, and when I failed to add back the complexity I was let go.
To this day I reckon the higher-ups are still hearing the same age-old problems and excuses from their underlings about a system with an utterly useless design. The guy in charge, rarely in the office, calmly explains it's a fantastic implementation, and the new coders just can't work with it / operate it well because they suck.
I am not bitter, if anything it just made me terrified of being C-suite of any large company, knowing it would be almost impossible to understand why your company is failing.
They are very remotely related yet somehow very close.
1 days ago [-]
pragma_x 1 days ago [-]
Interesting article. I appreciate the range of perspectives here, and the overall pitch to keep the most experienced in frame along side new-fangled advancements (AI).
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
Speed… speed… velocity… speed. All I hear about these days. Every meeting.
Honest question: does high velocity / being a first mover ever really pay off these days?
I don't feel like having the first AI slop to the market has actually paid off for anyone? Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
drbojingle 21 hours ago [-]
They fail to communicate in the same way we fail to download a copy of "the truths of the world as we know it" into every child's brain. It's easy to say "look both ways when you cross the road", but speech is so one-dimensional. It's a slow tape reel, and that's just the encoding.
don-code 1 days ago [-]
I agree with the author's premise - that one feedback loop optimizes for speed, and the other for scale - but I don't think the market is bearing the conclusion - that AI should be utilized to enable more rapid experimentation, where we better scale what works.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily-generated AI features are causing customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
block_dagger 23 hours ago [-]
It seems to me that the author fails to extrapolate the effects of recursive self-improvement. The only things preventing 95% engineer obsolescence will be compute/energy constraints and the speed of adoption, which can take years for large infrastructure companies. But it's coming.
halfcat 22 hours ago [-]
Cuts both ways. If supply chain attacks also get recursive self-improvement, everyone's going to be working in air-gapped facilities. Departments also need to be air-gapped from one another. And each team air-gapped. And so on.
There’s a speed limit, because the faster you go the less room for error you have. It’s the same as being heavily leveraged with debt. If you have a cash investment and it drops by 50%, you can just wait. If you’re leveraged 100-to-1, a 1% drop forces liquidation and wipes you out.
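The arithmetic generalizes: at leverage L, your equity is gone once the position drops by 1/L. A trivial sketch (the function name is mine, purely for illustration):

```python
def wipeout_drawdown(leverage: float) -> float:
    """Fractional price drop that erases all equity at a given leverage.

    With leverage L, a position of size L * equity loses L * equity * drop,
    which equals your entire equity exactly when drop = 1 / L.
    """
    return 1.0 / leverage

# Unleveraged cash: only a total (100%) loss wipes you out.
print(wipeout_drawdown(1))    # 1.0
# Leveraged 100-to-1: a mere 1% drop forces liquidation.
print(wipeout_drawdown(100))  # 0.01
```

Same shape as the speed argument: the more aggressively you move, the smaller the error that kills you.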
brador 9 hours ago [-]
Survivor bias. The ones who did were fired.
1 days ago [-]
robin64 16 hours ago [-]
I enjoyed reading this, and I agree with the underlying message: communicating better with our audience.
I think the framing started in the right path and then took a slightly wrong turn.
Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.
And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
augment_me 1 days ago [-]
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
tracker1 1 days ago [-]
I do a bit of both... I pay attention to new tools, libraries, and languages, but will rarely recommend them initially. That said, I also tend to fight complexity to an extreme degree; KISS/YAGNI are my top enterprise development keystones.
dotancohen 14 hours ago [-]
I stopped communicating my experience-derived lessons when I discovered that 1. it cheapened the perception of "my genius", and 2. nobody wants to hear it anyway. From non-tech workers for whom I'd write a bat or bash script, to engineers for whom I'd debug a complex race condition - they all just want the answer and care nothing about how I got it.
Fine, then, I'll keep the experience to myself.
____tom____ 1 days ago [-]
I feel this is about as accurate and relevant as if I were to write an article on senior copywriters.
wagwang 22 hours ago [-]
Depends on the product, but in many cases you can't actually decouple the complexity, because the complexity is the product. There are times when the archaic flow needs to work for some stupid compliance reason.
1 days ago [-]
doxeddaily 16 hours ago [-]
I actually think the article makes some pretty interesting points. It's not about the name of it though.
orisho 22 hours ago [-]
Shouldn't a senior developer strive to eliminate complexity while increasing velocity? The two do not contradict. Reducing complexity can increase velocity.
roughly 1 days ago [-]
This is well-put, but the problem comes when you’ve got leadership looking at what appears to be a fully-functioning version of the product that the market is clearly indicating to them is sufficient to drive revenue. Budgeting the 6 weeks or whatever to translate from “the working version” to “the trustworthy version” is a hard pitch.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
davebren 17 hours ago [-]
The loop on getting slop out to market quick in order to get feedback is already flawed. If you don't understand the problems of your customers well enough to come up with a coherent vision for how to solve them you shouldn't be the one doing the product design or making high level business decisions in the first place.
There's a place for prototyping and experimental features but now agile has cultivated extreme learned helplessness and everything is an A/B test because there's no longer any ability to judge whether something is good or bad based on a holistic vision.
There's a lot of opportunity in being the manager who can still see it
wewewedxfgdf 23 hours ago [-]
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
No-one says this.
deadbabe 23 hours ago [-]
They say it quietly whenever there is a workforce reduction.
alecco 1 days ago [-]
This is engagement bait. I almost fell for it.
xyzelement 23 hours ago [-]
What does that mean? The article expressed something that seems to be really true and I hadn't heard expressed this clearly.
egorfine 12 hours ago [-]
> this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can
And push an insurmountable pile of technical debt onto the successor.
Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.
However, in certain industries it is no longer the right approach for the job. In modern frontend development, if you did not update your codebase for a couple of months, the codebase falls so far behind that it becomes way more expensive to push an upgrade than daily minor updates of packages would have been. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, technical debt will mount.
xyzelement 23 hours ago [-]
Just wanted to say this writeup made tangible a real thing - a truly clarifying way to think about it.
someone654 1 days ago [-]
> Your thoughts, senior software developer?
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation done on the senior side, red-team/blue-team style.
kimjune01 6 hours ago [-]
we lack the shared data structure for it
panny 1 days ago [-]
I can/have done this without AI and it tends to be disastrous. Management declares we need X fast. Okay, we can build that really fast, but it won't scale. Management says fine, just build it. We do. Management now wants to build Y fast. But wait, what about X? Never mind, just build Y now. Okay, we're building Y, and X collapses... because it wasn't built to scale. Now we're being called in at 2 am to fix X while also expected to ship Y tomorrow. Sure, they'll glow you up and tell everyone what a hero you were for coming to the rescue at 2 am, but on that six month performance review, the blowup is used as a reason to withhold raises and promotions. They don't lose any sleep of course, just you, the developer.
dyauspitr 1 days ago [-]
Probably because unlike apprenticeships a senior developer isn’t an owner. This creates a situation where imparting knowledge means you have less time to do your own packed work stack.
einpoklum 1 days ago [-]
Irrespective of the linked post, let me say why I (being sort-of-a senior developer) fail to communicate my expertise. In no particular order:
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then the people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
lenerdenator 20 hours ago [-]
I'm a "senior" developer.
Want me to communicate my expertise? Give me some time to actually do it.
greenchair 1 days ago [-]
Wow, I'm only done with part one and the author pegged me to a T.
casey2 13 hours ago [-]
If you're a senior Go player and you think robots can't play, I'm suspicious of your expertise.
Literally what people thought after Fan Hui (2-dan) was beaten. For humans software requires ingenuity and creativity. Computers can cheat that; in fact computers ALWAYS cheat that to beat humans. NTP as a method of cheating is slightly more general than, say, board evaluation, so it's less efficient for the same problem, but scaling laws show that with enough compute NTP can beat humans at chess (or most other arbitrary games, in real time).
npodbielski 13 hours ago [-]
The second part of the system that the author proposes will not work for most medium and small companies. From what I saw, the people who ran those companies, the owners for example, looked at those devs like hacks trying to extort them for money. They angrily ground their teeth but put up with it because they needed them to build the software that actually makes the money.
Now, with so-called AI, they will mostly slap something kinda-working together in a few days and then maybe get hacked, or double-invoice some customer from time to time... They will learn of those problems the hard way. Or maybe they won't, because it will be a mostly-working emailing system and nobody will care if it loses 2% of the emails because of some bug.
Nevertheless, either the Stable/Scale version of the software will never happen, or it will be seen as unnecessary, or it will become a thing only after a catastrophic failure.
Anyway, I do not think it will work out the way the author hopes: everybody cares about speed and money, and making money quickly without effort is the ultimate unicorn the entire world is after.
Those complaining developers just stand in the way.
aa-jv 13 hours ago [-]
Here's a mental exercise - do you immediately think you know what this command does?
PING
Junior developer: PING is used to check if a host is reachable by a network
Middle developer: PING constructs and sends ICMP packets to an address
Senior developer: what machine, what OS?
Junior manager: Don't care, ask a techie if you need to do something technical
Middle manager: Ask <techie's name> about it, I know he has great experience with it
Senior manager: PING is used to check if a host is reachable by a network
Senior developers fail to communicate their expertise because that expertise is developed and formed by asking more questions than giving answers, and managers fail to understand the capabilities of "their techies" because managers see question-asking techies as counter-productive, and attempt to route around them. Managers only want answers; developers know the value of asking deep questions.
Thus, AI.
(BTW, PING is a command that produces a distinct sound on the Oric-1/Atmos computers, and it is thus an Onomatopoeia.. I know this, because I am a Senior Oric-1/Atmos Developer who knows what lies at #FA9F, how it works, what the 14 bytes are for, and so on.. because I once asked the question, "how does PING go 'poooinnng' but ZAP go 'zap'?")
AI: <asks billions of questions in a second>PING is ..
dragochat 11 hours ago [-]
> I don’t like this kind of senior developer [...] not my wavelength.
Bro & I would not get along well =)))) But the article IS good stuff.
psychoslave 16 hours ago [-]
Interestingly, the article puts complexity management vs. uncertainty reduction.
But reduction is narrower than management, which is narrower than organization.
Also, uncertainty is part of complexity. Being able to isolate what is deemed predictable, under clearly identified premises, is the best that can be hoped for on that matter. It means that one strategy can then be applied to protect the stable core, and another strategy can be tried on what is unknown (known and unknown unknowns).
hmokiguess 22 hours ago [-]
apex predator of grug is complexity
complexity bad
say again:
complexity very bad
you say now:
complexity very, very bad
aussieguy1234 21 hours ago [-]
As a senior developer, I achieved last night what I thought was impossible with all the anti-bot (including bot detection) tech that gatekeeps much of the internet.
An AI agent using a web browser like a human. I used various stealth technologies to achieve this. I set it off on a research task for me and it saved me $30 on a purchase by finding the best price. It's Jeff Bezos's worst nightmare: visiting amazon.com and ignoring all the product placement ads.
It had multiple tabs open, did searches in multiple places, opening products and checking sites... it looked just like a human doing the same task.
This I can assure you would not have been possible without my expertise. I had to be very careful to remove all bot signals from the browser, including going to browserscan.net to check. Once done, most captchas were never shown to the agent. There is a NodeJS codebase involved that I wrote by hand.
I searched through the code of the browser automation framework I was using, looking for ways to make it look more human. I had AI help with this part, but had to confirm everything and pull the agent up when it suggested bad ideas.
Most of the work was architectural, including making sure my browser was easy for the agent to use.
I'm going to add 2captcha as a next step, to solve the few captchas that it still encounters (as I still do sometimes as a human).
I'm thinking of open sourcing it, but I'm not sure if it's a good idea: if it became widespread, it might encourage the adoption of even more invasive anti-bot measures.
nyeah 1 days ago [-]
This is copy. I'm only interested in content.
xyzelement 23 hours ago [-]
What does this mean?
nyeah 9 hours ago [-]
The article explains what copy is. It's advertising writing, manipulation rather than factual communication.
a_c 1 days ago [-]
You can't force people to feel what you feel. One can (pr|t)each without experiencing; others can only mimic or rebel. That's how a cult is formed.
dcchambers 1 days ago [-]
In 2026 the answer is "job security"
rvz 1 days ago [-]
The unspoken observation on why this happens: it is almost always political, to make themselves more valuable in the organization and harder to fire / lay off.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company an extra amount of $$$. So it is tempting to lay them off for a yes person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup, instead of being difficult to work with for the sake of a performance review.
iJohnDoe 1 days ago [-]
FTA: “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers that wanted to complain about IT, cybersecurity, DevOPs, cloud architects for getting in their way and if they only had administrator access then they could get everything done themselves because they are experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
bigfishrunning 1 days ago [-]
Now they *think* they can create the apps themselves. I say let every CEO and business administrator try; business will fail, everything will get shitty, and eventually somebody somewhere might learn something. Let 'em cook.
mschuster91 1 days ago [-]
> Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice within a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
> Ah, well, it can’t yet do the one thing senior developers still do. Take responsibility.
If only higher-ups would recognize that. Instead we see left and right mass layoffs, restructurings and clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
austin-cheney 23 hours ago [-]
It's all relative. There is no baseline for expertise in software, so instead it's whatever self-serving quality some sociopath on the other end favors.
JohnMakin 1 days ago [-]
I don't necessarily disagree with this conclusion, but the way it is written has a lot of AI prose smell that was extremely distracting for me.
alwa 1 days ago [-]
I’m inclined to take the author at their word that they’re a copywriter by trade.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
srcreigh 1 days ago [-]
I've found myself using AI rhetorical styles. Mostly in PRs. The whole "not just X, Y" pattern hooked into my brain.
tmaly 1 days ago [-]
I didn't get the AI vibe from it. At some point we are just going to have to get used to most stuff being written to some degree by AI.
There will be different shades of usage and maybe we draw a line somewhere in there.
jewel 1 days ago [-]
Also the consumption of AI-generated text could be having an influence on the tone of how people write.
So even if AI was not used to write an article, it could "smell" like AI to someone who consumes less of it.
ThrowawayR2 1 days ago [-]
The written word is how people interact with LLMs. Clarity and precision in writing results in more effective prompting of LLMs. It is just as possible that leaning heavily on AI writing will be seen as a marker of not being natively skilled enough at writing to prompt LLMs effectively, because of the GIGO principle.
SpicyLemonZest 1 days ago [-]
There's no fundamental reason that I have to read random blogposts from people I don't know. I do it today because I find it to be an enjoyable way to learn more about my profession and explore various perspectives on it. If I stop finding it enjoyable because too many people write their posts with AI, I'll stop reading these kind of blogs altogether, in the same way that I (and I suspect many commenters here) do not read even the most lovingly crafted Linkedin posts.
yesitcan 1 days ago [-]
Let’s do the exact opposite of what this person is saying. Resist AI slop.
tolerance 1 days ago [-]
You have to be able to distinguish the scent of LLMs from the scent of Gary Halbert.
zzzeek 1 days ago [-]
I'm either the biggest idiot in the world or this person is a terrible "copywriter". I found this post to be nearly unintelligible: "You can't explain away someone else's problem using your own problems." WTF does that mean? This would be a good place to put some very simplistic examples of what they mean, but they don't. Is that because they're trying to be succinct? Clearly not, as the post rambles on and on anyway. I hate posts that are both 1. not explaining their concept and 2. super long-winded. That's a problem.
Are we just trying to say, "use AI for prototyping and customer demos that aren't important to be mature, use senior devs to develop and maintain the real products"? You could just say that then...? I also disagree with that as how AI should be used: AI is valid to include as a tool across all forms of development - it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).
However, I've seen developers who have been in this field for decades, and they still just followed recipes without understanding them.
So I'm not entirely sure that the distinction is this clear. But of course, it depends on how we define "senior". Senior can mean developers who try to understand the underlying reasons and have coded for a while. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first todo is to make the data flow as "functional" as possible, then do the real refactoring. It helps a lot to prevent bugs.
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World level things, at least compared to Haskell for example.
No real value in this comment, I'm just happy to share a moment over the brain-fuck that is Prolog (ironically Brainfuck made a whole lot more sense).
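The "make the data flow functional first" refactoring step described in the comment above can be sketched in Python. The function and field names here are hypothetical, purely for illustration:

```python
# Imperative version: mutation is threaded through the loop, so every
# statement can depend on any earlier one.
def total_price(orders):
    total = 0.0
    for order in orders:
        if order["qty"] > 10:
            price = order["price"] * 0.9  # bulk discount
        else:
            price = order["price"]
        total += price * order["qty"]
    return total

# Same logic with a "functional" data flow: each stage is a pure
# transformation, so stages can be inspected, tested, or moved
# independently before the real refactoring begins.
def total_price_fp(orders):
    def unit_price(order):
        return order["price"] * (0.9 if order["qty"] > 10 else 1.0)
    return sum(unit_price(o) * o["qty"] for o in orders)
```

Once the flow is expressed as pure stages, relocating the discount rule becomes a local change rather than loop surgery.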
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
Of course I was still super junior and had so much to learn, but from that point I could at least interrogate any pattern or best practice to understand why it existed and where it should or should not be applied.
Then, I met software and computer science abstractions, they all seemed so arbitrary to me, I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I did not develop a "physics" level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
math and logic are closer to a basis for software abstraction - but they were scary to business people so a "fake language" was invented atop them - you have "objects" that don't actually exist as objects, they are just "type based dispatch/selection mechanism for functions", "classes" that are firstly "producers of things and holders of common implementation" and only secondarily also work to "group together classes of objects"
I do not think OOP ever really worked out well as can be evidenced by it no longer being as popular and people having almost entirely abandoned "Cat > Animal > Object" inheritance hierarchies.
OOP didn't really take off either, but mostly because it is hard to optimize and impossible to type.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with handling completely abstract information and processes and when they clash with the real world I have a harder time reconciling.
"Chemical bonds fill the electron shells, which is why we have CO2. But don't worry about why carbon monoxide exists."
"Here's a formula to figure out the angle between atoms in a molecule. But it doesn't apply to H2O, because handwavy reasons. Just memorize this number instead."
Students don't gain an understanding of the subject, because the curriculum doesn't even try to teach it.
Between that latter group and the bottom portion of the middle it sparked a big culture war. Eventually leading to leadership declaring that FP was arcane wizardry, and should be eradicated.
Besides OO -> Functional this applies everywhere else in Computer Science. If you understood the fundamentals no new framework, language or paradigm can shock you. The similarities are clear once you have a fitting world model.
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java. Or, for example, Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.
The same with "principles" or rules of thumb. Read about the reasons, and break them when the reasons cannot be applied.
And use a ton of programming languages and frameworks. Not just at Hello World level: really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages its type inference is magic. Ask why. The good thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks with edge cases, and they are well documented.
Also, Effective C++ and Effective Modern C++ were eye openers for me more than a decade ago. I can recommend them for these purposes. They definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
`[state_dict.values() for mat to mat*2 for row for p to p/2]`
Or similar, where data flow is 1->2->f(2)->3->4->f(4). Where right now it is this lovely mess with one more repeating term:
`[p / 2 for mat in state_dict.values() for row in (mat * 2) for p in row]`
Where the flow is f(4)->2->1->3->f(2)->4->3
This is not just a Python list comprehension problem obviously. The simple for… in… has a similar problem. It’s only better, because the first term `p/2` is at the end.
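For readers less steeped in Python, the ordering complaint above is concrete: the output expression comes first, but the `for` clauses read left-to-right in the same order as the equivalent nested `for` statements, so the data flow zig-zags. A minimal sketch (the shape of `state_dict` is assumed here just to make the example runnable, and the comment's extra transformation stage is dropped to isolate the clause-ordering point):

```python
# A small stand-in for the comment's state_dict: names mapped to
# nested lists (matrices) of numbers.
state_dict = {"w": [[2, 4], [6, 8]]}

# Comprehension: output expression first, then loops in nesting order.
halved = [p / 2 for mat in state_dict.values() for row in mat for p in row]

# The equivalent explicit loops, in the same left-to-right order:
halved_loops = []
for mat in state_dict.values():
    for row in mat:
        for p in row:
            halved_loops.append(p / 2)

assert halved == halved_loops == [1.0, 2.0, 3.0, 4.0]
```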
I've been doing this for coming up on thirty years now, mostly at one large company, and I spent a significant number of hours every week fielding questions from people who are newer at it who are having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to represent your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader which is similar to your own. In other words you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but its not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether its writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think that Naur's conception of what is actually going on here makes a lot clearer the role that senior engineers can play in the long-term function of software companies, if it's something they enjoy doing.
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
Edit: The main role of a PM is to decide which features to build, not how those features should be built or how they should work. Someone has to decide what to build; that is the PM. But most PMs are not very good at figuring out the best way for those features to work, so it's better if the programmers can talk to users directly there. Of course a PM could do that work if they are skilled at it, but most PMs won't be.
So that we're on the same page, what I think should be PM responsibilities:
If I have a user story: "As a customer I want to purchase a product so that I can receive it at my address" - PM defines this user story as they have insight to decide if such feature is needed.
PM should then define acceptance criteria: "Given customer is logged in When they view Product page Then 'Add product to basket' button should appear", "Given 'Add product to basket' button When customers click on it Then Product information modal should appear" etc. - PM should know what users actually want, i.e. whether modals should appear or not, whether this feature should be available for logged-in users only, or not.
How this will work under the hood shouldn't matter to the PM; these are the AC they've defined.
Of course the process of defining AC should involve developers (and QA), because the AC should be exhaustive enough to cover delivering the given feature.
In your example of an order placement - the PM has no special knowledge of what makes a good customer order flow. Developers are usually way better at coming up with those by dint of experience and technical knowledge of the current codebase, and can make the appropriate speed/polish trade-off.
PMs act as an imperfect proxy for what the customer wants, making judgements off nothing more than their own taste. And though there are many great PMs, the taste of a PM is usually worse than that of developers and designers on average.
IMO the main business reason they exist is for organization accountability and ownership, despite the often negative value they bring.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
In practice, it isn't.
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
If the programmer gets to intimately understand the user's experience, the software will be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
A lot of the error messages I'd write were for me, especially those errors I never expected to see.
The typical feedback I'd get from end users is "your software doesn't work". If they can send me a screenshot of the error I'm halfway to solving the problem.
Similarly, by siloing the world model in one or two heads, you disable the team dynamics from contributing to building a better solution: eg. a product manager/designer might think the right solution is an "offline mode" for a privacy need without communicating the need, the engineering might decide to build it with an eventual consistency model — sync-when-reconnected — as that might be easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from anyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
Or maybe I'm just a little bit insane. Or both.
Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: https://feelingof.com/
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem in one perspective might be conceived as a non problem in an other, and be unrepresentable in an other.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree that it needs to be done. But I cannot really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
"Transmissionism" is a term I've seen to describe this
https://andymatuschak.org/books/
complexity is
not what you believe it is
please try listening
Very cool
that can only be moved around,
not eliminated.
To keep it all in a clump
Than spread it about
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice works almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
My guy LeCun believes in deterministic systems describing reality even more than LLMs. He is literally a symbolic logic die hard.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
This is also why average people with little time to commit find it hard to realize the importance and depth of AI. It's a full on university education exploring those.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this all ethics thing out (took courses in university, read some books...)
I made an unexpected discovery reading Jonathan Haidt's... either Righteous Mind or the Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Meanwhile secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activity that has only the descriptive side, but no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify it, a novice programmer who listened to every story told by a senior, memorized and internalized them, but still can't touch-type will be worse at everyday tasks pertaining to their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more but less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute. There are cues we learn to pick up by reading product documentation which will tell us if the product will work as advertised, whether the product manufacturer will be honest or fair with us, whether the company making the product will go out of business soon, or whether they will try to bait-and-switch, etc.
When children learn to do addition, it's not enough to describe to them the method (start counting with first summand, count the number of times of the second summand, the last count is the result), they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think about ourselves as being able to perform a task as soon as we understand the mechanism.
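The counting method described above can be sketched in a few lines, as a toy illustration of how small the gap is between "understanding the mechanism" and being fluent with it:

```python
def add_by_counting(a, b):
    """Add two non-negative integers the way a child first learns to:
    start counting at the first summand, count up once for each unit
    of the second summand, and the last count is the result."""
    count = a
    for _ in range(b):
        count += 1  # one step of "counting on"
    return count

add_by_counting(3, 4)  # → 7
```

The description fits in a docstring, yet a child (or a novice in any skill) still needs dozens of worked examples before the method becomes reliable.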
Great thread.
Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.
Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation, there is a lot of self-censorship, so actually people tend to maintain very different world models even though the surface-level ideas which they communicate appear to be similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
The best educators I had had exactly that approach: you sometimes start with theory, but other times with challenges which make you feel the difficulty, and understand the value of the theory you are co-developing with the educator (they just have the benefit of knowing exactly where we'll end up, but when time allows, they do let you take a wrong turn too). Even if you start with theory, diving into a challenge where you are allowed not to apply the learnings should quickly tell you why the theoretical side makes sense.
As with everything in life, great educators are few but once you have them, you can apply the same approach yourself even if the educator is unable to steer you the right way.
If you never received this type of education, then what you received could arguably be called a waste of time.
Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just write shit down even if the first draft isn't great. Write it down, check it into the codebase.
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach towards trying out new things would be different than a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways to have eager/very open seniors drive systems into hard to get out corners. But then there are people that claim PHP5 is all you need.
> Ah, baby, this is my senior developer. The avoider, the reducer, the recycler. They want to avoid development as much as they can.
There are times when this is good, and there are times when actively trying to introduce an improvement is the best way forward. A good senior is able to recognise when those times are.
Now combing through analytics to determine whether or not what we did was actually good? Less warm and cozy.
Is the improvement likely to reduce maintenance overhead (and thus cost)? Or improve performance allowing for fewer services running (and thus reducing cost)? Or reduce bugs that force people out of a workflow (eg in an online shop, thus fixing it increases sales)?
Or if it’s just tech debt then use Jira (etc) to your advantage and talk about the number of tickets you can close of this sprint due to this engineering initiative.
If the development team and product teams goals are largely aligned then the problem with engineering initiatives is just how you explain them to the product team.
This is what I was thinking - I'd say the biggest step up a developer can make is to recognize that sometimes you need a bit of one approach, sometimes a bit of another one.
Sometimes minimalism is the way, and you need to wonder if the pain, workload, or lacking capabilities and features are problematic. Or sometimes adding the smallest possible thing is a good way, as long as we don't paint ourselves into a corner, and it enables learning and accumulating information about what we actually need.
Sometimes buying a thing is a good way, if you can find a good vendor and a tool fitting your use case, and especially if the effort of doing it on your own is high. This commonly occurs in security, because keeping up to date with the ongoing vulnerability and threat landscape can be a full-time job on its own.
And sometimes adding something bigger is the way, if the effort of maintaining it are less than the effort and pain incurred by not having it. Or if we can ramp up the effort of the thing incrementally, while reaping benefits along the way. This can be validated often by doing a small thing.
What AI will do, in my opinion, is push the bar further in this direction. Cozily hacking together CRUD code in a web server most likely won't be enough in a year or two for the average development job.
I read the above as "avoid development that increases complexity needlessly" — and often, there is a desire to overcomplicate something that can be much simpler because the understanding is lacking.
"As much as they can" does not mean trying not to do any work, but trying to simplify the work where it achieves desired outcomes, and just about! This frequently means doing the improvement today.
preventing the unnecessary changes can help you get the political capital in your org to push through the changes that really need to happen.
I bet there's money to be made for building a drop-in to either of those two that requires less memory, would save companies a bundle, and make other companies a bundle as well.
That's because it's much, MUCH faster to do it that way, though if you can accept certain types of latency trade-offs for throughput, something like turbopuffer can do wonders for your costs.
> why would you not want to index?
Because if you don't need an index it wastes RAM, as you've learned. Maintaining indices also has a cost. Index only what you need.
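To make the trade-off concrete, here is a small sketch using SQLite from Python's standard library (table and column names are made up; the same principle applies to MongoDB or any other database): the query planner falls back to a full scan without an index, and each index you add costs memory and write overhead in exchange for turning that scan into a search.

```python
import sqlite3

# In-memory toy database with a single table of fake orders.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders (customer_id) VALUES (?)",
                 [(i % 100,) for i in range(1000)])

# Without an index the planner scans the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()

# After adding an index, the same query becomes an index search --
# but the index itself now consumes space and slows every write.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7").fetchall()
```

Printing the plan details shows `SCAN` before and `SEARCH ... USING INDEX` after, which is the whole point: index the columns your queries actually filter on, and nothing else.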
In the sense of the blog post: A senior with decent DB experience would have told you. ;)
I am not experienced with MongoDB, so I don't know whether previously reported complaints were the users' fault or MongoDB's. But one thing is clear to me: complaining that it uses too much RAM without knowing the reasons for it is a user problem. A common mistake is to set up a DB and expect that it just magically works. DBs are complicated beasts; you have to know how to deal with them.
I think these are realistic expectations for most apps. Obviously the likes of Netflix and Uber get orders of magnitude more, but 99.9% of apps aren't a Netflix or an Uber, and you don't have to optimize for scaling until your app is on a trajectory to become one, and putting your database on an SSD already lets you handle several thousand concurrent users with ease.
Of course everything depends on use case and constraints. I highlight the extremes here, the initial confusion was why DBs require so much RAM. Traditional DBs are optimized around RAM, that's where they perform best. You can abuse that, but it's not the best they can be in terms of latency, predictability and stability.
In all fairness this was my first job a few years ago as a developer, I deep dove MongoDB but I was also one of the only devs using it at this place.
My previous experience with MongoDB had been in college and more limited.
At some point they added the docValues configuration option per-field to do the transformation during indexing and store it to disk instead, so none of it had to be stored in the heap. Instead what you're supposed to do is rely on the OS disk cache, which handles eviction automatically, so you can run with significantly less memory but get performance improvements by adding memory without having to change any configuration further.
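For reference, this is roughly what such a field definition looks like in a Solr schema (the field name here is illustrative; check your Solr version's documentation for the exact attributes):

```xml
<!-- Faceting/sorting data for this field is read from disk-backed
     docValues instead of being uninverted into the JVM heap; the OS
     page cache handles eviction automatically. -->
<field name="category" type="string" indexed="true" stored="false"
       docValues="true"/>
```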
This does not mean we don't develop new products and services; it just means that when we do so, we find the path of least overall entropy. It also applies to operations and tech-debt reduction.
premature optimization is the root of all evil
The qualities were highlighted because they can all lead to better stability.
Innovation can reduce pain though, if the current pain is strong enough. A stable stream of failures in production can be the kind of "stability" you want to disrupt.
Complete stability is death.
... All of them?
> Yes, yes, of course this is simplistic.
It's an example, put to the extreme, to clearly communicate the ideas. As all things, the golden mean applies, as I understand the article argues for:
> the design of the 'Scale' version is influenced by what worked and what doesn’t work in the 'Speed' version of the system.
One of my favorite .sigs was:
I don't remember where I saw it, but it was a while ago. It's possible the author has an HN account.
One of the things that happens to "avoiders" is that they get attacked for being "negative." It can become career-ending when the management chain is the "Move fast and break things" type.
I just stopped offering suggestions, after encountering that crap a few times, and learned to just quietly make preparations for when the wheels fall off.
I have spent my entire adult life, shipping, and shipping means lots of "not-shiny," boring stuff. But it gets onto shelves, and into end-users' hands. I was originally trained in hardware development, where mistakes can't be fixed with an OTA update. It taught me to "play the tape through," and make sure that I do a good job on every part of the project; which includes a lot of anticipating problems, and designing mitigations and prevention.
A rewrite?
I recall a few times everyone promised, if this gets promoted then we will rewrite it from zero. Never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than you sell it for.
The loop on the right. There are companies, two of them very popular these days, that took it to an extreme. You ship something fast, and since it only scales linearly, you go raise money. Successful companies, countless users, some of whom even pay. Who's to blame? The senior developer, or simply someone reasonable who asks: how is that sustainable, what's the way out of this? Those people are fired, so whoever's left is a believer.
Old quote: "There is nothing so permanent as a temporary hack."
On the other hand, almost any business problem can be solved in a reasonable way that doesn't send your system through any terrible one-way doors if you zoom out enough and ask enough whys. Of course not every place allows engineering to do that, but the ones that don't aren't able to retain senior folks because they will just go somewhere where their judgment is valued. Sometimes technical debt is the right thing for the business, but sufficiently senior engineers can set things up so there is always a way out. But what you can't do is uphold the purity of the system above the business problem. The systems are paid for by the business, so if you lose sight of that then you've lost the plot and the basis for your influence.
We were a highly autonomous team, though, and hardly had cadence complaints. But mostly because all the other departments were lagging. Except marketing; marketing always has "ideas".
At this point Zig implementation of Bun seems like one written to throw away. And it happened only thanks to AI.
Why would you do that though? If you have a working 'prototype' that's handling the demand, has the required features, and doesn't really need to be rebuilt (except to appease the sensibilities of the developers), why would you spend time and effort on that? That makes no sense. The fact it's a prototype or a 'proof of concept' is essentially irrelevant if you can't enumerate what the actual problem with it is.
I work with a bunch of teams that complain that they're mired in tech debt all the time, and complain that it's a huge risk and it slows them down. Except I can see our incidents log and there aren't many incidents and none that can be attributed to running risky code in prod, I have our risk register that has no 'this code is old and rubbish and has past-EOL dependencies on it', and no team has ever managed to articulate how or even how much the tech debt slows them down. They shouldn't really claim to be surprised that no one wants them to spend time 'fixing' a problem that apparently has no impact.
I've also seen the opposite case where a team spent months refactoring an app that they wrote before it launches. They wrote it, then decided they could make it 'better', and spent loads of time reworking most of it before it launched. All the value was delayed because they decided they didn't like their own work. And obviously the leadership team were pissed off about that, and now there's very little trust left.
There should be a good conversation about delivery of work between teams and stakeholders or no one will be happy, but if that isn't happening the stakeholders will always win.
You can get a few feet closer to the moon by building a treehouse, but you still can't turn it into a spaceship.
In a world where people (stakeholders, Product, and dev teams alike) want the prototype to be the full set of MVP features, this is not true.
IMO it is a bit arrogant to assume it is more important to engineer a better version of a thing rather than make money quicker and cut corners. In essence it is better to have a problem which is about how to scale a new product because it got traction rather than solve a problem how to sell more copies of already scalable thing.
Rewrites require an existential-level threat to pursue and should never be taken lightly. They must solve a real, verifiable need, backed by real-world data. Rewrites for rewrites' sake, or for some lofty or nebulous goal of "better" or "more maintainable" code, are doomed to fail and a waste of resources.
I've seen the worst of it, from your average monoliths with no separation of concerns to 1000s of lines of self-modifying assembly in dead architectures with no code comments containing critical business logic, etc.
The main rule is to not to bite off more than you can chew, which if I'm being honest you really only learn from fucking up or watching others fuck it up.
Hackathon and overnight oncall fixes ABSOLUTELY should be rewritten or production-hardened, but they very often are not.
That's not to say that my first pass that I show people is ready to go into production, but I build the PoC from the beginning with the idea that it _is_ going into production and make sure I have a plan to get to production with it while I am working on it.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
A less experienced dev suggested using "AI magic" to replace a URL validator. I protested, suggesting a cached fuzzy match solution (prepopulated by AI)... and no one cared. Now the AI model has been suddenly turned down, and our system is broken. We're going to have re-validate the whole system.
A younger developer who got promoted over me tried to write a doc on possible ways to fix it. He said "hey Dan, can you help me with this?" He got promoted over me because the way to get ahead is to write docs and have meetings, not do things sensibly. Now he's trying to use my work to demonstrate his leadership.
No one cares. The more I offer better solutions, the more it's a threat to less experienced developers. Things mostly work so my manager doesn't care. There's probably better ways for me to have handled things, but it's so exhausting fighting the nonsense and I just want to write good code.
Looking deeper into it: these people don't understand the underlying foundations anymore. Just keep building fast, without building proper mental models (that would take time).
Our work is largely very difficult to understand to outsiders, we need to write docs and have meetings to show what we have done. It's part of the job, and yes, if you don't do that, it doesn't matter how fantastic the software is that you wrote (sadly).
Companies have outlandish hiring practices. They want juniors who already know everything. That's why admitting that you don't know something is seen as showing weakness to the company in the eyes of a junior. Also, not knowing things will actively keep you from getting promoted.
I'm sure it's not like that everywhere but it's juniors playing the corpo game.
- Juniors are discouraged to ask for mentorship because they are under pressure to appear competent
- Juniors have internalized from bad experiences that seniors are not to be disturbed
- Juniors grew up in a world where nobody modeled mentorship as a possibility for them; a CS major probably learned async, online, parasocially, without much 1:1 face-to-face interaction
- Juniors don't know what they don't know just yet-- and it doesn't always work well for someone to try and teach them explicitly-- but once they figure this out they'll be more interested in reaching out
seriously. it kills me to have so much knowledge and expertise that few people appear to care about, if not downright hate me for wanting to pass it on to others, as it appears institutional knowledge does not have any value these days
Whereas juniors are eager to chat, have lunch with you , and share what they’re working on, the seniors are guarded and solitary.
Maybe that’s just my workplace though!
And yes, the office is important.
Orgs get what they measure for. If your team values that sort of interactivity and support, it will ... observe it, measure it, and hire for that sort of person. I've seen groups evolve towards that, and they've been great, but it doesn't seem to be a default - most groups/orgs have to work towards it and keep working at it.
That said, I completely agree. I learned most of what I know from being in the same room with senior developers and asking questions. Something that just isn't happening these days.
Of course, he turned in his notice shortly after I arrived, because he had found his successor. So, that didn't work out so well for me.
I also believe that some of seniors experience is flesh-level resilience. I'm no smarter than when I joined the industry, I just got used to being in the trenches, how to handle my own psychology, how all the easy-looking things are not and how the horrible ones aren't either.. I could explain this in detail to any junior, but until they're on the minefield it won't mean much.
Honestly I have the feeling that this is often insecurity. It's easy to feel uncomfortable if you think you don't follow along.
Another issue is that juniors usually experience culture shock on their first jobs. So they more or less isolate and do things the way they learned them.
I've been a mentor off and on for the last few decades, and I've been really lucky to have some strong mentees. Some I've followed for a better part of a decade and are crushing it out there. All I can really say is that they're out there, sorry I don't have any more helpful to say around how to find them etc. I'll mull on that for a bit..
To me, young people just don't seem to know, or want to know, that information and knowledge can be gained from a person. It's the arrogance of youth x100
They have a supercomputer in their pocket/on their desk, and an AI that knows 'everything'. I can't imagine what it's like being a teacher right now.
How's your AI going to explain the office politics? The CTO's opinion on things? Talk about recent outages and learnings (details of which are not often on blogs)?
They think all they need is knowledge and facts and none of history, politics, communication etc
I think a lot of it is that an AI or Google search won't challenge them, push them, disagree with them - and that's comforting to them, and more desirable than the learning that could happen.
It's just basic game theory, and you see it everywhere. However, it's so annoying in the workplace when your two options seem to come down to try to dominate or be dominated. Especially if you care about quality code and don't care for meetings.
As far as I'm concerned, I think I have to make peace with the fact that if I don't play the game, I am going to be managed by people who don't know what they're doing. But neither option seems particularly good. Should I try to bury my ego and influence from below? Should I work harder and try to climb the corporate ladder? I'm still not sure.
I kind of get it, as you have expressed that promotion is not your goal. However, organizational influence comes through promotion at your org, and only those with influence at your org can change that.
What do you think would be a better system, that decoupled promotions from influence & enabled you to provide your experienced opinion without getting into management?
It is a problem as old as human civilization that the old overlook that society itself changes and instead lament the willfulness of the young in abandoning the old ways.
It isn't like young people grew up surrounded by examples of mentorship and arrogantly chose otherwise. In the internet age 1-on-1 face-to-face instruction is rare. I feel really fortunate that I caught the tail end of it.
I think younger people have maybe thrown the baby out with the bathwater, and you need some discernment about whose advice you can value and trust. But I've just been in many situations in my life where I've asked for advice and it's just been total shit.
"Wisdom of the elders" is overrated when society changes so rapidly, and not all the adults you know are the insightful village shaman.
I recall asking my grandfather what it was like to live through the JFK assassination and just receiving something to the effect of "oh yeah, that was crazy and bad, I remember seeing it on the news." Follow-up questions produced no further insight. So you come to the conclusion: why bother with that when you can just read a book about the topic?
On the internet you can learn from and sometimes interact with the best of the best, so the bar for what constitutes an "expert" is raised much higher.
I still vividly remember reading a z80 instruction set manual on a rainy day during summer vacation by a lake as a kid (maybe 14?)--writing my own assembly by hand in the margins for fun. TBH I probably still have that exact manual in storage somewhere. Had a green stripe down the front edge/binding iirc.
Back then I easily met folks like myself out there on the net, including many kids younger and smarter than me. It was awesome.
I do hope that some form of that 'net lives on in spirit somehow, given that the Internet I knew has largely fallen to corporate interests.
Now that I have my own kids, it's been painful to watch them have such an utterly different experience than I did.
Their Internet is based entirely on consumption and dark patterns designed to capture their attention, while providing nothing (to them) in return besides a dopamine addiction and body dysmorphia.
It's simply the case that the supply of "experts" wanting to share "expertise" eclipses the demand by several orders of magnitude.
I think there's a business somewhere, where you get paid to listen to "experts" and they get to feel better about themselves. It's a win-win.
So if people don't perceive you as an "expert" and don't come to you for answers, you simply do not register as one. Or they have a rather high bar that requires observable, undeniable artifacts (and I don't mean credentials, I mean software), and competition is rather fierce: there's simply an overproduction of people who think they are "experts", so you have to show unmistakable signs of being one to register.
"It takes two to tango" i.e. junior developers must first put in some effort and then proactively seek out seniors with expertise.
It may be a cliche, but it is a truism nevertheless: the juniors are simply not interested in putting in the necessary time/effort to gain knowledge systematically. They want everything to be quick, easy, and handed to them on a platter.
I think the main reason for this is that there is just too much out there to learn, and everything is being propagandized as the most important and most indispensable. This swamps the juniors, so they feel lost and try to keep up with everything, which is a fool's errand.
Juniors need to keep the following in mind:
1) Change your learning mindset: browse a lot, read a subset, and study an even smaller subset.
2) Always focus on the essentials and not on the frills. What counts as essential is determined by your specific goals/needs.
3) Be okay with not knowing everything. Do not base your self-worth on others' evaluation of you.
4) Do not compete with others. Do the best you can and always improve on yesterday's self. As the adage goes, "drops of water falling, if they fall continuously, can bore through iron and stone".
5) Be confident in your own intelligence. As Sherlock Holmes said, "what one man can invent another can discover". What might seem impenetrable in the beginning will become clearer and easier when studied regularly.
6) Everything depends on self-effort modulated by timing, context, means employed, and finally random chance (i.e. lady luck). Manage the last by factoring its payoffs into your self-effort itself (i.e. hedging). Weighing these five parameters before starting anything greatly improves your odds of success.
7) You can always short-circuit your studies and gain knowledge quickly by asking seniors with expertise to teach you. Your attitude and way of approach are very important here, i.e. you must be sincere and committed.
It feels like engineers are collectively feeling the pain now that product has decided that engagement of mental faculties is no longer necessary on their behalf; just build it and figure out the user persona and utility later...if ever. What used to be a process of taking the time to understand the domain, the user, and how the product fits into some process has been tossed out the window; just ship whatever we think some imaginary user wants and experiment until we succeed.
It creates the exact problem that OP talks about: every random feature that gets vibe-coded becomes a source of instability and risk; something that can then only be maintained via more vibe coding because no one has a working mental model of the thing.
There exist people whose jobs depend entirely on rolling out new features, or apps of some sort, and having them show up in some form of company metric. If the senior developer says it's a bad idea, those people won't listen, or won't care. Their job is on the line.
The problem I see is that management plays games, not talking about how much money is available, what the real timelines are, etc., because they fear the people contributing will leave before the critical moment. So people keep making stupid decisions in that context, and then you all get to find a new job.
> “Sure, I’ll have the Speed version ready in 3 days. Then the Scale version in about 6 weeks.”
> They get what they want, speed and momentum. You get what you want, observation and design.
Except that 6 weeks is now blocking the next thing and you'll be pushed to drop it. So this doesn't really solve anything.
I was kind of hoping at the end they'd suggest getting the non-developers involved more so they can experience the pain points they're creating. Not entirely sure how that would work though.
There are other properties, such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is what I consider one of the differentiators between a senior and a staff+ developer.
“Complexity”, understood as what’s going to make development on this system fly easy and fast for the next 10 man-years, de facto means taking side steps where naive approaches would charge straight ahead.
Tortoise and the Hare… the urge to hurry up and burn hard the first two weeks (low hanging fruit, visible wins, MVP!), resulting in ever decreasing momentum due to immature design and in-dev maintenance needs is befuddling to me. So much “faster” for weeks, and it just meant the schedule slipped 6 months.
I have also seen projects go badly because the eng was trying to be perfect upfront. Whereas quickly getting to an MVP and then iterating tends to go better.
Well said. In Kent Beck’s Tidy First, he explores the slow process that can be summarized by this excerpt from his Substack [0]:
“Valuable” lives on 2 axes:
While there might be a component of time to get features out, it’s rarely urgent enough to forget about being flexible and having a somewhat constant velocity.
[0]: https://tidyfirst.substack.com/p/genie-tarpit
And, the Cynefine Framework defines “complexity” a bit differently than the intuitive way it’s often used.
The simple domain is a single dimension. The complicated domain is a system of factors. I think when most people say “complex”, they are really talking about what Cynefine labels as “complicated”.
The Cynefine complex domain is not so easily solved or reduced. It has emergent behaviors. The act of measuring tends to perturb the system. No single solution will ever solve something in the Cynefine complex domain, because the complex system will shift behavior, making solutions that worked before start working against it.
Examples are ecosystems and economies. Software systems tend to be complicated, not complex, until you start getting into distributed systems.
One of the key insights of Cynefine is understanding that each of the domains has its own way of solving things and that often times, people use solutions and methods from one domain to solve problems characterized by a different domain.
You don’t solve problems in the complicated domain with methods from the simple domain. And you don’t solve problems in the complex domain with methods that work for complicated domains.
The use of “complexity” in the systems-theory sense, as compared to “complicated”, is often misunderstood.
I also agree that it’s a really good framework for evaluating problems and then making decisions on potential solutions because each has its own set of approaches.
Small nitpick: it’s “Cynefin”, not “Cynefine”. The word is Welsh (Cymraeg), roughly pronounced ke-ne-fin.
https://en.wikipedia.org/wiki/Cynefin_framework
> "Software systems tend to be complicated, not complex, until you start getting into distributed systems."
these days so much software is "distributed systems".
For example, at a certain level of scale, Kubernetes starts having emergent behavior.
On the other hand, it doesn’t take much to produce a complex system. The Boids simulation is a complex adaptive system in the form of a flock, yet each member of the flock concurrently follows only three basic rules.
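The three Boids rules mentioned above can be sketched in a few lines (a toy illustration; the function name, weights, and dict-based boid representation are mine, not from Reynolds' original):

```python
# Minimal sketch of the three Boids rules: separation, alignment, cohesion.
# Each boid looks only at its local neighbors; no global coordinator exists.

def boid_step(boid, neighbors, sep_w=1.5, ali_w=1.0, coh_w=1.0):
    """Return a velocity adjustment for one boid from the three local rules."""
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Cohesion: steer toward the average position of neighbors.
    cx = sum(b["pos"][0] for b in neighbors) / n - boid["pos"][0]
    cy = sum(b["pos"][1] for b in neighbors) / n - boid["pos"][1]
    # Alignment: steer toward the average heading of neighbors.
    ax = sum(b["vel"][0] for b in neighbors) / n - boid["vel"][0]
    ay = sum(b["vel"][1] for b in neighbors) / n - boid["vel"][1]
    # Separation: steer away from nearby neighbors.
    sx = sum(boid["pos"][0] - b["pos"][0] for b in neighbors)
    sy = sum(boid["pos"][1] - b["pos"][1] for b in neighbors)
    return (sep_w * sx + ali_w * ax + coh_w * cx,
            sep_w * sy + ali_w * ay + coh_w * cy)
```

Each boid follows only these local rules, yet applying the update concurrently across the whole flock produces emergent flocking, which is what makes it a complex system rather than merely a complicated one.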
I think complexity is a byword for 'unintentionally complicated' here.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade offs though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for awhile, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Arguments like:
> We should do Z because it would provide future extensibility.
> Z could eventually enable some novel platform capabilities.
> Z is easier to unit test.
Are much less likely to succeed in the business contexts that I have experienced so far.
That can work too, e.g. when demonstrating the pain a customer will experience when something complex is poorly designed (like some b2b workflow), but it's less visceral than telling your internal stakeholders all the extra work they'll have to do if it's rushed. Even the best of your peers are a bit selfish. The business side has a lot of incentives around quick turnarounds so it's easy to overlook the downside.
Imagine such a scenario. You're in healthcare and working on a feature that will add new data model for some kind of clinical information.
You could say:
> This will allow for us to deploy the feature in only X days supporting Y use case with Client W who has been complaining about this shit for Q months now.
Yeah that very well may prevent W from churning, though hopefully you think about how it will affect other clients too.
Or, you could say
"If we get this data model wrong, and the value set is ambiguous, you (product/sales/cs) will have to reach out to every single customer and clarify what they meant by x/y/z if we wish to migrate it with any degree of accuracy in the future."
That's drawn from experience but I'm sure there are a lot of parallels to that in other industries for any kind of data. Migrating data is a pain in the ass for everyone, but often it can be the people pushing for a quick solution that suffer the most when that goes wrong.
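The failure mode described above (an ambiguous value set that later forces per-customer guessing) can be sketched; the field name and allowed values below are hypothetical, not from any real clinical standard:

```python
# Hypothetical clinical field: constrain the value set up front instead of
# accepting free text, so a future migration never has to guess what a
# customer meant by an unmapped value.
ALLOWED_SMOKING_STATUS = {"never", "former", "current", "unknown"}

def validate_smoking_status(value: str) -> str:
    """Normalize the input and validate it against the agreed value set."""
    v = value.strip().lower()
    if v not in ALLOWED_SMOKING_STATUS:
        raise ValueError(f"unmapped smoking status: {value!r}")
    return v
```

Rejecting an ambiguous value at write time is cheap; mapping it correctly for every customer years later is the reach-out-to-everyone scenario the comment warns about.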
This kind is stuff is why commission structures should consider churn / residuals. Bad incentives make for hastily made decisions.
Building trust is yet another quality of a good senior. By that I don't mean being buddies with the CEO, but earning trust from everyone by making good decisions and arguments and delivering as promised. Even giving a junior a warning and letting him fall flat is a good trust-building exercise.
I think these people also need to learn that, in the eyes of the customer, a sandwich with a candle is in no way comparable to a birthday cake.
AI is actually quite awful for prototyping, because it makes it far too easy to add random crap to your "prototype" without any specific intention. This quickly transforms the prototyping process from something that's high-level and geared towards building the mental model of the real system into something akin to copy-editing a random piece of software without any coherent mental model involved. Moreover, prompting allows you to gloss over some essential complexity of the task without getting any notion of the scope of the effort of actually doing it. In other words, people end up failing to make necessary decisions and simultaneously get bogged down with unnecessary ones.
In short, fast feedback loops are only useful if there is actual feedback involved.
What’s missing from a lot of the “AI replaces developers” discussion is that generating code is only a tiny part of operating production software. The hard part starts after deployment: debugging, understanding cascading failures, maintaining consistency, and safely evolving systems over time.
Feels like the next wave of tooling won’t just be “AI that writes code faster”, but AI that helps absorb and reduce operational complexity after the code exists.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
If anything, quite often it's introducing more problems, because we know we'll run into them and they need to be addressed.
AI is sometimes quite lazy and refuses to solve the hard problems (sometimes making funny excuses like it would take weeks) until you make it explicit that it's important they are dealt with.
That said, I think everyone will agree both extremes are failure modes - moving too fast and not having things work right, or building things too correctly and never building something people want.
IMO the two stereotypes in the article generally hold true, and the job of each in a healthy company is to present the trade-offs, and collect enough data to experimentally validate. And when you disagree, let the decision maker (CEO) decide, but disagree and commit
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
You see the problem immediately. Sales/marketing didn't do their job sussing out what X is and wasted dev time with Ys. And worst of all: write once, support forever. Each one-off Y has to be maintained for the special-snowflake customer that uses it. None of the Ys actually works well for all the customers with problem X, so you end up drowning in "technical debt" spent to create them all.
If your marketing department leads the company, I've discovered the best option is to just quit. Go find a job at an engineering company.
Non-developers have no clue WHAT they want, but they do know WHY they want it. The why is much more important to know, because the requestor has no clue how software works and imagines bad solutions.
The product managers proposing things have their reputation tied up in them so a major feature is a chance at fame for them.
It's as if everyone gets "a turn" at using the development department once in a while and they want to make the utmost of it, knowing that the instant their feature is "finished" the spotlight will be gone from them for months.
This is a valid why for a feature though. Clients want it.
Now, the concern here is do the clients want it, or does sales/product THINK the client wants it?
There's ways to navigate it.
People don't want to be judged in the introduction of an article, based on how they like to approach their literal dayjob. It's a weird jab.
It’s not difficult for seasoned engineers with deep technical backgrounds to whiteboard a distributed system in twenty minutes. It takes hundreds of customer discussions, invalid hypotheses, and years of experience building judgment about whether this is the right solution at the right time.
The engineers who compound quickly have usually built their skills in both areas concurrently. Communication of the latter is more challenging due to the judgment-based foundation beneath it.
> What if we had one system just for speed?
Like a beta? It would take incredible discipline from the business and customers not to consider that production software and demand 99.99% uptime and a bug-free product.
When I started around 20 years ago, my junior dev experience was pretty harsh: I was taught, not always in a correct or respectful manner, to do this and not do that. Overall though, it was absolutely useful and formative. Senior engineers were rarely abusive; they communicated real issues, better or worse, and it was on me to figure out why and how to work the right way. Also, we were raised with a pretty receptive attitude toward the "old" technology, from Tcl and Smalltalk to Ada, Perl, etc. It was admired classics rather than just old shit.
Surprisingly, this didn't translate too well when I found myself in a mentoring position. Starting from maybe 2015, the situation changed. The newer generation of devs felt much more entitled to social games, higher salaries, and their own opinions than drawn to authentic engineering interest, and therefore to my experience.
No amount of structured communication would change that; even the cold pressure of production failures and very specific negative feedback from management normally doesn't work. They're also more lenient about prod screw-ups, and often use the "everyone can make a mistake" excuse to excuse even more mistakes. The thing is, most of them don't want to listen, for any reason.
Like many of my peers, I learned humility and accepted that as is, only using my advantage in expertise when it comes directly to my territory of responsibility, and to avoid the hassle imposed by my eager younger teammates. For example, I usually parse prod logs and settings with the command line while the younger guys are trying to push through Loki/Grafana query limitations.
I'm fine and safe, and my job is no less secure, I guess, because someone has to fix bugs etc. The companies, less so; but as long as they don't care, why would I?
It will be interesting to see this generation wiped out by the next one. I guess they won't be in very good shape, because the foundation they built upon (namely quickly changing libraries and language supersets like React/TypeScript/some JVM flavour/and I hope Kafka) will be replaced by the next tech fashion.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
The company is offering potential new services to current and potential users in the market and getting feedback on how valuable those new services might be.
I guess the author has never worked on a dog shit system with no tests at all and constant downtime.
I have worked with “complexity averse” engineers who would rather fix the edges over and over again, than roll up their sleeves and just get the job done.
I just don’t believe that using new tools is at odds with avoiding complexity.
Sometimes you have to take it to the chin, and get to use the new shiny thing along the way to move much faster.
That is to say, the rolling up of sleeves usually involves first adding tests for existing behavior and then doing more or less what’s described. The tools involved are incidental to this; “catch as catch can”.
The safest answer an engineer can give is "no".
FWIW though the idea about a "speed" product and a "stability" product isn't new. We used to call it "prototyping". I don't know when/how that disappeared from the collective consciousness. "have a space where we can build things fast with horrible practices" isn't some AI era innovation, it's what smart companies have done for decades.
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
Every codebase includes parts that are more experimental, and parts that are more core. My sense is that AI can help on both of these fronts (I.e building rapid prototypes on the fringes and hardening the core with better test coverage).
This is just an assumption and the whole article falls flat if this turns out to be wrong. In my limited (as everyone else's) experience, working with agentic AI needs good documentation, good specification (spec driven, you know it's all the hype nowadays). Those alone lead to much improvement. Now take into account that probably your senior dev also has more time to think about the big picture, to improve all those little things that were a nuisance in the past but now are a mere "Claude, fix that" in a worktree away.. I would not bet on the assumption of this article.
The message that hits for me is that of AI being a destabilizer while simultaneously being an accelerator. The Speed/Scale suggestion won't address this. A codebase no one understands, growing at machine speed won't go away just because you drew a box around it. The fix is likely more mundane stuff like process and role shifts, smaller PRs, tests, tooling, ownership principles.
The thing breaks, the salesperson says "can you check this out?" then disappears and we're back to where we started.
I don't even find this very new: many companies I've been at have tried to spin-off a "fast" team to sell stuff.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; We can now be slow with ai agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
I am really skeptical of arguments based around "I can do things the model can't" because that space of things is not very large and is getting smaller every day.
The opportunity to not merely cling on to what we have another year but to grow is to say "together, the model can manage so much more complexity than before that we can do things that were not previously possible."
We haven't identified too many of those things yet, but I am certain they are coming.
Maps to what we believe on our team - functional vs non-functional. AI ships functional features fast but developers are more important than ever in making sure the non-functional aspects are taken care of
But senior devs are also expected to have a compounding effect even pre-AI. Writing a single doc, refactoring legacy code to make it extensible, building security frameworks specific to the project and many more. All of these would compound the dev team.
I think the same will happen with agents working on a org specific paved path set by senior devs.
I have personally noticed a lot how multiple people can work on the same problem, but the more senior developers get way more mileage out of AI compared to those who are early in their careers.
Another difference I've noticed is how many agents one can keep running without losing awareness.
It generally just raised the bar on what management will expect from developers, which will result in a shrinking workforce. The only ones that will benefit are AI companies and upper management, since fewer employees means less management, so lower management will get screwed too.
Jevons paradox is already rearing its head, I've seen data suggesting open roles in tech are at their highest since the post-pandemic slump [1]. If you're a senior leader at a company and your engineers are now capable of multiple-times more productivity, is the logical choice to fire half, or set way more ambitious goals? One assumes engineers are hired because their outputs are worth more than their cost. If outputs, at least for those capable of wielding new tools, are higher, so is the value of that employee to you.
The universal thing I'm hearing from friends at small-mid-size tech companies, and experiencing myself, is that there is way more work and demand for it from senior leaders than they're capable of with their current teams.
1: https://www.ciodive.com/news/tech-job-postings-hit-3-year-hi...
I love this sentence.
I was given a chance to redesign it, and when I failed to add the added complexity, I was let go.
To this day I reckon the higher-ups are still having the same age-old problems and excuses from their underlings regarding a system with an utterly useless design. The guy in charge, rarely in the office, calmly explains it's a fantastic implementation, and that the new coders we are getting just can't work with it / operate it well because they suck.
I am not bitter, if anything it just made me terrified of being C-suite of any large company, knowing it would be almost impossible to understand why your company is failing.
I saw this yesterday
https://trinkle23897.github.io/learning-beyond-gradients/
They are very remotely related yet somehow very close.
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
Honest question does high velocity / first mover ever really pay off these days?
I don't feel like having the first AI slop to the market has actually paid off for anyone? Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of work proving the idea works, and everyone else swoops in with better product or at least at a cheaper rate.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily-generated AI features are causing customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
There’s a speed limit, because the faster you go the less room for error you have. It’s the same as being heavily leveraged with debt. If you have a cash investment and it drops by 50%, you can just wait. If you’re leveraged 100-to-1, a 1% drop forces liquidation and wipes you out.
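The leverage arithmetic here is easy to make concrete (a toy model: it ignores fees and maintenance margins, and the function name is mine):

```python
def wipeout_return(leverage: float) -> float:
    """Price return r at which equity hits zero.

    With leverage L, equity = margin * (1 + L * r), so equity reaches
    zero exactly when r = -1 / L.
    """
    return -1.0 / leverage

# Unleveraged cash: only a total (-100%) loss wipes you out, so you can wait.
assert wipeout_return(1) == -1.0
# Leveraged 100-to-1: a mere 1% drop erases all equity.
assert wipeout_return(100) == -0.01
```

The same formula is why the room for error shrinks linearly as leverage (or delivery speed) grows.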
I think the framing started in the right path and then took a slightly wrong turn.
Both loops presented benefit from being tighter, faster. One to take a system to a “stable” (maintainable) setpoint quickly. The other to handle uncertainty.
And the additional insight about splitting the systems to better adapt to AI… we’ve described spikes for years, well before AI went mainstream.
I feel like this is shooting from the hip from a single point of view from some semi-large corpo.
Fine, then, I'll keep the experience to myself.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
There's a place for prototyping and experimental features but now agile has cultivated extreme learned helplessness and everything is an A/B test because there's no longer any ability to judge whether something is good or bad based on a holistic vision.
No-one says this.
And push an insurmountable pile of technical debt onto the successor.
Well, yeah, I understand the idea and I'm all for it: the less code the better, the less changes the better.
However, in certain industries it is no longer the right approach for the job. In modern frontend development, if you don't update your codebase for a couple of months, it falls so far behind that pushing an upgrade becomes far more expensive than daily minor package updates would have been. Yeah, I hate this as much as you do, but this is the pace frontend is moving at, and if you don't follow, you will accrue technical debt.
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
> And this is how AI affects the two loops
There should be another image illustrating the amount of mitigation work done on the senior side, red-team/blue-team style.
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then the people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
Want me to communicate my expertise? Give me some time to actually do it.
Literally what people thought after Fan Hui (2-dan) was beaten. For humans, software requires ingenuity and creativity. Computers can cheat that; in fact computers ALWAYS cheat that to beat humans. Next-token prediction (NTP) as a method of cheating is slightly more general than, say, board evaluation, so it's less efficient for the same problem, but scaling laws show that with enough compute NTP can beat humans at chess (or most other arbitrary games, in real time).
Now, with so-called AI they will mostly slap together something kinda working in a few days and then maybe get hacked or double-invoice some customer from time to time... They will learn of those problems the hard way. Or maybe they won't, because it will be a mostly-working emailing system and nobody will care if it loses 2% of the emails because of some bug.
Nevertheless, the Stable or Scale version of the software will either never happen, or be seen as unnecessary, or only become a thing after a catastrophic failure.
Anyway, I don't think it will be like that: everybody cares about speed and money, and making money quickly without effort is the ultimate unicorn the entire world is after.
Those complaining developers just stand in the way.
Middle developer: PING constructs and sends ICMP packets to an address
Senior developer: what machine, what OS?
Junior manager: Don't care, ask a techie if you need to do something technical
Middle manager: Ask <techies name> about it, I know he has great experience with it
Senior manager: PING is used to check if a host is reachable by a network
Senior developers fail to communicate their expertise, because that expertise is developed and formed by asking more questions than answers, and managers fail to understand the capabilities of "their techies", because managers see question-asking techies as counter-productive, and attempt to route around them. Managers only want answers, developers know the value of asking deep questions.
Thus, AI.
(BTW, PING is a command that produces a distinct sound on the Oric-1/Atmos computers, and it is thus an Onomatopoeia.. I know this, because I am a Senior Oric-1/Atmos Developer who knows what lies at #FA9F, how it works, what the 14 bytes are for, and so on.. because I once asked the question, "how does PING go 'poooinnng' but ZAP go 'zap'?")
AI: <asks billions of questions in a second>PING is ..
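The middle developer's answer is literally true, and small enough to sketch: an ICMP echo request is an 8-byte header plus payload, with an RFC 1071 internet checksum. A minimal Python sketch that only builds the packet (actually sending it needs a raw socket and elevated privileges, so that part is omitted):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request (type 8, code 0) per RFC 792."""
    # Checksum field is zero while computing the checksum over the message.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A receiver verifies a packet by checksumming the whole message, checksum field included; a valid packet sums to zero.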
Bro & I would not get along well =)))) But the article IS good stuff.
But reduction is narrower than management which is narrower than organization.
Also, uncertainty is part of complexity. Being able to isolate what is deemed predictable, under clearly identified premises, is the best that can be hoped for on that matter. It means that one strategy can then be applied to protect the stable core, and another strategy can be tried on what is unknown (known and unknown unknowns).
complexity bad
say again:
complexity very bad
you say now:
complexity very, very bad
An AI agent using a web browser like a human. I used various stealth techniques to achieve this. I set it off on a research task and it saved me $30 on a purchase by finding the best price. It's Jeff Bezos' worst nightmare: visiting amazon.com and ignoring all the product placement ads.
It had multiple tabs open, did searches in multiple places, opened products and checked sites... it looked just like a human doing the same task.
This I can assure you would not have been possible without my expertise. I had to be very careful to remove all bot signals from the browser, including going to browserscan.net to check. Once done, most captchas were never shown to the agent. There is a NodeJS codebase involved that I wrote by hand.
I searched through the code of the browser automation framework I was using, looking for ways to make it look more human. I had AI help with this part, but had to confirm everything and pull the agent up when it suggested bad ideas.
Most of the work was architectural, including making sure my browser was easy for the agent to use.
I'm going to add 2captcha as a next step, to solve the few captchas that it still encounters (as I still do sometimes as a human).
I'm thinking of open sourcing it, but I'm not sure it's a good idea: if it became widespread, it might encourage the adoption of even more invasive anti-bot measures.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as having "expensive" seniors begging for a pay rise can cost the company an extra amount of $$$. So it is tempting to lay them off for a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers who wanted to complain about IT, cybersecurity, DevOps, and cloud architects getting in their way, claiming that if only they had administrator access they could get everything done themselves because they're experts in networking and everything else? Well, those developers are about to have the worst day ever, when every single person on the planet can generate code and will be "experts" in everything as well.
And society is beginning to suffer from it. AWS alone managed to slop itself into outages twice in a matter of a year [1] (and I bet that's just the stuff that escalates into mass-visible outages, not the "oh, can't start a new EC2 instance of a specific type for a few hours" kind), and a lot of companies were affected.
It's always the same game: by the time the consequences of the beancounters' actions come home to roost, they have long since departed with nice bonus packages, leaving the rest to dig out the mess.
[1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...
If only higher-ups would recognize that. Instead we see left and right mass layoffs, restructurings and clueless higher-ups who clearly drank not just a bottle of koolaid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
I agree that the punchy staccato and the rhetorical questions smell AI-ish, but the way this person uses them, there’s, like, a payload each time. Versus LLM-speak, where the assertions are at best banal and more frequently just confusing.
There will be different shades of usage and maybe we draw a line somewhere in there.
So even if AI was not used to write an article, it could "smell" like AI to someone who consumes less of it.
Are we just trying to say, "use AI for prototyping and customer demos that don't need to be mature, use senior devs to develop and maintain the real products"? You could just say that then...? Which I also disagree with as a model for how AI should be used: AI is valid to include as a tool across all forms of development; it just should never be put in charge of production-level software (e.g. no vibe coding of mission-critical components).