(comments)

Original link: https://news.ycombinator.com/item?id=38566235

However, the article's lesson seems to be that clean-code practices should not be followed blindly; they should be evaluated case by case, weighing factors such as likely long-term change and ease of modification. In this particular case, the clean-code implementation introduced unnecessary complexity and made the code harder to modify as new requirements arrived. The author therefore suggests a middle ground between excessive duplication and avoiding duplication entirely. Ultimately, the key point is that clean code is not a universal solution, and every situation must be considered carefully.

Related articles

Original
Goodbye, clean code (2020) (overreacted.io)
371 points by kiyanwang 1 day ago | 300 comments

Code duplication is sometimes a good thing, but it doesn't prove that clean code is a bad thing.

I think you overcommitted a bit on the refactor; if you had replaced those "10 repetitive lines of math" with a function, it would have been cleaner.

Let's be crystal clear about it: your teammate did a terrible job not creating a function for those 10 repetitive lines. I would reject this PR, but never rewrite it.
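
For what it's worth, the kind of extraction being argued for might look something like this (a hypothetical TypeScript sketch; the blog post's actual code isn't reproduced here, so the names and the math are invented):

    // Hypothetical sketch: each shape keeps its own handler, while the
    // repeated lines of math collapse into one shared helper function.
    type Point = { x: number; y: number };

    // The repeated math, written once.
    function dragDelta(initial: Point, current: Point): Point {
      return { x: current.x - initial.x, y: current.y - initial.y };
    }

    // Per-shape handlers stay separate and remain free to diverge later.
    function resizeRectangle(initial: Point, current: Point): Point {
      const d = dragDelta(initial, current);
      return d; // rectangle-specific adjustments would go here
    }

    function resizeOval(initial: Point, current: Point): Point {
      const d = dragDelta(initial, current);
      return d; // oval-specific adjustments would go here
    }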

Rewriting a PR is a quick way to insult someone, because you're not open to debate, and many times you don't have the full picture of why the code was written this way. PRs have to be reviewed, and rejected when they don't meet coding standards. The last thing I want is people committing code and saying "clean code is not important"; this mindset only leads to a codebase that no one wants to work in within a matter of months.

Communicating better and having some tact is the lesson you should take from your experience; clean code has nothing to do with it.



> Rewriting a PR is a quick way to insult someone ...

Although I agree with this, there is also another, more subtle thing going on. Rewriting code that works and that someone else is maintaining is a waste of the rewriter's time and is unprofessional. It also denies the original person an opportunity to learn, because they don't see how lower-quality code wastes time in practice (the time wasted in the refactor doesn't count, because Abramov did it on his own initiative).

The clean code approach is better, but the issue here isn't code. It is that he was wasting his time while not providing useful feedback to the unnamed coder. He was making terrible choices. He wasn't maximising business or team outcomes. The best outcome is the original coder raising their standards and him going off to work on something that actually needs work. He should have angled toward that. I.e., the correct option was to do a code review.



> Although I agree with this, there is also another, more subtle thing going on. Rewriting code that works and that someone else is maintaining is a waste of the rewriter's time and is unprofessional.

Rewriting existing, functioning code is not a waste of time in general. We don't know the full picture, so one cannot make such a generalized statement. I can write perfectly functioning code in the most terrible way, one that makes maintaining the code a herculean task. Very easily, even; making things worse is almost always easy. Rewriting that code might make maintenance much easier, even if the code was previously working. It might make onboarding easier, since others can better understand what is going on. It is not in general unprofessional. To judge that, you need a far more complete picture. It can even be unprofessional to leave horrible code as is instead of fixing it, leaving future people to plaster over it with more leaky abstractions.



But doing it without talking to the other person is the problem.

This made it sound like it was checked in, and this guy jumped in and re-wrote it the very same week without talking to the other person. That is probably the biggest problem here.

So: there was a project happening, one person did a section, and someone else jumped in and re-did it (double work) without talking about it (risking another, triple rewrite in the future).



Not specifically for this blog post, but this is more common than it should be, and it's a symptom of organizational failure most of the time.

Either the code was merged without review by the people who would be directly impacted by it, or two teams were touching the same code path almost simultaneously without talking to each other, or there was some other breakdown in communication.

Sometimes the person rewriting the code is simply cutting through the bullshit because they know the original committer simply sucks at their job and you can't do anything about it (yes, I've seen that before).

Ultimately it could just be the guy being a jerk, but more often than not, it's not as simple as it seems.



I disagree slightly; I think there’s an important thing here which was a little bit glossed over in the article.

The original code established a pattern and left space for where and how to add code to handle edge cases, when they arose, whereas the clean code solution was implemented as though it was the complete and final solution.

A first draft is often incomplete, and in an oversimplified first draft, it’s easy to pull out patterns and remove “duplication” which is a result of the simplification, not an inherent feature of the problem being solved.



Absolutely. The lesson is "goodbye to over-applying a rule without considering context-specific trade-offs", which is a lesson that takes time to learn.


> I think you overcommitted a bit on the refactor,

Source domain checks out



> Let's be crystal clear about it: your teammate did a terrible job not creating a function for those 10 repetitive lines.

I've found over a long time (20+ years) that this is usually a sentiment held by people who are focusing on the wrong things in code bases (and quite often aren't actually solving real problems but spend their time solving non-problems with the additional side effect of creating more for the future). The first implementation of this should most definitely not abstract away anything like those 10 lines (which is minuscule). It's trivial to take something that does exactly (and only) the thing and modify it later and it's pointless to abstract away something as small as 10 lines for a gain you haven't yet proven or tested.

"Clean Code" is absolutely not important and most rules/"principles" of the same character as those that make up Clean Code are absolutely not important either, but are things that people with nothing better to do hold on to in order to validate the hornets nests they accumulate in code bases over time. It leads to over-abstracted, hard-to-change code that runs badly and is much harder to understand, generally speaking.

The only thing you should use as a guiding principle, if anything, is to express your data transformations in a clear and easily changed way, since that's the only real thing a program actually does. If something doesn't have to do with improving the expression of data transformation it's very likely it's bullshit that's been stacked on top of what you're doing for no real reason.

Most of the SOLID principles have nothing to do with or in fact make it harder to see what data transformations take place and how, which is why they are largely useless for understanding what is actually going on in a program.



If you implement one single function, then it's easy to refactor later on.

If over 10 years, dozens of people add feature after feature without thinking carefully about the structure of the code, you get huge messes that are very hard to clean up. Then suddenly an old, unsupported dependency needs to be replaced (e.g. because of a vulnerability) and it takes weeks to remove it because there is zero separation of concerns anywhere in your codebase.

You say that overabstracted code is hard to change. I agree, but so is underabstracted code - or simply code with bad abstractions. I don't care particularly about SOLID, since I'm not too fond of OOP anymore, but encapsulation (which is a concept independent of paradigms) is definitely crucial when you want to make sure a code base is maintainable in the future.



> You say that overabstracted code is hard to change. I agree, but so is underabstracted code - or simply code with bad abstractions.

I am not against abstraction, but the attitude that the person's colleague should've *definitely* abstracted away this minuscule number of lines immediately, even in their first implementation, is not just abstraction; it's premature abstraction, and the kind of dogmatic thinking that leads to the problems I outlined. The blog post scenario is not unique; the correct solution is almost always to leave the code less abstracted and only abstract it when there are actual, proven use cases.



There are no definites in software development, but if a formula is repeated 10 times, you probably have a good name for it, and at that point it probably should be in a function.


Exactly: "premature abstraction and the kind of dogmatic thinking".

This thread is filled with nit-picking of this code, when the biggest problem was actually team communication and overall discussion about future direction. Many young programmers will spin their wheels on these types of things, but then sales/marketing might be like "dude, this has a 1-2 year life span at most". Or: this is being replaced by product-x next year.



> dude, this has a 1-2 year life span at most

If I had a nickel for every piece of "will be turned off in 6 months, tops" software that my workplace still maintains a decade later, I'd probably double my salary.



> If I had a nickel for every piece of "will be turned off in 6 months, tops" software that my workplace still maintains a decade later, I'd probably double my salary.

Definitely this. I've seen it, and I'd wager the vast majority of IT professionals have seen it, too.

At one particular company I spent a decade at, I was brought in specifically to help decommission a particular set of legacy databases. One particular database was still on life support because a particular trading desk wouldn't move off of it, and since they made a shit-ton of money for the firm, they got a free pass on it, while IT was constantly dinged every year for the legacy database still being alive. With competing mandates between a profit center and a cost center, the cost center always loses. Which is fine, but then don't blame the cost center for not being able to make its goals because the profit center is dragging its heels. (Which was _not_ done at this firm.)



Why didn't the IT folks step in and provide the labor for the conversion? This helps them accomplish their goals. Trading desks love free labor. This shows a novel way in which cost centers can attribute their costs to lines of business. Finally, it demonstrates leadership and initiative by the IT department and its heads, especially if they have to fight an uphill battle about IP and succeed in convincing the desk.


The code should be flexible. The first iteration wasn't; it was simplistic. This is the kind of code juniors in high school write, and it always leads to a mess later on. Now, if you are a beginner like you seem to be, then it's a good code. Sure.


The first iteration is plenty flexible where it needs to be. I've been programming since 2001 and honestly, the reason I'm holding this view and you're not is very likely the exact opposite situation of what you think: I think it's hard to find experienced and knowledgeable programmers who think the best time to abstract something is the first iteration of something.


I just want to point out that the following line could come across as condescending, even if not intended that way.

> Now, if you are a beginner like you seem to be, then it's a good code.

You don't know anything about the person you are replying to except what they posted, and assuming someone with a different view (which I happen to agree with from my experiences) automatically has less experience (instead of acknowledging that others may have good reasons for their views which one hasn't considered or run into) doesn't reflect well.



So much this. It's hard not to lose the forest for the trees (we are craftspeople, after all), but at the end of the day the overall structure of a program is so much more important than whether there's a bit of duplication here or there. Better to let the structure emerge and then reduce duplication than to try to guess the right structure up front. And yeah, I'm still not convinced SOLID is real (but I'm also not convinced classes are useful much of the time, for that matter).


Letting the structure emerge requires people thinking in depth about the underlying principles of what the code does or should do.

As for classes: they are merely a construct that many languages have come up with for organizing code, and in my opinion a very debatable one. Some newer languages don't even deal in classes at all (Rust for example) and with good reason. If one says we need classes for having objects: no, we don't. And objects are a concept for managing state over the lifetime of whatever the object represents, so that might be a worthy concept, but a class? I mostly find classes being used as a kind of module, not actually doing anything but grouping functionality that could simply be expressed by writing ... functions ... in a module, a construct for grouping that functionality.

I think what many people actually want is modularity, which, in contrast to classes, is a concept that truly seems to be well accepted; almost every language tries to offer it in some way or another. It is just that many people don't realize that this is what they are chasing when they write class after class in some mainstream language that possibly does not even provide a good module system.
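
To make the module point concrete, here is a minimal TypeScript sketch (illustrative only; all names are invented). Grouping plain functions in a module gives the organization people often reach for classes to get, with no class and no instance state:

    // geometry.ts: a module as the grouping construct (plain functions, no class)
    export function area(width: number, height: number): number {
      return width * height;
    }

    export function perimeter(width: number, height: number): number {
      return 2 * (width + height);
    }

    // elsewhere:
    //   import { area } from "./geometry";
    //   area(3, 4); // 12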



> Some newer languages don't even deal in classes at all (Rust for example)

When you first write a struct, and then below it write an impl block for that struct containing functions that can only be applied to that struct, it really looks like a class to me; it just has a syntax that differs from what we're used to in other languages. Why wouldn't you call that a class?



The idea is that it decouples state from behavior, while a class tries to group them together. Other people can implement traits for your struct later. (I think Clojure has something similar.) They do not need to subclass your struct, and in fact cannot. This design encourages, and to some degree forces, composition over inheritance.

I would not name it a class, because I don't want people to fall back into thinking: "Ah I know! Inheritance!".



Well, having classes doesn't mean that you will necessarily use inheritance. There are programmers (ab)using it a lot, but for me, as for many others, classes are primarily a way to organize code: they provide a convenient way of representing something and of specifying functions that only make sense in the context of that something, while ensuring that you don't accidentally reuse those functions for something else or lose track of which functions are meant for which component. They also make collaboration easier, as you can agree on classes' interfaces and then each person goes on to implement their own classes without having to worry about the implementation details of other classes. It is true that you usually also get inheritance with classes, but I'm unsure whether having it is a requirement for calling something a class. IIRC, from a theory perspective, classes are just an abstraction of some concepts, and the fact that a class's instance is called an object reflects this.


I think the person you're replying to tried to address the point that classes are primarily a way to organise code when other, possibly equally good or better, options exist, like modules. An F# module might (for example) look like this:

    module Hello =
        let say() = "hi" // returns the string "hi"

There are mechanisms to encapsulate implementation details (private functions), to have multiple modules for different "domains", and to specify public contracts.

A class seems to imply more than that: each class specifies how to create an object with a constructor (where an object is something with the class's methods, operating on some state owned by and accessible only to the object itself).
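
By way of contrast, a class in the sense described bundles a constructor with state that only the object itself can touch. A minimal TypeScript sketch (names invented):

    // A class couples construction, behavior, and privately owned state.
    class Greeter {
      private greeting: string; // reachable only through the object's methods

      constructor(greeting: string) {
        this.greeting = greeting;
      }

      say(): string {
        return this.greeting;
      }
    }

    // const g = new Greeter("hi");
    // g.say(); // "hi"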



Classes are about data abstraction and encapsulation, which have nothing to do with implementation inheritance. They're about providing an interface that preserves any required invariants and does not depend directly on how the data is represented. A "structure" that either preserves useful invariants or is intended to admit of multiple possible representations that nonetheless expose the same outward behavior is effectively a class, whether you call it one or not.


I think the discussion of what to call that is a bit pointless. For some people through their Java/C++/whatever-tinted glasses, these things will be classes. To others they might be called something else. You personally call them "classes". I personally do not. Rust the language does not. Lots of people behind the development of the language thought that structs and traits are better names.

I appreciate the programming language design behind them and hope that Rust will not devolve into an ecosystem where everyone thinks they must put everything into classes or similar, needlessly maintaining state therein and requiring users to mutate that state through accessors and whatnot, when a couple of functions (and I mean strictly functions, not procedures) would have done the job.

I never stated that I personally think classes necessarily mean inheritance. But guess who does: lots and lots of developers working with mainstream languages, where inheritance is frequently used in combination with what those languages call a "class". That is why I am saying that I don't want those people to fall back into their previous thinking and would not want to call these classes. It gives many, many people the wrong ideas.

What other people call it is their own choice. I am merely stating my own preference here and probably the preference of the language designers, whom I mostly deem to be quite a lot more competent in drawing the distinction than myself.



> requiring users to mutate that state through accessors

There are plenty of cases where this makes sense, such as when working with sub-word data (bitfields), which is common in the embedded domain and often found as part of efficient code more generally. In fact, it may be more rare to have actual structs where one definitely wants to provide actual access (e.g. via pointers/refs) to the underlying data, and thus cannot just rely on getters/setters.



> Letting the structure emerge requires people thinking in depth about the underlying principles of what the code does or should do.

Right, yes, but those principles are often still very much in flux in the early days of a feature. Once a feature is more mature, it’s easier to confidently say what the code should do, and so that becomes a good time to refactor. Early on in the development lifecycle I think it’s rarely a good idea to worry about code duplication, underabstraction, etc.

And yes I agree with you that classes are an organizational concept with parallels in functional languages. Modularity is very important, but as you say there’s no reason that modularity implies classes. Sometimes I find classes to be ergonomic, and when they are using them makes sense, but plenty of other times a struct will do, as long as there’s some type of module system to keep different things different.



I'm not on the train of applying "Clean" Code always and everywhere. However:

> I've found over a long time (20+ years) that this is usually a sentiment held by people who are focusing on the wrong things in code bases (and quite often aren't actually solving real problems but spend their time solving non-problems with the additional side effect of creating more for the future). The first implementation of this should most definitely not abstract away anything like those 10 lines (which is minuscule). It's trivial to take something that does exactly (and only) the thing and modify it later and it's pointless to abstract away something as small as 10 lines for a gain you haven't yet proven or tested.

I have to disagree here a little.

How are you going to "prove" lower maintenance cost? How are you going to prove that fewer bugs happened? How are you going to prove that ahead of making the change? How would you prove it even after making the change? You cannot go back in time and live the same period again with a different approach. So the kind of proof you are demanding is never going to be possible, and as such you are never going to have the refactoring. This argument could be had about any kind of refactoring, not only the one described in the blog post.

If one plasters the same code in many places, I have to doubt their basic understanding of computer programming. Have fun bugfixing N places instead of 1 once a bug is discovered; hopefully you won't forget any of them.

This is not to say that no duplication should ever exist. Let's not be extremists.

> "Clean Code" is absolutely not important and most rules/"principles" of the same character as those that make up Clean Code are absolutely not important either, but are things that people with nothing better to do hold on to in order to validate the hornets nests they accumulate in code bases over time. It leads to over-abstracted, hard-to-change code that runs badly and is much harder to understand, generally speaking.

That is a very broad over-generalization. Maybe some or even many people do as you say, but not everyone. There are indeed good ideas in Clean Code. We just need to not be dogmatic about them.



> How are you going to "prove" lower maintenance cost? How are you going to prove that fewer bugs happened? How are you going to prove that ahead of making the change? How would you prove it even after making the change? You cannot go back in time and live the same period again with a different approach. So the kind of proof you are demanding is never going to be possible, and as such you are never going to have the refactoring. This argument could be had about any kind of refactoring, not only the one described in the blog post.

The existence of a situation where the code has to be changed in several places will prove the connection between all of these pieces of code. This is best proven in actual practice without guesswork, which any premature abstraction is.

> If one plasters the same code in many places, I have to doubt their basic understanding of computer programming. Have fun bugfixing N places instead of 1 once a bug is discovered; hopefully you won't forget any of them.

The first time this happens the programmer will undoubtedly be much more well informed about the connections between the pieces of code and will have an actual case of them being related, which is why it's much better to delay any abstraction until that point. Future-proofing and playing oracle about the future needs of code is generally a dead end but the programmers least capable (i.e. junior programmers and/or Clean Code enthusiasts) of playing that game are also the most likely to do so.



> The first time this happens the programmer will undoubtedly be much more well informed about the connections between the pieces of code and will have an actual case of them being related, which is why it's much better to delay any abstraction until that point.

In a perfect world, maybe. And even then it would be better not to accumulate issues until they start hurting. But in the real world there are often people who never had these issues on their calendar and will not be willing to allow for sudden fix-it time.

> Future-proofing and playing oracle about the future needs of code is generally a dead end but the programmers least capable (i.e. junior programmers and/or Clean Code enthusiasts) of playing that game are also the most likely to do so.

I don't need to be an oracle to tell that 10 copies of the same code will incur the need to change 10 places instead of 1. I don't need to be an oracle to write code that, from the beginning, can accommodate some change in business needs.

I am fed up with people thinking that developers are like little babies who never learn anything from their past experiences. All in good measure, of course. If I tell someone that I see a problem for the maintainability of the code of some project, because it cannot easily be extended in ways that are likely to happen due to business needs, I expect them to take me seriously and not belittle my experience. Engineering is my discipline, and often not theirs.

I have built systems that anticipated changing requirements, and they changed just as I thought they would. How did I do it? By making parts of the code work in "additive" ways. I cannot share exact details, but there was a project about checking the values of variables and reporting whether the values were correct. The values were of various types, so they required various kinds of checks, for example string edit distance. In such a situation it is already foreseeable that at some point you might want to add checks that examine the values in different ways: for example numeric equivalence, and later on equivalence within some epsilon, some allowed deviation from the correct value. So I implemented it in a way that lets me register checks. That way one can add a check later and register it, without changing existing code. These things can be anticipated, and a good design will allow for change later easily. No need to be an oracle; you just need some experience.
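
A minimal sketch of that registration idea, in TypeScript (all names are invented; the commenter's actual project is not shown). New kinds of checks can be registered later without touching existing code:

    // Checks are registered rather than hard-coded, so the design is additive.
    type Check = (expected: string, actual: string) => boolean;

    const checks = new Map<string, Check>();

    function registerCheck(name: string, check: Check): void {
      checks.set(name, check);
    }

    registerCheck("exact", (e, a) => e === a);
    // A later requirement: numeric equivalence within some epsilon.
    registerCheck("approx", (e, a) => Math.abs(Number(e) - Number(a)) < 1e-6);

    function runCheck(name: string, expected: string, actual: string): boolean {
      const check = checks.get(name);
      if (check === undefined) throw new Error("unknown check: " + name);
      return check(expected, actual);
    }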



I'm not sure you and the OP are even in significant disagreement. When exactly did the OP say developers are like babies who don't learn from experience? From my perspective, the OP seems to be critiquing the culture of indoctrinating inexperienced and/or junior developers into the cult of clean code. It is the clean code camp that is more likely to dismiss experienced engineers as unenlightened dinosaurs.


> The first time this happens the programmer will undoubtedly be much more well informed about the connections between the pieces of code

The first time this happens, the programmer will know about one or two of those places, and will insert difficult-to-find bugs all over your software.



Indeed.

The first time this happens the programmer will spend hours looking for the first few places that need to be changed. Then he will spend days debugging, and hopefully will find the last few places.

Then, even though he is now informed about the connections, he will NOT refactor this code because he (a) has no time to refactor as he already spent a week on what seemed like a one-line fix, and (b) is now terrified to touch this code any more than absolutely necessary.

Then the cycle will repeat.



The advantage of using the abstraction, even if the abstraction is wrong, is that you can use tool automation to look up all the uses of the abstraction and replace them with a new one as needed, without having to memorize the codebase.

Hodgepodge copy-template code that has minor context-sensitive structural differences is relatively difficult to find all instances of in a codebase without tracing execution paths unless you were the original author and you have the uses memorized. In contrast most IDEs have a find all references and even grep can find all the common names of something in a codebase.



> It leads to over-abstracted, hard-to-change code that runs badly and is much harder to understand, generally speaking.

Abstractions, even bad abstractions, are far easier to deal with than unabstracted code. Abstractions are cheap and disposable. You're not married to them. They only seem hard to change if you're stuck on the mindset that to implement a change to a piece of code using an abstraction, that you have to change that abstraction, which is incorrect. If an abstraction no longer fits, you should more often than not stop using it in that part of the code rather than altering the abstraction to fit the new use case. They are also easier to understand by virtue of communicating their intent, unlike unabstracted code. It takes a lot of effort, on the other hand, to read through and understand a chunk of unabstracted code, and even more to change it or write a slightly differing variant of it for a new use case, and even more to extract any kind of abstraction from a bloated unabstracted mess. With this being the case, it always makes sense to eagerly abstract if you can, even if your abstractions are bad, because the result will still be more maintainable than unabstracted code.



I think the original blog post is badly worded. We can see from the proposed refactor that those 10 lines were in fact not identical; they were only similar. The formulas for resizing the top-left corner of a rectangle are different from the formulas for modifying the bottom-right of an oval. They look quite similar, but one will have a + instead of a minus here, and another one there, etc.

The complexity is built into the geometry itself in this case, and "abstracting" it away only moves the mess around; it fundamentally can't remove it. (Perhaps there is some clever mathematical abstraction that could, using some special number group, but unless you have that built in, implementing the special arithmetic on top of built-in ints will probably cost more than you gain.)
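
To illustrate the point (a hypothetical TypeScript sketch, not the post's actual math): the two handlers below have the same shape, but the signs differ per corner, and that variation is the geometry itself rather than incidental duplication:

    type Rect = { x: number; y: number; w: number; h: number };

    // Dragging the top-left handle moves the origin and shrinks the size...
    function resizeTopLeft(r: Rect, dx: number, dy: number): Rect {
      return { x: r.x + dx, y: r.y + dy, w: r.w - dx, h: r.h - dy };
    }

    // ...while dragging the bottom-right handle leaves the origin fixed and
    // grows the size. Same structure, different signs.
    function resizeBottomRight(r: Rect, dx: number, dy: number): Rect {
      return { x: r.x, y: r.y, w: r.w + dx, h: r.h + dy };
    }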



To be fair, he does say that he took that away as a lesson. He listed it first, so I'm charitably going to call it the "primary lesson".


Assuming this story is real, there are several smells:

* Colleague writes quite a bit of code, and merges it without anyone reviewing it or providing feedback.

* OP rewrites the code from a colleague without communicating, and again merges it without a review.

* The manager calls a meeting with OP and asks them to revert their changes.

The initial code might have been messy or not, and the refactor might have been a bad or good idea. Nevertheless I think OP is taking away the wrong lesson.



> Rewriting a PR is a quick way to insult someone

Wow. Just review the rewrite, and if it's good, merge it, resubmit it to a different reviewer, whatever your process is. The reviewer/re-writer is helping get work done, and being offended is counter-productive.



What the GP explained is commonly true when it comes to interacting with other humans, with only a few exceptions. You can complain about human nature being counter productive all you want, but refusing to adapt to this reality and foregoing fundamental soft skills is, ironically, even more counter productive.


>>many times you don't have the full picture of why the code was written this way.

Chesterton's Fence [0]

[0] https://fs.blog/chestertons-fence/



"Clean code" needs a re-brand. It seems like constantly, over my 25+ year career, I'm seeing never-ending push back against design patterns, abstractions, de-duplication etc.

There are always the same reasons:

- Abstractions make code more complex

- We don't have time to write clean code

- It's all just your opinions and preferences

The purpose of clean code is to make code SIMPLER, and easier to maintain as requirements change. The value of software is precisely its ability to change over time. Otherwise we could stick to fixed circuits which are FAR easier and cheaper to implement and maintain.

Therefore, if you did not achieve these goals with your refactor, and your boss can make a persuasive argument that your refactor made it HARDER to maintain as requirements change into the future ... then your refactored code wasn't "clean", now was it?

As for not discussing the refactor with the original developer, that's outside the scope of clean code. Has nothing to do with whether or not clean code is good or bad, or whether your changes were "cleaner" or not. That's a process and etiquette discussion. Just because you did a dick move and went cowboy doesn't say anything about whether or not clean code is valuable.



I think the pushback is against certain recipes that seem too absolutist. "Clean Code" is the title of a famous book that defines clean (among other things) as:

"No Duplication

Duplication is the primary enemy of a well-designed system. It represents additional work, additional risk, and additional unnecessary complexity."

So according to this definition, removing duplication is synonymous with simplification, which is simply incorrect.

Removing duplication is the introduction of a dependency. If this dependency is a good model of the problem then this deduplication is a good abstraction and may also be a simplification. Otherwise it's just compression in the guise of abstraction.

[Edit] Actually, this quote is a reference to Kent Beck’s Simple Design that appears in Clean Code.



Actually, when writing highly optimized code, cut-and-paste duplication is not unusual. We may also eschew methods in favor of static, FP-like functions (with big argument lists), in order to do things like keep an executable and its working space inside a lower-level cache.

But we can also end up optimizing code that doesn't need to be optimized. For example, we may spend a bunch of time refactoring a Swift GUI controller struct to save 0.025 seconds of UI delay, while avoiding doing the same for the C++ engine code, where it might save 30 seconds.

I find "hard and fast rules" to be problematic, but they are kind of necessary, when most of the staff is fairly inexperienced. Being able to scale the solution is something that really needs experience. Not just "experience," but the right kind of experience. If we have ten years' experience with hammers, then all our problems are nails, and we will hit them, just right.

I tend to write code to be maintained by myself. It is frequently far-from-simple code, and I sometimes have to spend time, head-scratching, to figure out what I was thinking, when I wrote the code, but good formatting and documentation[0] help, there. Most folks would probably call my code "over-engineered," which I have come to realize is a euphemism for "code I don't understand." I find that I usually am grateful for the design decisions that I made, early on, as they often afford comprehensive fixes, pivots, and extension, down the road.

[0] https://littlegreenviper.com/miscellany/leaving-a-legacy/



In my view, the path to mastery is to first learn the rules, follow them and then learn when to break them and then transcend all of that:

https://en.m.wikipedia.org/wiki/Shuhari

TBH, I don't know if I will ever reach the transcendence stage, but my two cents (after 20 years of professional programming) are that breaking the rules is sometimes desirable, but not when the person has not even reached the first stage. The example from OP, as explained in the post, seems a clear case of never reaching stage one: following the simple rule of no duplication.

The conclusion in the article is kind of good and bad: it seems the OP has reached stage two, but for all the wrong reasons, and there is no telling whether they now actually possess the knowledge or whether they just discarded one rule to follow another, both of which can be wrong or right depending on the situation.



That's not just a view or opinion, it's how learning works. Thanks for the link!

When starting to learn a domain, you lack deep understanding of it, so you cannot make sound decisions without rules.

But rules are always generalizations, they never fully encompass the complexity they hide.

Through experience, you should become able to see the underlying reason behind the rules you've been given. Once full understanding behind a rule is ingrained, the rule can be discarded and you can make your own decisions based on its underlying principles.

Then you become an expert through deep understanding of several aspects of a domain. You are able to craft solutions based off intricate relationships between said aspects.

Let's take `goto` as an example. The rule you will commonly see is "don't use it". But if you're experienced enough you know that's unnecessarily constraining, e.g. there's nothing wrong with a `goto` in C for the purpose of simplifying error handling within a function. It only becomes an issue when used liberally to jump across large swathes of code. But your advice to juniors should still just be "don't use it", otherwise your advice will have to be "use `goto` only when appropriate", which is meaningless.



I agree.

Also, that "unnecessary optimization" thing, can be a form of "bikeshedding."

Maybe the app is too slow, and the junior engineer knows Swift, so they spend a bunch of time, doing fairly worthless UI optimization, when they should have just gone to their manager, and said "The profiler shore do spend a mite of time in the engine. Maybe you should ask C++ Bob, if he can figure out how to speed it up."



I love this part: "It represents additional work, additional risk, and additional unnecessary complexity", because it could be "refactored" into "additional work, risk, and complexity". I assume it hasn't been, because (in the author's opinion) it communicates the intended meaning better - which might be the case with code, too. "Well-designed" is subjective.


Good design has objective and subjective elements. Or... it might be more accurate to say that it is entirely objective, but some/many elements are context-sensitive.

For example, a style of writing that is difficult to follow but rewarding to parse for the dedicated and skilled reader may be considered good. It is good at being an enjoyable reading puzzle. But from an accessibility standpoint, it's not a clear presentation of information, so it's not good.

Mostly we call things that are increasingly accessible well designed. But we're using a specific criterion of accessibility. It's a great criterion and it's one we should generally prioritize. But it's not the only facet of design.

In code, we generally could categorize high quality design as accessibility. Most engineers probably think of themselves as not really needing accessibility features (although how many are undiagnosed neurodivergent?), but writing code that is easy to read and parse and follow is an accessibility feature and an aspect of good design.



> Removing duplication is the introduction of a dependency. If this dependency is a good model of the problem then this deduplication is a good abstraction and may also be a simplification. Otherwise it's just compression in the guise of abstraction.

I think you're referring to coupling. Deduplicating code ends up coupling together code paths that are entirely unrelated, which ends up increasing the complexity of an implementation and increase the cognitive load required to interpret it.

This problem is further compounded when duplicate code is extracted to abstract and concrete classes instantiated by some factory, because some mastermind had to add a conditional to deduplicate code and they read somewhere that conditionals are for chumps and strategy patterns are cleaner.

Everyone parrots the "Don't Repeat Yourself" (DRY) rule of thumb and mindlessly claims duplicate code is bad, but those who endured the problems introduced by the DRY principle ended up coining the "Write Everything Twice" (WET) rule of thumb to mitigate those problems, for good reasons. I've lost count of all the shit-tier technical debt I had to endure because some mastermind saw two code blocks resembling the same shape and decided to extract a factory with a state pattern, turning two code blocks into 5 classes. Brilliant work not repeating yourself; it just required 3 times the code and 5 times the unit tests. Brilliant tradeoff.



> saw two code blocks resembling the same shape

Yeah, this is the crux of it. What exactly is duplicated code? Humans are pattern matching machines, we see rabbits in the clouds. Squint at any 4 lines of code and something might look duplicated.

On the other hand, code bases that do have true duplication (hundreds of duplicated lines, large blocks of code that are exactly duplicated 16 different times, multiple places and ways to interact with the database at differing layers) are not fun either.

It is a balance and trade-off, and it goes bad at either extreme. Further, there is a level of experience and knowledge needed to tell what exactly is a "duplicate block of code" (say, something that pulls the same data out of the database and does the same transform on it, is 20 lines long, and appears in 2, 3 or more places) versus things that merely look similar (rabbits in the clouds; they are all rabbits).



> Deduplicating code ends up coupling together code paths that are entirely unrelated, which ends up increasing the complexity

Code paths that may be unrelated. If they are related, then deduplicating is most definitely a good idea. If they're trying to do the same thing, it makes sense that they call the same function to do it. If they do completely different things that currently happen to involve some of the same lines of code, but they could become different in the future, then deduplication makes no sense.



I'm not really sure I know what you mean by "compression in the guise of abstraction". Re-usable code is a great way to isolate a discrete piece of logic / functionality to test and use in a repeatable manner. A sharable module is the simplest and often smallest form of abstraction.


Reusing a function C in functions A and B makes A and B dependent on C. If the definition of C changes, the definitions of A and B also change.

So this is more than reusing some lines of code. It's a statement that you want A and B to change automatically whenever C changes.

If this dependency is introduced purely out of a desire to reuse the lines of code that make up C, then I'm calling it compression. In my view, this is a bad and misleading form of abstraction if you can call it that at all.
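
In miniature (a trivial TypeScript illustration; names are placeholders): once a and b both call c, any change to c is automatically a change to both:

    function c(n: number): number { return n * 2; }    // the shared definition
    function a(n: number): number { return c(n) + 1; } // now depends on c
    function b(n: number): number { return c(n) - 1; } // now depends on c
    // Change c to n * 3 and the behavior of both a and b changes with it.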



> Reusing a function C in functions A and B makes A and B dependent on C. If the definition of C changes, the definitions of A and B also change.

To pile onto this example: in some cases the mastermind behind these blind deduplication changes doesn't notice that the somewhat similar code blocks reside in entirely different modules. Refactoring these code blocks into a shared function introduces a build-time dependency where previously there was none; as a result, at best your project takes longer to build because independent modules are now directly dependent, and at worst you have just introduced cyclic dependencies.



Unrelated modules often superficially look like each other at a point in time, I think that’s what the parent is referring to. An inexperienced developer will see this and think “I need to remove this duplication”. But then as the modules diverge from each other over time you end up with a giant complex interface that would be better off as 2 separate modules.

So deduplicating unnecessarily compresses the code without adding anything to code quality.



> Unrelated modules often superficially look like each other at a point in time, I think that’s what the parent is referring to.

Yes, exactly.



One thing I learned in industry is that the best cleanly written code does not survive contact with product. Sometimes the technical debt is encoded in the product requirements themselves, and no amount of refactoring or best practices can fix it, only mitigate the pain.


Sure. We see that all the time. Error handling and security are often afterthoughts, for example. We engineers tend to get "commissioned" to build quick proof of concepts and once the business sees their vision come to reality they don't want it to be thrown out and redone with architectural consideration, they want to rush it to market as the "MVP" as quick as humanly possible. This is a reality of our industry that we need to manage and find ways to cope with.

When it comes to certain architectural decisions, there are things that are very hard to change later on, and sometimes you don't know how the product will need to scale, or what challenges the engineering team will be faced with a year, two years, 5 years down the line. This is where we need to make our best guesses and we can't always get it right.

When it comes to the rest, making the code as simple and easy to change is the best we've got.

I was recently reminded by someone I look up to and admire that not everyone has the "gift" of being able to think "top down." This came up because I am often frustrated by the claim that "we don't have time to write this cleanly."

It's a frustrating claim to me, because I don't find that writing short, single responsibility code that is well labelled takes any time at all. I also spent time learning how to work TDD so I can write unit tests as part of the development and design process rather than having to go back and write tests after the fact.

But a lot of developers build "bottom up", where they write really long functions or methods, or dump everything into a single file and then think about separation of concerns, and layers and isolated "components" later on and thus need to spend a lot of lengthy time refactoring after the fact.

That initial "bottom up" process is the "getting it to work" part of the process, which is fun for most devs. But if they then need to refactor after it is tedious and laborious and not fun at all. And I think this is probably where the vast majority of the push-back and friction amongst devs really comes from if we look at it from a human point of view.

If you enjoy the process of DESIGNING software, and thinking top down, you won't find refactoring to be a chore or time consuming. You will understand how a well chosen abstraction can SIMPLIFY rather than complicate. But if you like to solve problems primarily, and to build things you haven't built before, and by "building" it means getting something that "works" up and running quickly ... then you don't want to spend your time reading books on design patterns, or refactoring your PR because the Principal told you it's not up to standards.



Top down and bottom up have little to do with using long functions and not separating code into multiple files. In top down, you start with a high-level overview of the system and gradually break it down into smaller and more detailed components, as it provides a clear and structured view of the system. In bottom up, you start with the individual components or modules and gradually combine them to create larger subsystems or the complete system. Each approach is suitable for a type of problem. I personally prefer to start a POC bottom up; then, once the general shape of the system emerges, a top-down rewrite is a good idea. I've seen lots of systems that started top down where the architecture was atrocious because the final shape wasn't known at the beginning; it was great on paper, though. There's no silver bullet; there are multiple ways to solve problems, and they all have their pros and cons. The takeaway is that applying a methodology dogmatically is not a good idea.


> I don't find that writing short, single responsibility code that is well labelled takes any time at all.

Writing it takes no time; designing it does.

> That initial "bottom up" process is the "getting it to work" part of the process, which is fun for most devs.

It's not about having fun: it's about getting the functionality to the first users, and then, finally, you can get some slack and have fun and refactor your code for future changes. But functionality is what sells: clean code is invisible to the users and thus to the business.

Clean code is for us developers, and only for us. We have the right to be egoists and work for our own sake from time to time, and working functionality is what will let the business close an eye to it for a while.



> It's not about having fun: it's about getting the functionality to the first users, and then, finally, you can get some slack and have fun and refactor your code for future changes.

You're speaking to business motivation, which I intentionally did not touch upon. Everything you said is valid.

What I'm speaking about is developer push-back against "clean code." And when a developer is advocating for the business, then it is business push-back.

Business push-back is valid because a) their business needs are valid and b) it is very difficult to correlate code quality with business value. There is a correlation, because developer velocity and ability to grow the feature-set and fix defects quickly and efficiently benefits the business as it reduces cost over time and benefits users. But that benefit is long-term and not immediately visible by end users. The business doesn't always want to pay for this up front which is their prerogative, and then we circle back to what you said about developers doing it in order to make our own lives easier because we know the day is going to come when the business gets frustrated that velocity was extremely high at the beginning of the project life-cycle but has ground to a halt.

In any event, developer push back is real, and is not tied to business values but to personal developer values. I see code comments (which IMO are smells and point to smells) that say things like "// no point in extracting this anonymous function as it is only a few lines long" ... meanwhile doing so would have taken no extra time at all and it is buried in the middle of a 300 line Promise chain or rjxs stream etc.



To see how clean code affects users you need to look beyond its direct, first order effects. Clean code is less likely to be buggy, is more malleable and performs better. That's how clean code benefits users, they get to enjoy faster software with fewer bugs and more features.


"Clean code" can mean two things: (1) someone's subjective feeling that a certain piece of code is nice, and (2) following the (majority of) the guidelines in the book Clean Code and related.

What people have started feeling, myself included, is that (2) and (1) are fundamentally different in many places. That is, that following some of the guidelines of Clean Code the book produces code that is very much not clean.

So sure, everyone agrees that clean code is better, almost by definition. There are indeed valid (usually business/time) reasons to go for code you know is messy, but those are all external - in an ideal world, everyone wants to write clean code. But, increasingly, people don't feel that Clean Code is a good way to achieve that. In fact, many of the patterns in Clean Code have ended up being considered sources of messy code by many people - over-abstraction and de-duplication perhaps chief amongst them.



After numerous failed attempts at reading Clean Code, I'd say (1) and (2) are very much at odds.


If you don't mind, I'd like to propose an alternative list:

- Most of our abstractions are bad, even those we think are good. The good ones are largely already in our libraries (although even that isn't always the case unfortunately).

- Bad abstractions are a time sink.

- Writing code is also about communicating with other humans (on the team), so what they think about communication matters a lot more than what I think



> Bad abstractions are a time sink

Yes, but contrary to the general perception, I find them to be far less of a time sink than duplicated code. It takes almost no time at all to bypass or rework a bad abstraction, but code duplication necessitates either a literal multiplication of your effort wherever you need to add features to duplicated code, or refactoring the duplication into a non-duplicated form, which also takes a lot longer than dealing with a bad abstraction. Abstractions are cheap and disposable, even if they're bad. Duplication is expensive and hard to get rid of.



Abstractions are not cheap and disposable. If 5 places use one abstraction, it takes at least 5x (1) the effort to dispose of it compared to changing one of those places.

(1) At least 5x, because sometimes it's worse. Example: let's say you are using a bad ORM which, in addition, doesn't expose its connection pool in any reusable way (most don't). You want to introduce a good ORM, but you can't, because duplicating the size of the connection pool is too expensive. Additionally, because the ORM triggers events when objects are created and modified, and database state relies on that, you cannot use the other ORM for object creation and modification unless you replace them all at once. So you would have to refactor your code to introduce your own data access layer that has implementations for both ORMs, then switch over all at once.

In contrast, introducing deduplication later is somewhat easier. You can build an abstraction and test it with 1 or 2 of the existing parts of the code, then migrate the rest incrementally instead of all-at-once. It will also prevent some bad (expensive) aspects of the design due to the requirement that it must be incrementally adoptable (must be able to have a reusable connection pool, demand a more reliable change events system etc) and as a corollary it will also be incrementally removable.



"somewhat" easier is debatable.

The bad situations caused by the costs of duplication are often far worse. If those code paths had slightly different needs, they would diverge. It often feels like 90% of my career has been taking on massive undertakings to learn an entire system, just to ultimately pick these out after the fact and replace them with something reasonable.

I don't have data to support any absolute claims of what is better vs not. However, every project, team, company, industry that I have worked on has been filling my time with the above scenario. Not sure what universal truth could be pulled from that (if any)



> Otherwise we could stick to fixed circuits which are FAR easier and cheaper to implement and maintain.

While this is not the key point of your post, this in particular is also not the case. With even a little complexity, it quickly becomes much easier and cheaper to implement functionality in Software on a generic CPU than in fixed circuits.

For example, we often integrate simple CPU in products that otherwise are fixed circuits, just to perform single functions as they would be too resource intensive and hard to verify. The CPU then becomes a slave to the surrounding circuit.



People never think about complexity as a tool, but it's as much of a tool as the lowly assumption. If anything, they're two sides of the same coin.

You can make things simple by making assumptions, you can enable cool things by making things complex.

The thing about assumptions is that they're dangerous. Easy example: assuming the timezone in a db column vastly simplifies things, but if you're ever wrong, things can go terribly wrong.

Which is why some of the most dangerous developers on your team are the ones who either don't _respect_ assumptions or don't even realize they're building in assumptions.

The flip side is true for complexity. You can absolutely harness complexity to great effect, but if you have developers who don't realize they're building in complexity or don't respect the cost of that complexity, you have dangerous developers.



I think the broader issue is that there is little valuable empirical evidence that the stuff people say is clean code actually makes things simpler. Likewise, I find it unlikely that there is one way to make things simpler and that it's the way associated with the book. It's all vibes.


I like to keep in mind a companion rule along with DRY (don't repeat yourself): Don't make the reader repeat themself, i.e. re-read your code multiple times after you've cleverly repackaged it.


What a giant pile of “no true Scotsman”

The purpose of clean code and the results of practicing it, it turns out, are vastly different. The reason for that is because people are generally incapable of predicting any future state of their code even if they have the requirements in front of them.

Clean code necessitates that dreadful question of “what if”, which is a horrific Pandora’s box of never getting your shit out the door.

It is nearly always better to code for what you know, and to have a person experienced in the domain to otherwise guide the architecture.



I never made the argument of "no true developer." I pointed to specific reasons cited when push-back is given, and then explained why I disagree with those points.

> Clean code necessitates that dreadful question of “what if”,

Nope. Not even a little. "Clean code" can be as simple as keeping your moving parts isolated and loosely coupled so that they are easier to debug, to reason about and to reuse in new contexts if needed.

What you're talking about, the "what if" game ... is what Martin Fowler termed "Speculative Generality" in his book "Refactoring" when cataloguing code smells and sources of complexity. In other words, the exact opposite of what I'm talking about.



Sorry, but your entire post is a “no true Scotsman”.

>if the code is not maintainable, then it’s not real clean code”

If this isn’t a word for word prime example of “no true Scotsman”, I don’t know what is.



Yup, Clean Code is to make the life of $NEXT_PERSON easier.

The value put on this varies from company to company.

Some require it, others will balk at not releasing a feature because you want to tidy the code up from being prototype quality.



It's a good idea to remember that $NEXT_PERSON could quite possibly be you a year from now, when you've probably forgotten what you were on about previously. There are selfish reasons for clean code as well!


A Philosophy of Software Design is a much better book discussing these ideas than Clean Code. Part of the problem is that Uncle Bob's book is now tied up with the idea of clean code, and his book sucks.


Your colleague wrote lots of code using copy and paste. You refactored it post commit. He complained to your boss, who had a go at you, and now you're going to leave mess in the codebase next time.

There's a lesson to be taken from this experience but it isn't that copy&paste is better than writing functions.

Your refactor should have gone up as a change to be reviewed, tagging the author. However the escalation to boss who tells you to revert is a really bad sign, as opposed to your colleague talking to you directly.



I guess everybody has a different way of doing things. Personally I'd ask my colleague on Slack (privately), "Hey, you repeat 10 lines of math in each of these functions, wouldn't it make sense to make it a separate function?".

The two probable answers are: "No, because... (reason)" or, less likely, "You're right, I'll do it tomorrow". No fuss, no problems for anyone, and someone can learn a thing or two.



It also doesn't waste an entire evening obsessing over a refactor. A lesson I've learned from refactoring my own code.


If it is copy-paste out of laziness, then certainly writing functions (abstractions) would be the better solution for the repetition (DRY).

But the author adds crucial context:

> My code traded the ability to change requirements for reduced duplication, and it was not a good trade. For example, we later needed many special cases and behaviors for different handles on different shapes.

It shows the difference between *accidental DRY* and *inherent DRY*. The added context makes it clear that what initially looked *inherent* was really *accidental*. Quite often, code is the same because the underlying libs A and B are consistent with each other. Or business needs state that something has to be 15 items large, and another thing also has to be 15 items large. And so on. But when we look closely, the libs' consistency is really just chance: our wrapper around lib A and our wrapper around lib B may contain the same code, but they are wrapping something entirely different, and abstracting that makes the result worse. Same with the "15 items". When business states "every list must always show 15 items", then sure, we should abstract the 15 somehow. But if it says "the top list on the homepage shows 15 songs" and "the comments under a song show 15 items by default", then abstracting that 15 is the worse option.
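
A tiny sketch of that distinction (constant names are mine, not from any real codebase): two 15s that are only coincidentally equal deserve two names, not one shared abstraction.

    // Inherent: one business rule, one constant
    const LIST_PAGE_SIZE = 15; // "every list must always show 15 items"

    // Accidental: two unrelated rules that merely agree today
    const HOMEPAGE_TOP_SONGS = 15;         // may become 20 next quarter
    const DEFAULT_COMMENT_PAGE_SIZE = 15;  // may become user-configurable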



Without knowing too much about the context, it probably would have been possible to keep the structure where every shape is structurally independent from other shapes, but where calculations are abstracted away in pure functions that are just being called by the shapes. That keeps the maths in one place (and you can always add more calculation functions later if needed), but it doesn't entangle different shapes with each other needlessly.


For me the biggest red flag in all of this is changing a coworker submission without any kind of communication.

Second red flag is that the refactor is not a good refactor, it lacks modularity and is still coupled, yes, you reduced some duplication but that's mostly it.



> tagging the author

Sounds like trouble to me.

“Hey I rewrote what you wrote because it sucked. Got problems with that?”

I suppose the best solution to this problem would be a “late review” asking for an explanation, offering a solution, and hoping it doesn’t just get ignored, which is the easiest thing to do for the author.



> Hey I rewrote what you wrote because it sucked. Got problems with that?

Reading it like that is just insecurity speaking. If anyone wants to rewrite my code to be more readable, I'd love for them to tag me in the review and show me how they did it, so I can learn. Assuming good intentions and being open to discussion goes a long way :)

I'll always tag the author for review if I'm rewriting something recent. It's more "Hey, I was too late to mention this in the review, but I think this improvement was worth the time. Let me know what you think in case I missed something important about the original."



If someone had just summarily rewritten my code, I would personally go talk to them and find out what's up, as it shows a communication breakdown. Second, the person who did it should have approached the other dev and said "hey, I see you have blah. Why? Wouldn't this way be better?"

This happens many times in codebases. People begin to 'own' sections of it and they become very protective of what is going on in their area. They take it very personally if something changes and they were not in on it.



It's a context thing. The point of adding the author as a reviewer is to keep them in the loop, show an alternative solution, and discuss which way to go. Also to learn why they went the copy/paste route in the first place: it might be carefully reasoned, it might be a rush job, they might not know any alternatives.


Also to put the conversation on record somewhere so that if someone comes later with the same complaint they can see why the code was written like that.


I wouldn't necessarily tag the author for a review. I might ping them to notify that I was doing some refactoring in the area and leave it at that. Of course it all depends on the team size, diversity/range of work, typical interactions between members, etc.


I don't think a minor deduplication justifies a separate discussion even before a single line is changed. That would make a huge deal out of it and waste everyone's time.

I probably wouldn't make a separate PR for that, just refactor the code the next time I need to touch it, and tag the author for review of course.



There was a significant bit at the end of the article where the author discussed future changes to the codebase that required special-casing handles for different shapes, and how that would have been difficult with the refactor but simple with the repetitive code.

I think the critical bit is realizing that the code will change in the future. Requirements will change, related code will change. If you make abstractions that restrict that ability to change, you're hurting future you's ability to adapt the codebase. More so because we often hold on, emotionally, to these abstractions we've invested so much time and effort into, and we waste more time trying to elegantly fit the changes into our elegant solution.

KISS



It all seems like a sign of social dysfunction to me. That this blog post lacks most of the important details that have nothing to do with the code.


I'm going to go out on a limb and guess that their "reason 1" was the sole source of the issue. This smells like an instance of the "nobody likes a know-it-all" problem that often arises with junior devs who haven't yet learned that their job isn't just between them and the computer, but between them, the computer, and other people: the cleaner version likely would have replaced the dirty version had the blog's author actually had a dialog with the author of the dirty code and mutually decided that there was a better way.

The article likely would be better titled as "I learned a lesson about working on a team", not "goodbye, clean code".



> instance of the "nobody likes a know-it-all" problem

Excellent insight. I think this exactly captures the dev environment. The more complicated code was viewed as a threat, the rival worth taking down a peg or two via management chain.

For juniors reading this who have the same experience. "Nobody likes a know-it-all" is context dependent. It's absolutely true of lots of professional settings. Simplicity over flexibility, avoid confusing language constructs, preference for lots of time at the keyboard and languages that encourage this.

The lifelong learning, software craftsman, reflect on the process philosophy advocated in various places is completely at odds with that setup. Trying to get better is unpopular with those who are not improving.

Both are valid life choices. Software has been a comfortable place to put in your seven hours at the office and go home. It's also a world to explore bounded only by your imagination and intellect. Beware spending decades working with those who fear the unknown if you'd like to be one who discovers novelty.

Or more concisely, if your incentives are misaligned with those of your colleagues, consider getting out of there.



I think the title is fine. It's a classic for good reasons and has been an eye-opener to a lot of devs falling into this exact trap, which you can do even if you're working alone. You overengineer a clean abstraction just to get bitten by it in the future.


> This smells like an instance of the "nobody likes a know-it-all" problem that often arises with junior devs who haven't yet learned that their job isn't just between them and the computer, but them, the computer, and other people

Meh, it is usually seniors on a power trip who are guilty of this.



The problem here is that changes are committed directly to the main branch without a review. There needs to be some sort of peer review (I don't care if via pairing or with a classic PR) for every code change exactly to prevent an escalation like experienced by the author.

The boss was a bad boss, instead of reprimanding you, they should have realised that they were missing a good process. In a team with a code review culture, an overeager newcomer might attempt something like this (and tbh, whether it was a good attempt or not is impossible to judge without knowing the code and its context) and a peer would have said "yeah, thanks but no" or "good idea, but let's try it like this instead", etc.

Independently from that, I do think that one shouldn't just refactor code without good reason because even if it's bad code, it might be code that rarely needs modification. The best time for refactoring is when you need to change behaviour and you realise that the old code structure is making things too hard.

Unfortunately, I think the wrong lesson was learned here.



Code review after merge is perfectly fine. Better, in some ways.


Could you elaborate on how code review after merge is fine or better?

Some major downsides to review after merge:

- cost of context switching (the author has moved on to something new, which now gets paused to "go back", so there's no agility benefit to just merging if it works)

- increases risk of unnecessary conflicts (how do you address someone merging something, you have feedback, then someone else merges after on top? A PR helps resolve code that's done vs could be improved because it forces a communication moment between the authors)

- tooling (a PR or diff is well supported. How are you discussing feedback when everyone can just merge on top without review? I am assuming there's no point to a PR if everyone can just merge)

- decreases shared learning and understanding (I might think the code follows our standards but there still may be feedback from my team that could help improve. Why put that in the main branch before such feedback? It seems like it would be hard to keep track.)

I can't imagine my team performing well under those circumstances and I think we have a very healthy code review / quality culture. If I'm not giving or receiving feedback - that sounds more like code slinging than thinking and humility, even for the most experienced architects I've worked with welcome feedback, so it's not a matter of trust.



I haven't used this workflow but I imagine your concerns could be addressed.

> addressing merge conflicts

I actually think, from the pov of the change author, this workflow is better at this. Other code changes have to resolve conflicts with yours, not the other way around. The followup changes from review feedback can begin with conflicts addressed for you.

> tooling

I don't know how other platforms handle this, but on GitHub at least there is nothing stopping you from reviewing a merged PR. You can prevent pushing straight to the trunk while still allowing authors to merge their own PRs at will.

I do think your other points are clear drawbacks but on its face the practice doesn't seem without merit. Seems like the "show" point on the ship/show/ask spectrum.



That may work in some cases, definitely a no go in a workflow where deployment is fully automated via GitOps, i.e. every push to the default branch triggers a deployment to prod.


What's the point of code review after merging?


The same as reviewing at merge time.


Why is everyone stating what he should have done instead? This is a perfectly legitimate lesson.

In finance, we are constantly dealing with products which are somewhat alike but not quite. Equity options vs FX options? Sure, they're options, they have a strike, we price them using some version of Black-Scholes, etc. But oh boy, they are so different.

The temptation for new joiners to factorize a lot of behaviors is huge, and many times we are in the shoes of the OP's boss, telling the younger devs to remain calm and keep things separate.

It's not about saying goodbye to clean code, it's about actually maintaining a clean code by not overabstracting.



> Why is everyone stating what he should have done instead? ... In finance, we are constantly dealing with products which are somewhat alike but not quite.

Yes, but we are talking about geometry. What the (usual) handles do is apply affine transformations (translation, rotation, scaling and mirroring); if the scaling is uniform, these are similarity transformations. And they are guaranteed to work for any shape. Which leads to the real problem: you can't (well, shouldn't) abstract something if you don't have enough knowledge about it, especially about what new possibilities may show up in the future. For example, if somebody thinks that the only possible ovals are circles and ellipses, they will be quite surprised when the first "real" oval shows up.
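
To make that concrete, a hedged JavaScript sketch (function and data shapes are mine, not the article's): precisely because an affine transform is guaranteed to work on any shape's points, the math can live in one pure function without coupling the shapes to each other.

    // 2D affine transform given as [a, b, c, d, tx, ty]
    function applyAffine([a, b, c, d, tx, ty], points) {
      return points.map(([x, y]) => [a * x + b * y + tx, c * x + d * y + ty]);
    }

    // Uniform scaling by 2 about the origin works for a triangle's vertices,
    // an oval's control points, or anything else:
    const scaled = applyAffine([2, 0, 0, 2, 0, 0], [[0, 0], [1, 0], [1, 1]]);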



It's funny that you phrase it like that, because this exact kind of simple geometry is my go-to example for why this stuff is not so simple. Let's look at a square. Everyone knows a square is a rectangle, right? So of course a square should extend from a rectangle.

But math doesn't deal with mutability all that much. It's interested in the visible properties and constraints. A square is a rectangle because it meets all the constraints and has all the properties of a rectangle. But that is no longer true if the shapes are mutable. A rectangle might appropriately have `setWidth` and `setHeight` operations. A square cannot implement these operations and still obey Liskov substitution, without the ability to downgrade its type to a rectangle. In OOP you might correct this by making the square immutable and the `setWidth` and `setHeight` operations would return a rectangle instead of a square.

To bring this specifically to this use case: Yes, it's a defined mathematical operation to scale a shape such as a square on the x-axis. But, if you do, that square is now a rectangle. This may be important to the implementation, it may not. But it's certainly a relevant technical concern.
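
A minimal sketch of that immutable fix (class and method names are invented): since nothing mutates in place, "setting" a square's width can honestly return a rectangle, and Liskov substitution survives.

    class Rect {
      constructor(w, h) { this.w = w; this.h = h; }
      withWidth(w) { return w === this.h ? new Square(w) : new Rect(w, this.h); }
      withHeight(h) { return h === this.w ? new Square(h) : new Rect(this.w, h); }
    }

    class Square extends Rect {
      constructor(side) { super(side, side); }
    }

    const r = new Square(2).withWidth(3); // now a Rect(3, 2), not a Square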



No, a square is a rhombus and a kite, same as a rectangle is a kite, so both should be kites. And both are also trapezoids, so they should be trapezoids too. And parallelograms. And of course all of these are quadrilaterals. The idea of geometric objects in a hierarchy leads to either incorrectness or problems in the implementation (most of the time both ;). Just don't do that.


Which goes back to finance, because rewriting a Black-Scholes pricer every time one instrument needs it would be insane. And yet, that's what this code seems to do.
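
For the curious, here is the kind of shared math being referred to, as an illustrative (definitely not production-grade) JavaScript sketch: the Black-Scholes call price lives in one pure function, and each instrument keeps its own thin wrapper with its own quirks.

    // Standard normal CDF via an Abramowitz-Stegun style approximation
    function normCdf(x) {
      const t = 1 / (1 + 0.2316419 * Math.abs(x));
      const d = 0.3989423 * Math.exp((-x * x) / 2);
      const p = d * t * (0.3193815 + t * (-0.3565638 +
                t * (1.781478 + t * (-1.821256 + t * 1.330274))));
      return x >= 0 ? 1 - p : p;
    }

    // One shared pricer; equity and FX wrappers would differ only in how
    // they assemble the inputs (dividends, carry, quote conventions, ...)
    function blackScholesCall(S, K, r, sigma, T) {
      const d1 = (Math.log(S / K) + (r + (sigma * sigma) / 2) * T) /
                 (sigma * Math.sqrt(T));
      const d2 = d1 - sigma * Math.sqrt(T);
      return S * normCdf(d1) - K * Math.exp(-r * T) * normCdf(d2);
    }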


> It's not about saying goodbye to clean code, it's about actually maintaining a clean code by not overabstracting.

But the title says so! :p

I get the whole point of this post, but it can easily be read as an excuse for actually messy (as opposed to "non-clean") code. The author doesn't actively distinguish them, so others do.



Agreed, it's quite telling that everyone in here feels like they know better than his team mates and him (at the time of writing the article), with no other context than some pseudocode.

DRY has value but abstractions come at a cost. It's always a tradeoff.



Part of the reason I enjoy languages like Haskell is because of the abstraction maximalism enshrined at the language level, rather than the coding level.

We all agree on what an Applicative is supposed to do. The challenge shifts right in the idea-to-code pipeline to finding the right Applicative for the problem. Most OO design patterns have natural, well defined equivalents in such an environment, but the difference is this way helps minimize the amount of project specific abstractions you have to deal with.

The downside as a beginner is of course a greater upfront cost to learning the language, and quite frequently opening up another Haskeller's work and realizing there are like 12 compiler extensions you've never heard of that you now need to grok. But even there I would argue that's a net win: you can carry those compiler extensions with you to all projects in the future. (The downside as an expert is realizing you have to get a PhD to get a sense for how much RAM your cats will gobble up!)



We know what Applicative and Monad (forgetting the burrito) do because they have mathematical LAWS. If a piece of code uses Applicative or Monad, you know its behaviour due to the Applicative/Monad mathematical laws.

No OO design pattern has a law; they are loosely defined concepts/patterns, which means they are open to personal interpretation/customisation, with no concrete law to say exactly what a pattern is and what its expected behaviour/output should be.



> If a piece of code uses Applicative or Monad you know it’s behaviour due to the Applicative/Monad mathematical laws.

Well, we actually only know what it is supposed to do. The laws are not enforced, so it is perfectly possible to write a type class instance that violates the laws of the type class.



Several test frameworks implement Law Testing which will test your instances for lawfulness as long as you can generate instances of your data structures. E.g. https://hackage.haskell.org/package/hedgehog-classes . So you only need to be able to generate instances and you get comprehensive tests for free, so that's nice too.

It is also the case that these very abstract classes typically have only a few implementations that make any kind of sense type-wise, and the 'obvious' one is correct.

(Just to preempt: Yes, tests aren't proof, but such is life without a full proof system in the language. In practice, I've found property-based testing sufficient.)



Oh, I know and I'm not arguing against anything you've said. I just wanted to express - for people who don't know Haskell - that while the laws exist, they are not (as of now ;) enforced or checked by the type checker.


No worries, I was also just trying to add info.


Your observation reminds me of Philip Wadler's Strange Loop talk [1], where he states that some languages are discovered, whereas most languages are merely constructed.

I think he would have agreed that Haskell (or at least, languages based around lambda calculus) is more in the first category, whereas Java, C++, Smalltalk, and JavaScript (i.e. OO languages) are in the second. Computational fundamentals versus engineering problems.

[1] https://thestrangeloop.com/2015/propositions-as-types.html



But Monads are like what you compose. The equivalent in OO would be the semicolon. You still need business logic that follows messy life / human rules no matter what paradigm you use.


I think these are two wildly opposed worlds. What the industry calls OO is just a way to try to avoid large business failure. Other paradigms were more about computation; the logical basis is way, way deeper. Those people like to own all computational layers very precisely, from parsing to compiling to algorithmic analysis. Enterprise/application coding is not about that; it's about bridging the customer/IT gap with something not too horrendous and still adaptable enough to handle market chaos.


I am not seeing how these claims hold up in practice.

> law to say exactly what it is and its expected behaviour/output

What do the Monad laws say about the expected _behaviors_ of some Monad?



They specify part of the behavior. But at least it always tastes like a burrito!


As a fellow haskeller, I agree on that the language has very good abstraction capabilities.

On the specific example in the blog post though I can only say that there are so many times I've been down the clean code path and it's more often not another person, but the future you, into whose foot you are shooting. Abstraction is not beneficial when you need to study it later to understand it again.



> Abstraction is not beneficial when you need to study it later to understand it again.

This is where a good study routine comes in. I study Haskell like I study all things of any long-term importance, using an Anki deck. It takes me longer to start using a new abstraction, but once I have it, it's pretty much there for life, due to my daily commitment of half an hour or so to my reviews. It makes far less sense, of course, to do this for a one-off abstraction I had to apply to a specific project, so over the years this naturally tilted me towards working more and more in languages like Haskell.

(N.B., I do not state this as a prescription. I think this is well above and beyond what most people expect of themselves professionally, and that is okay. Everything in life comes with tradeoffs.)



This is really interesting to me. What exactly do you write on the Anki cards? I cannot really imagine learning a Haskell abstraction by mere memorization. How can you learn it without actually applying it to a real problem? Oftentimes the hard part is knowing when to use which abstraction. How do you learn this using flashcards?


What you detail is pretty much how you would recognize it. You write out a brief summary of the high level problem, and then ask yourself "which abstraction would you reach for first, absent any other information?"

The latter part is often implied, but it's best to make it explicit. You're trying to build up expert intuition, which means allowing for the fractal nature of software development to mean you might be in the 20% of times where this won't work for you.



The OP would have done better to use more OOP principles, unless those are eschewed by the team in general, in which case I'd probably work elsewhere. The place where OOP fits most naturally is the implementation of GUI widgets and their editors. I've written a few myself.

The 'rule' that unifies OOP's other rules is tell-don't-ask. So for the example at hand, we have the handles, which are shared in a known way, plus (I presume) shape-specific control points (e.g. a circle may have center and radius). Then a change of a box handle could call an updateControlPoints(changedHandle). There would also be an updateBoxHandles(changedControlPoint). There may be other clean splits, but that's how I'd do it.
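
A rough sketch of that design (class and method names are my guesses, not the OP's actual code): the handle tells its shape "I moved", and the shape alone decides what that means.

    class Handle {
      constructor(shape, role) { this.shape = shape; this.role = role; }
      dragTo(x, y) {
        // tell-don't-ask: the handle never inspects the shape's internals
        this.shape.updateControlPoints(this, x, y);
      }
    }

    class Circle {
      constructor(cx, cy, radius) { Object.assign(this, { cx, cy, radius }); }
      updateControlPoints(changedHandle, x, y) {
        if (changedHandle.role === "radius") {
          this.radius = Math.hypot(x - this.cx, y - this.cy);
        }
        // center handle, Shift-constrained drags, etc. would go here
      }
    }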



> We all agree on what an Applicative is supposed to do.

Maybe. But not on which parsing library to use (parsec? megaparsec? attoparsec? regex-applicative?), on whether to use Data.Text.Strict or Data.Text.Lazy, on how point-free the code should be, on whether to use "." or ">>>" or any other combinators, on which language extensions to use...

I like Haskell, but it suffers from similar problems as any other language: people can't really agree on the best way to do things



> The downside as an expert is realizing you have to get a PhD to get a sense for how much RAM your cats will gobble up.

Go one step further with affine types and use Rust. You get haskell type system with predictable performance.



> You get haskell type system

No. When trying to use Rust's traits like Haskell you learn really fast why higher kinded types are needed.



Oh, wait, does Rust not have kinds? That's a bummer.


Rust has GATs, which have the same expressivity as HKTs. But many Haskell patterns turn out to be unidiomatic in Rust due to extra overhead. (For instance, Rust has multiple function-like traits that access their environment differently. There's no equivalent to this in Haskell, of course; GC and default boxing choices for Haskell types obviate the issue, but this is not zero cost.)


I know, I know. It's very silly of me not to pick up Rust when I've already got all of its forebears in my bones. Someday, once I've finished studying everything else I need.


oh, oh my. The golang opposite where you don't have language extensions and should be able to grok everything about the language in your head all at once... yeah I'm going with "simpler language".


I'm not dissing Golang, I use it actively, both at work and in my own projects. There's a lot to like about it.

There's also a lot to like about Haskell. Complaining about something which you don't understand isn't a good look on anyone, though.



As a freelancer and consultant I’ve seen this drive or urge for simplifying coding go to the extreme to become pointless or even very damaging at a larger scale.

An idea that sometimes pops up among ambitious people is usually centered around making programming easier or more foolproof by developing a platform that would supposedly make it easier and safer to build software systems within an enterprise. What these people seldom realize is the effort needed to build a good platform from scratch versus finding and contributing to a suitable OSS project.

This is what has happened at a client I'm working for now. An architect thought it would be a great idea to develop such a platform with multiple components: a runtime, a visual designer that creates flow diagrams which are then transformed into code, integrations with databases, and lots of other features based on code generation from configuration files.

After some years of development, this platform has only managed to create hurdles for the only project that has tried to adopt it. The documentation is outdated and lacking, the extensibility is lacking, the performance is not great, there is one consultant who is still developing it and charges a fortune, and other teams avoid it like the plague. While this has been going on, other great open source alternatives have been developed or matured.



You could have just moved the repetitive math into separate functions and called them from the resize functions.

The resize functions themselves only looked repetitive because they were forming an interface, which is why refactoring them away was wrong. But the math they performed is a perfectly valid target for cleaning the code.
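
Something like this hedged sketch (helper names invented; the method signatures follow the article's pseudocode): the per-shape interface survives untouched while the math is shared.

    // Shared, pure math helpers: fine to deduplicate
    const movePoint = ([x, y], dx, dy) => [x + dx, y + dy];
    const growSize = ([w, h], dw, dh) => [w + dw, h + dh];

    // Each shape keeps its own independent resize methods: the interface
    const Oval = {
      resizeLeft(position, size, preserveAspect, dx, dy) {
        return { position: movePoint(position, dx, 0),
                 size: growSize(size, -dx, 0) };
      },
      // resizeTop, resizeRight, ... stay separate but reuse the helpers,
      // so a later special case only touches the one method it concerns
    };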



Exactly. Seems like the obvious way to go and this is what I typically do.

The end result is of course a huge bag of small, pure functions, but they're easy to test and refactor so it's never an issue.



And you can re-inline and modify the special cases in the future if necessary.


I feel like what you’re describing is bad abstraction. This is the real issue, people designing things off two or three examples, not making sure that what they have is general enough to accommodate future usage.


In particular, a good approach to this is to abstract the common parts to all those into just some very simple functions. Abstract bottom to top, not top to bottom. There is no need to create a single point where everything happens and deals with the 2^n combinations.

You still reduce repetition, you can still change stuff in a single point, but you retain the ability to make custom changes if needed.



> Abstract bottom to top, not top to bottom.

Amen. With the caveat that this is a guideline, not a rule.



Yes, the problem in the real world is that no one KNOWS they're writing a bad abstraction at the time they're writing it, otherwise they wouldn't write it.

Therefore it's insufficient to say things like "avoid bad abstractions" or "make sure you accommodate future usage". No one can predict the future, generally speaking.

Abstraction should be used with extreme caution, even when you feel that it's probably right. Be an extreme skeptic.



I completely agree. The issue with the "clean" version of the code is that it's completely coupled together. If you touch one thing in the math code, it affects everything. At some point, it becomes easier to keep some duplicated code, point out the similarity using some code structure cues or comments, and over time, highlight any key differences using more comments.

A "clean code" approach to this kind of code doesn't handle evolving requirements well. It always ends up as a mess of helper functions with multiple code paths within depending on where the function is called from. This can sometimes be abstracted: some people might have noticed already the previous sentence is describing OOP polymorphism. But how would anyone know the exact requirements ahead of time? If we're in such an early phase, any attempt to "clean code" will only result in a bad abstraction, or in overengineering. The right approach is to do nothing, note the possible issue, and wait until more is known about the problem domain.



We could call it The Premature Abstraction Anti-pattern :)

I've been a victim of this myself a few times, so I normally do exactly what you've suggested and wait until a clear, still-performant abstraction has emerged from the code, then slowly refactor it in. But it's always better to do this on a nearly completed codebase, at the end, where you should naturally be looking to remove lines, not add them.



This is why I've grown to like discovering patterns based on changes - new features and bugs being fixed.

I wouldn't worry too much about somewhat duplicated, but easily understood code, like the original example.

I would however worry if I have to fix the same issue in 3 places, or add code for a new feature in 3 places. That's when I start wondering if I can push the code I need to change or add into a common place somehow.



There is no need to accommodate for future usage. Most of the issues resulting from abstractions actually come from people treating abstractions as sacred, trying to adapt abstractions used by some piece of code to fit new use cases, when they should be treating abstractions as disposable instead, swapping them out and creating new ones instead of altering existing ones. If the abstraction that you come up with based on the first few examples is not suitable for new use cases, that's completely fine.

As far as generality is concerned, it's not the number of examples that matters. Once you learn to distinguish whether your duplicated code actually duplicates some piece of knowledge or just happens to resemble another piece of code by happenstance, you stop having to rely on an unreliable magic number as your limit on how many instances of duplication to allow before abstracting something. And that skill probably correlates strongly with the understanding that abstractions are not just macros that make it more convenient to repeat some chunk of code, they actually represent knowledge, and thus their primary use is not to eliminate duplication, but to encode an understanding of a problem domain into the code.



This is completely right. People read about design patterns or DRY, usually apply them wrong since they're still learning... and then act like it's the source material's fault?


Well, it kinda is? Teaching materials should show how not to refactor, too.

Also, have you seen the refactors in Clean Code?

The guy refactors thread-safe code by introducing static variables which make the code not thread-safe, but hey! At least it is a little bit shorter!



Yup this is the worst part of Clean Code. I've seen real-life bugs caused by this kind of refactoring.

The best part is that the same commit added unit tests so our coverage grew over the magic 80% bar, but of course unit tests don't usually test for race conditions.



The problem is that the source material (the Clean Code book in this case) has almost no caveats. And it rarely acknowledges other styles or points of view. For example, when the book discusses possible objections to having "lots of small functions", it simply doubles down, suggesting also having tons of small classes, and rationalizes it. Experienced developers know that reality is a bit more nuanced than that.

This is actually a big reason for the popularity of the book, showing a "one true way".

And this carried on to followers of the style. Some linters even enforce the style without much regard for the practicality of it.



If there is a source material that many people are "applying wrong", then I'd question the quality of that source material.


Design patterns, for example, really should only be studied once you have quite a bit of experience with complex code bases under your belt. You need to have done battle with some nasty code problems in bigger code bases before you really understand the problems design patterns are trying to solve. Inexperienced developers can't be trusted to apply them because they don't have that judgement. It just looks like a shiny thing.


True for so much of compsci stuff. You won't understand regular languages, automata, grammars etc until you have done a fair bit of pattern grokking yourself.


Imagine if math books told you that you should ALWAYS solve for X and if somebody tells you to solve for Y they are a bad mathematician :)

That's Clean Code.



I don't see how that's a fair characterization of a book that has this quote in the first chapter:

> Many of the recommendations in this book are controversial. You will probably not agree with all of them. You might violently disagree with some of them. That’s fine. We can’t claim final authority. On the other hand, the recommendations in this book are things that we have thought long and hard about. We have learned them through decades of experience and repeated trial and error. So whether you agree or disagree, it would be a shame if you did not see, and respect, our point of view.

It's been a while since I've read Clean Code, but I seem to recall it stated many times that blindly applying the rules of Clean Code without good justification would lead to bad code. The author even provides examples of this. People in this thread are criticising Clean Code principles as if they are meant to be a rigidly enforced dogma. They aren't, and the author never intended them to be so.



I think that quote is a good example of why it is a fair characterization. It uses the authors’ seniority to argue from authority, even explicitly requesting respect.

To a beginner, it reads like ”these are subjective matters so experience is king, and we have more experience than you do”.



"Respecting my experience" does not translate to me as "do everything exactly accordingly to these strict rules". To me it says "consider my opinions before doing something different". Consider. Not follow blindly. I can see how one may interpret it as the first if they read the quote in isolation, but certainly not in the context of the book. Which, as mentioned before, goes out of its way to state these rules are more like guidelines, and gives examples of where strict adherence causes worse code.


Well, based on this disclaimer, this is a book that never should be recommended to somebody learning how to program.

And yet, that's clearly their target audience.



Is it? The book requires being quite familiar with programming already. Perhaps this is me projecting my own experience of when I read it, but I feel like the book is targeted at someone in the late phase of being a junior dev and/or the early phase of being an intermediate dev.

I personally derived a lot of value from reading the book, and I feel like my skills noticably improved from before to after. So perhaps my own biases are showing, but I believe discounting the book's contents wholesale is a mistake. There is a lot of value to be garnered from reading it. The book doesn't have to be the infallible word of the Software Gods for it to be useful.



A junior dev is somebody learning to program.

That disclaimer filters out anybody that isn't at least on the transition to be a senior dev (with real seniority, not just in inflated title). It takes quite a lot of experience to agree or disagree with a rule, and respect a point of view without automatically applying it.

In fact, since the rules on the book have way deeper impact than they look like, being able to read that book and not getting damaged by it is a good test for seniority.

But, funny thing, if you are mature enough to fit the disclaimer, you've necessarily already seen everything the book talks about and don't need to read it.



Not bad advice when you remember that advice is only ever for people who know absolutely nothing and need encouragement to start. "Only solve for X" gives some constraint so that one can start to focus on what really matters and not be bombarded with so many choices as to derail the entire effort.

With practice, one soon comes to understand why you might also want to solve for Y. Those with experience are going to ignore what everyone else says anyway, so it doesn't matter if it is not true for them.

Indeed, we in this industry are bad at practicing, and that's a problem. Imagine being Taylor Swift and having your guitarist for the night's performance having picked up a guitar for the first time yesterday. That would never fly, yet that's often how we treat software development – and that's how we get to these kinds of places.



Rob Pike once said "A little copying is better than a little dependency" as part of the reasons why Go is the way it is, and the more programming I do, the more I understand what he meant. Repeating yourself is not a problem in itself. What matters is grasping the unit you're currently reading and possibly modifying. DRY often means adding another abstraction, but if the abstraction is not orthogonal enough, you now have to consider two units instead of one, and you've made it worse.


I don't agree with that being clean code in the first place. It's just adhering to the DRY maxim, which is one perspective on clean code, and imho the worst one. DRY causes you to mix use cases and glueing stuff together just because it's the same code.

DRY has caused so many problems in my professional life. I'm now in the "DDD, ports & adapters" faction, and I encourage people to duplicate logic as long as the implemented use cases are isolated. That produces code that's easy to read, maintain, and test, and that maps onto the business logic. How often couldn't you explain business logic to stakeholders/customers because the code you write is worlds apart from how the app operates?

That's what the author also means with "My code traded the ability to change requirements for reduced duplication, and it was not a good trade." I guess.

Really good self-reflection though.



+1 for this. Code should be laid out to express business logic; abstractions that hinder the readability of the business logic are not worth it, they are a distraction. Yeah, it takes a while for juniors to understand when/how to apply DRY.

Repetition is not a terrible evil; it has its place.



On another note, props to the writer for:

- actually writing a story relevant to the lesson learned, and not making up some fable that takes 1/3 of the article before even starting to make a point

- not having any sort of popups asking me to subscribe or login

- not having any sort of annoying javascript animation that gets in the way of reading the article

it's rare enough to be noted.



This experience has zero to do with clean code. The title is very much a misnomer and incredibly misleading.

This is an elaboration on the experience of abhorrent engineering culture at this company. From people who can't professionally speak with one another to a manager who cannot seem to effectively solve the core issues underlying the problems at the company. Instead of a moment of reflection on HOW to improve the root cause of the problem, it feels like they're kicking that can down the road.

Two things I'd like to point out:

1. It's nice to see some form of humbling commentary from the author; albeit, I don't think it targets the right solution to the problem.

2. I still think there's a lesson here for any org to digest, and that is that communication is such a vital and integral part of "a rising tide lifts all boats", so to speak.



I think the point of DRY is to avoid side effects. E.g., if you repeat the same code in 10 places and in one instance you decide to change the logic, you now have 9 places with the old logic and 1 place with different logic.

Removing duplicate code for readability and making code clean is a second-order effect in my opinion.

Times have changed since The Pragmatic Programmer [1] was written. A lot of the code we write today does not have the longevity, nor does it need the resilience, that was required in past decades. That is purely because things move very fast; engineers iterate very fast these days. A lot of the principles still hold true, but we have to pick and choose what's right for a given problem or situation.

[1] https://en.m.wikipedia.org/wiki/The_Pragmatic_Programmer



That pragmatic book is very keen on code generators. Distill the information to some variant on ASCII tables and then derive all the redundant forms from it as a compilation step.

I think that's the right approach. I'm totally up for generating C++ from JSON if that's the available data representation.

So far, essentially everyone I've worked with has hated code generation in all forms. In-language mechanisms may or may not be acceptable (C++ templates more likely to be OK than C macros). Outside-language mechanisms (Python writing source code during the build and similar) are very hard to get past review.

I'm curious whether that aligns with other people's experience.
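
As a hedged illustration of that workflow (file names and fields are made up): a small Node build step that distills a JSON table into the redundant C++ form, in the spirit of the book's advice.

    const fs = require("fs");

    // e.g. [{ "name": "NOT_FOUND", "value": 404 }, ...]
    const codes = JSON.parse(fs.readFileSync("error-codes.json", "utf8"));

    const body = codes.map(c => `  ${c.name} = ${c.value},`).join("\n");
    fs.writeFileSync("error_codes.generated.h",
      "// generated -- edit error-codes.json instead\n" +
      `enum class ErrorCode {\n${body}\n};\n`);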



We duplicate code to avoid bugs.

We have a lot of customer integrations, and often the source system is one we've integrated with before. So we usually copy the whole integration and make some tweaks. Invariably our customers want some data massaging and custom logic, and by duplicating we can't screw up another customer's integration when we make changes.

Of course we do have some code in shared libraries, like parsing dates and numbers, cleaning mixed utf-8/latin1 input and such.

Of course, that does mean we might have to change many integrations if something fundamental changes. But overall it has worked very well for us.





Thanks! Macroexpanded:

Goodbye, Clean Code (2020) - https://news.ycombinator.com/item?id=38366222 - Nov 2023 (1 comment)

Goodbye, Clean Code (2020) - https://news.ycombinator.com/item?id=29239861 - Nov 2021 (212 comments)

Goodbye, Clean Code - https://news.ycombinator.com/item?id=22022466 - Jan 2020 (575 comments)



I will never understand the general thought process behind these articles.

"I'm giving up on X/Y/Z because I've found a case where it didn't work well for me".

Isn't that how pretty much everything in the world works? There's no reason to be dogmatic about things like this.



This IS the actual lesson they want to convey. "Use but don't overdo it and attempt to use everywhere."

It's just a bad mix with the clickbait title, a story that has a different root cause and a somewhat obscurely written conclusion.



But surely this should already be common knowledge? I have never met anyone who stuck dogmatically to "rules" from things like Code Complete, Clean Code, etc., no matter how much they espouse them. Very simple examples:

- 1 assertion per test

- Exceptions instead of return codes

- CQRS

etc

All of these are good guidelines in some contexts. I cannot fathom why anyone would think they apply to every situation...



I unfortunately have.

Guy managed to bloat a simple mobile app to 180kloc. Every change we wanted to make after inheriting the codebase required changing abstractions in >10 places and all their corresponding unit and integration tests. I'm talking changes like adding an additional value to a dataset. Everything was abstracted as far as humanly possible.

He was a very proud clean code aficionado and would not let anyone from his old team dissuade him. He would also regularly scold his somewhat more junior co-worker when he did not follow his 7 layer abstractions for services, providers, surfaces, use cases and bindings.

But alas, I don't want to say you're wrong but there are people that would do well with this simple piece of advice.



>Clean code is not a goal. It’s an attempt to make some sense out of the immense complexity of systems we’re dealing with. It’s a defense mechanism when you’re not yet sure how a change would affect the codebase but you need guidance in a sea of unknowns.

Vomit. Guy picks bad abstraction, goes rogue and refactors a coworker's change, gets told off by the boss, and then writes a blog post with lofty garbage like the above as though he's an authority on clean code. "Brilliant" jerk vibes. Pass.



I'm sorry? I also disagree with the authors conclusion but the article is definitely giving me less "jerk vibes" than this reply.

The lesson learned in the article should be "talk to your co-worker about why they solved a problem a different way". It should be about communication, at which this reply does not shine either.



Yeah, I think I agree with the learning. A good middle ground would involve building up some helper functions, so you get something like:

    let Rectangle = {
      resizeTopLeft(position, size, preserveAspect, dx, dy) {
        doLeftHandleThing(...);
        doTopHandleThing(...);
        // something else too
      },
      // ...and so on for the other handles
    };

Then you don't lose the ability to quickly override things when there are special cases or requirements change, but you can keep the overall code repetition down a bit.


It went from one extreme to the other. It's neither the cleanest code nor its opposite; you have a gradient of options in how you code.

Here is another way to present the issue discussed by the author:

Every level of indirection you add will have a cost for the next developer who needs to extend or maintain it.

If, for whatever reason (making code "cleaner", for example), you didn't make it graceful/welcoming/inviting-to-change/self-documented enough, then you increased costs, which is the same as being aggressive towards business margins.

So what's a more interesting conclusion?

Get over childish reactions like saying "goodbye clean code" and embrace improved discernment about when and how to introduce it.



I don't care about clean code. I care about readable code, easy to change code, efficient code. Of course, I applaud the virtues of OOP, SOLID, design patterns if I sense an interviewer is an Uncle Bob disciple. But I try to steer clear of Uncle Bob disciples, as I don't like living in the kingdom of nouns. [0]

[0] https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...



DRY might be the bane of my existence as an engineer. The vast majority of self-proclaimed "senior" developers I meet are extremely dogmatic. They've had roles where it's been drilled into them that {{engineeringPractice}} is god-tier engineering and should be applied indiscriminately to everything you write.

Being a great engineer is about being able to be pragmatic, learn the rules, then learn where you can break the rules.

DRY is the entry-level dogmatic practice everyone adopts, and it leads to some god-awful abstractions. It's really hard to explain why having un-DRY code is OK; it's really unintuitive to the "initiated".



In all the things I've ever built I'm not sure I can remember a time where changing the same thing in a few places was all that time consuming.

I can remember lots of times where debugging an abstraction was very time consuming, though.



The changing part is not time-consuming; it's having to fix the same bug over and over because you didn't realize the same function was copy-pasted in 7 different files. It gets really bad when you realize this a year later: you're still dealing with a stupid bug that would have been fixed a year prior, if it weren't for the copy-pasted code.

I'm also against DRYing things up unnecessarily, but if the code is a 100% match in intention and/or behavior, then the duplicate is going to be deleted.



Just to challenge that: why would you not realise the function is copy-pasted in this scenario?

I mean, you'd see that the function is inlined in the place that you discover the bug, so you already know that changing that function's code isn't going to fix similar bugs anywhere else right?



How do you know where the function has been copy pasted? How do you know whether it has been copy pasted in the first place?

The bug may be latent, hard to find, or express itself very differently in different places. It is in fact, a different bug if it originates from code in a different place, even if the code is largely identical.



You may not remember.

I definitely remember having to debug an old colleague's code that did things like turn and scale 2D shapes. The code was written like the original version in the article, except even dirtier: 4 entirely different code paths for turning things depending on the angle, and several dozen thousand lines of code where almost all of the logic was repeated two or four times with minor (and difficult-to-find) differences. Additionally, different functions which needed to do the same intermediate calculations didn't share intermediate functions; the same code was just copied.

Listening to some people you'd think this code was super easy to understand and modify. It wasn't. Because it was missing abstractions.

Turns out, code has bugs. Not only was finding bugs excruciatingly painful, due to every function having 10 mutating variables on average; actually fixing bugs was an exercise in futility, because it was impossible to tell whether the same buggy logic wasn't copy-pasted somewhere else in the code. And when you found a copy, you had to understand all of the specifics of that other place to make sure you didn't break anything.

Abstractions make things easier to understand by allowing the reader to care only about small parts of the logic at a time. The best abstraction is when you read the name of something and decide not to look into it because the information you need is already clear from the name and usage. What repetitive, imperative golang-style "simple" code lacks is the ability for the reader to not read it. My understanding of code is at a high level of abstraction, preferably as high as is relevant to my task. I'd much rather read code that says sum filter predicate than decipher a for loop and then having to figure out that it does indeed do a sum over a filter. Going down this low in abstraction is only useful for intro level programming students.

Then again, golang is clearly aimed at being not a low level programming language, but a language for programmers at a low level of programming.
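
For the concrete comparison (variable names invented):

    // "sum over a filter", stated at the level the reader cares about:
    const total = orders.filter(o => o.paid)
                        .reduce((sum, o) => sum + o.amount, 0);

    // versus the loop the reader must decipher line by line:
    let total2 = 0;
    for (const o of orders) {
      if (o.paid) total2 += o.amount;
    }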



The code from the article is probably going to need to be abstracted at some point, unless it is 'done'. As they add much more to it, they will be in the exact situation you are talking about. It is sometimes a tough call whether to just bash it out and move on or to DRY it up. I personally would toss a comment in there saying exactly as much: this is going to be a future mess (but not yet).

The whole article, though, I see as a total communication breakdown between the two devs and the manager. That the other dev just pressed the 'management chain button' is not a good sign for a healthy working environment. Also, the new version did not seem much better than the previous one. The trust between the developers is broken, the manager did not help, and the dev who wrote the blog post will now be shy of doing things.



The problem with these kinds of articles is that they disprove an idea but leave you without any concrete guidance on what to do instead. Yes, abstractions can become obsolete because we didn't foresee the future (rightfully so), but what should you do instead? At any point in time you have a limited amount of information, and you should come up with the least stupid code that you can. Overdoing abstractions almost always leads to this terrible result, but you can't just call it "goodbye clean code"; this is more of a marketing post than a genuinely informative one.


The long answer is a bit complicated, but the short answer is you should duplicate code.

It is far less work to dedupe WET and abstract later once you truly understand the correct abstraction than it is to try to untangle the wrong abstraction later.

Or as Sandi Metz would say, "Duplication is far cheaper than the wrong abstraction."



Does every post need to propose a new One True Way?

Letting go of the Old Way is just as valuable without having a preferred replacement.

It's similar to the bell curve meme about any niche/skill. A beginner only focuses on the fundamentals, a "mid-level" focuses on all the best tools & methodologies (to the point of overcomplicating things), and the master only focuses on the fundamentals.



The thing to do instead is to keep the code simple and concrete, even if it is duplicated a bit.

I think maybe what you mean is it doesn't give you any guidance on when to do that, other than "sometimes".

That's just a matter of taste and experience. It depends on the specific code and circumstances.



> Each shape (such as a rectangle or an oval) had a different set of handles,

Yet, that's not what the code says. The objects have multiple methods with similar signatures representing handling.

The right thing here is to make the handles objects. The objects know what their parent is and send it messages: "I have moved". There is one method in the shape which is responsible for handle-initiated movement. The shape looks at which handle has moved (it's an argument), and at other state like a Shift constraint, and adjusts its representation accordingly.

The behavior of how to respond to which handles should still be the responsibility of the shape. Commonality there could be factored out into some common functions, such that if you need to special-case something, then in that shape you stop using the common functions entirely, or use them as a fallback for whatever behavior remains common.

Multiple dispatch is made for this; the movement behavior is a combination which shape and which handle. With a precise specialization of a multiply-dispatched method, you can code behavior for an exact shape and handle, overriding any less specific definition.

This is all a Solved Problem.
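
JavaScript has no built-in multiple dispatch, but a hedged sketch of the idea (all names hypothetical) is a table keyed on the (shape, handle) pair, falling back to less specific entries:

    const moveBehaviors = {
      "rect:corner": (shape, handle, dx, dy) => { /* exact specialization */ },
      "rect:*":      (shape, handle, dx, dy) => { /* any rect handle */ },
      "*:*":         (shape, handle, dx, dy) => { /* generic fallback */ },
    };

    function dispatchMove(shape, handle, dx, dy) {
      const fn = moveBehaviors[`${shape.kind}:${handle.kind}`] ||
                 moveBehaviors[`${shape.kind}:*`] ||
                 moveBehaviors["*:*"];
      return fn(shape, handle, dx, dy);
    }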



The point is: how should such code grow? If it is meant to grow on its own, separately, under different requirements, then it's not duplication. It's just code that looks the same, for now.

Everything comes with a price. The Clean Code book basically argues one side: what you can gain from such practices. It's not gospel; it's something that argues the benefits of certain practices. As a developer, you are probably better placed to judge the trade-offs in your own project. Do what makes sense.

You may default to writing "messy" code, or to writing "clean" code, before you find the middle ground that makes sense, and that's okay. It's a journey, not a destination. It takes experience to make good trade-offs in your daily coding.

Learn from your mistakes and move on.



> My code traded the ability to change requirements for reduced duplication, and it was not a good trade.

This is the one that really clinches it for me.

I think it's important to be aware that "clean code" standards came out of the early-mid 2000s contract development community. "Maintainability" means something different in that context, for two killer reasons. First, you're very likely to walk away from the project after a fairly short time period. Maybe a couple months, maybe a couple years. In 5 years, though? It's almost certainly someone else's problem. Second, you probably bill by the hour, or something like it. So decisions that make the code resistant to future change aren't really a bad thing, because, while they do make that hypothetical future maintenance work more expensive, you also get to charge more money to cover those costs.

I'm not trying to cast shade on contract development shops here. Business is business. But I do think that those of us who are developing software for ourselves might be operating under a somewhat different incentive structure.



Not only is the resulting abstraction harder to adapt, the abstraction itself is badly chosen.

The first and foremost thing you want to refactor is the repetitive set of arguments. As it stands, all those arguments come from interacting with handles, so you can reasonably expect them to be deeply coupled to handle changes (for example, handles may need to allow for a fixed 1:1 aspect ratio in the future). So they should go into their own object or even class, so that some duplicates can become its methods while still allowing for individual changes. And then you can decide whether you want to refactor more or stop there. Some refactoring changes are clearly beneficial on virtually all counts, others less so, and mixing them is not a good strategy.
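
A hedged sketch of that first step (names invented; the argument list follows the article's pseudocode): bundle the repeated arguments into one object so shared behavior has a single home, while each shape still consumes the drag however it likes.

    class HandleDrag {
      constructor(position, size, preserveAspect, dx, dy) {
        Object.assign(this, { position, size, preserveAspect, dx, dy });
      }
      // a future fixed 1:1 aspect-ratio rule would live here, once
      deltas() {
        return this.preserveAspect ? [this.dx, this.dx] : [this.dx, this.dy];
      }
    }

    // Shapes keep their own resize methods but take the one object:
    //   resizeTopLeft(drag) { const [dx, dy] = drag.deltas(); ... }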



Clean code explicitly addresses situations like this: the code "seems" identical for the time being, but it's not in the long term, because it applies to different objects, that will evolve independently. This is in the book, so I am not sure why they're saying "bye Clean Code", it should be "I should read Clean Code again".

