(comments)

原始链接: https://news.ycombinator.com/item?id=41607059

Make is a versatile build automation tool that simplifies creating and updating outputs from a variety of input files, including source code, documentation, and scripts. While it appears simple at first, its advanced features offer many benefits for managing complex projects. One fundamental capability is its handling of pseudo-targets such as `.PHONY`, which lets users declare rules for actions that don't produce a traditional output file such as an object or binary; users rely on this when defining targets like `install`, `clean`, or other project-specific tasks. Although Make has existed since the early days of Unix, some versions still lack key elements such as `.PHONY`; modern variants include it, offering greater flexibility when building project workflows.

Another challenge arises when trying to generate multiple output files at once. In these scenarios, users must build custom mechanisms to meet their needs, such as recursive rules or an extra layer of abstraction through make variables. While Make may be criticized for its seemingly overly simple first impression, its adaptability lets it serve a wide range of needs, from trivial scripts to large projects involving different programming languages and components. Makefiles are often used to outline workflows concisely, doubling as documentation alongside the necessary instructions.

One common complaint centers on whitespace sensitivity: over the years, many debugging sessions have been spent on this design flaw, which requires careful attention to layout and formatting. In addition, Make's dependence on a consistent build environment adds complexity, particularly given differences in command-line flags and behavior across operating systems. Development teams may recommend Make as a universal solution, although its suitability varies with project size and complexity, and a Makefile can stop working when a particular directory (such as `build` or `dev`) appears in the project root. Nevertheless, Make remains effective for moderately complex projects that don't need a "better" build system. On closer inspection, Make demonstrates considerable capability and practicality, making it an attractive option for streamlining project workflows without sacrificing flexibility or customizability.

Related articles

Original


Don't be discouraged by all the people in this thread saying you're using make wrong. One of the things that makes make a great tool is how deceptively simple it is. Yes not using .PHONY can potentially get you in trouble. But for a small project that's the sort of trap you'll fall into a year later, if at all, and even then you'll only be scratching your head for an hour. 99% of the time you don't have to care about doing things the proper way. Make lets you just hit the ground running and only imposes as much complexity as you need to keep the thing from falling apart.



> One of the things that makes make a great tool is how deceptively simple it is.

One of the worst things of Make is how deceptively simple it looks.

Make does exactly one thing: it takes input files, some dependencies and generates _exactly_one_ output file. To have rules which don't generate output (like `install` or `all` or `clean` or all targets in the article) we need to resort to a hack, a special magic target like `.PHONY` (which wasn't part of POSIX up to and including the 2017 version - IEEE Std 1003.1-2017 - https://pubs.opengroup.org/onlinepubs/9699919799/utilities/m..., only the current one - IEEE Std 1003.1-2024 - https://pubs.opengroup.org/onlinepubs/9799919799/utilities/m... includes `.PHONY`). If you want to generate more than one file (like an object file and a module or a precompiled header or ...) you are on your own to build some brittle hack to get that working. Don't forget that not every Make is GNU Make; BSD and other *nix systems like Solaris/Illumos still exist.

Don't get me wrong: Make has its uses for sufficiently complex projects which aren't yet complex enough to need some "better" build system. Problem is that such projects may get too complex when more code is added, and they inevitably gain some sort of scripts/programs to generate Makefiles or parts of Makefiles (so, an ad hoc meta build system is created).

And the problem isn't that they use it, but that they are proposing it as a solution to "everybody". And that their Makefile stops working as soon as there is a directory (or file) `build` (or `dev` or ...) in the project root.



I work on a project with 4.4 million lines of code and using a single Makefile with no generated code works fine. It's really not all that difficult.



I don’t object to “it works for me”, but “it’s really not all that difficult” is a bad generalization.

* If you need portability, Makefiles are hard.

* The whitespace design of Makefiles is bad and has swallowed up countless debugging hours over the years. This design flaw isn’t intrinsic to the way Makefiles work, it’s just a lousy artifact from a superficial decision from decades ago: to change behavior based on distinctions invisible in source code. It’s mitigated by syntax highlighting but still bites people.

* Makefiles are dependent on the consistency of the build environment, for example the availability and behavior of command line switches. Even if your project doesn’t need OS platform portability, this is still a pain across time and requires external tooling to manage.

* There are certain subtleties to the way Makefiles behave that are addressed by `.PHONY`. I agree that these are manageable in the absence of other complexities, but they contribute towards Makefiles being more difficult than appears at first.

I’m sure you’re familiar with those critiques and others. They may not bother you, but you don’t speak for everybody.



My Makefile is portable. It builds binaries that run on six OSes and two architectures. So I used my Makefile to build GNU Make and a GCC toolchain. Now I can run my Makefile on any of those OSes / architectures too, and it'll produce the same deterministic output, bit for bit.



> My Makefile is portable.

Oh yes, in the good old tradition of "... as long as it's some Linux on x86".

    [...] on Linux 2.6+ (or WSL) using GNU Make. 
Sorry, it's actually AMD64 _and_ ARM64!


If part of your build is building your own build tool in order to ensure you have the proper build tool then why not build a different “better” build tool?

Part of the premise of Make is its ubiquity, but if you can’t rely on that save as a simple bootstrap (as you seem to be doing) then why not forego it for something else?



My project is literally to make a compiler toolchain. Do you expect me to not use it? The nice thing is that you can build my project on any computer that has modern GNU Make. That and sh are the only things that need to be installed.



> Projects of much smaller sizes often have recursive convoluted makefiles.

You name any technology and anyone can enumerate dozens of projects that use it wrong.



I’d walk that walk before talking. Take any complex makefile system and turn it into a single “not really difficult” makefile without sacrificing anything important. That wins this argument and helps those who “use it wrong”.



Can
  % grep '^include' Makefile -c 
  159
includes with
  % wc --lines --total=only $(awk '/^include/{ print $2 }' Makefile) Makefile 
  22547
lines in total really count as having a single makefile? I have no dog in this fight, just wondering.


I don't understand this statement, "which hasn't been part of POSIX up to the 2017 version - IEEE Std 1003.1-2017."

I've definitely been using .PHONY on various Linux and MacOS computers long before 2017.

Maybe it's just me, but I've never much cared for whether or not something is specified if it happens to be present everywhere I go.



> I've definitely been using .PHONY on various Linux and MacOS computers long before 2017.

Me too, and I've also used Makes which didn't support it (e.g. on Irix). What I wanted to express was that you can't even rely on `.PHONY` existing, much less many other features.



> Make does exactly one thing: it takes input files, some dependencies and generates _exactly_one_ output file.

Not true. Your dependency graph might culminate on a single final target, but nothing prevents you from adding as many targets that generate as many output files as you feel like adding and set them as dependencies of your final target.

Think about it for a second. If Make was only able to output a single file, how in the world do you think it's used extensively to compile all source files of a project, generate multiple libraries, link all libraries, generate executables, and even output installers and push them to a remote repository?

> To have rules which don't generate output (like `install` or `all` or `clean` or all targets in the article) we need to resort to a hack, a special magic target like `.PHONY`

I don't understand what point you thought you were making. So a feature that boils down to syntactic sugar was added many years ago. So what? As you showed some gross misconceptions on what the tool does and how to use it, this point seems terribly odd.

> And the problem isn't that they use it, but that they are proposing it as a solution to "everybody".

I think you're making stuff up. No one wants Make to rule the world. I don't know where you got that from.

I think the whole point is that Make excels at a very specific use case: implementing workflows comprised of interdependent steps that can be resumed and incrementally updated. Being oblivious to Make leads many among us to reinvent the wheel poorly, using scripting languages to do much of the same thing but requiring far more work. If you can do this with a dozen lines of code in a Makefile, why on earth would you be churning out hundreds of lines of some random scripting language?
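To illustrate the "dozen lines" idea, here is a minimal sketch of such an incremental, resumable pipeline (the file names, URL and scripts are hypothetical); each step reruns only when its inputs change, and an interrupted run resumes where it left off:

  all: report.html

  data/raw.csv:
        mkdir -p data
        curl -o $@ https://example.com/raw.csv

  data/clean.csv: data/raw.csv clean.py
        python clean.py $< > $@

  report.html: data/clean.csv report.py
        python report.py $< > $@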



> Not true. Your dependency graph might culminate on a single final target, but nothing prevents you from adding as many targets that generate as many output files as you feel like adding and set them as dependencies of your final target.

Sorry, I did phrase that badly. A better version of that sentence would be

   A single target (a single node in the dependency graph) of Make does exactly one thing: it takes input files, some dependencies and generates _exactly_one_ output file.
> I think the whole point is that Make excels at a very specific usecase [..]

Exactly what I wanted to express with my post above. But the article isn't about such a case; it's about something for which a single shell script (or, better, just adding the commands to the `scripts` stanza of `package.json`, which is the more common, expected way to do it) is actually better suited and way less error prone.



> A single target (a single node in the dependency graph) of Make does exactly one thing: it takes input files, some dependencies and generates _exactly_one_ output file.

Yes, but this is not particularly relevant to the user. With pattern rules it's trivial to define a large number of targets automatically, such as in the example (from the manual):

    objects = foo.o bar.o

    all: $(objects)

    $(objects): %.o: %.c
            $(CC) -c $(CFLAGS) $< -o $@


I have been unclear in my formulation, sorry.

The problem is a target with more than one output file. Such a target would look something like this (which does work):

    foo bar: baz
         compile $< -o foo -o bar
but as this is the same as writing
    foo: baz
         compile $< -o foo -o bar

    bar: baz
         compile $< -o foo -o bar
to generate `foo` and `bar` the rule is run twice in a parallel build (`make -j 2`), which may just be unnecessary or may break the whole build.
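For reference, GNU Make 4.3 and later can express this directly with grouped targets (`&:`), which tell Make that a single invocation of the recipe produces all the listed files. A minimal sketch, GNU Make only:

  # one run of 'compile' is understood to produce both foo and bar
  foo bar &: baz
        compile $< -o foo -o bar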


So can that be solved like this, or is there some other subtlety I'm missing?
  foo: baz
    compile $< -o foo -o bar

  bar: foo
(Can someone tell me how to do code tags on HN please? :) Edit: fixed now, thanks!)


Subtlety. Now the dependencies are `baz` -> `foo` -> `bar`, that is, `foo` is a temporary target and won't be (re)built if `bar` already exists. Which may or may not be a problem. This temporary target (whatever the actual term is) can be "elevated" to a "normal" target with the use of some special target (which I'm too lazy to look up right now).



Among other things, now you have to maintain a set of dummy targets. If you have a variable (possibly generated) that is basically
  A_MESS_OF_FILES := foo bar zot
You now have to create dummy targets for bar and zot and not forget to add them. Or maybe break it into
  MAIN_FILE := foo
  SUBORDINATE_FILES := bar zot

  foo: baz
    sudo make me a foo

  $(SUBORDINATE_FILES): %: foo


> A single target (a single node in the dependency graph) of Make does exactly one thing: it takes input files, some dependencies and generates _exactly_one_ output file.

I'm still not really following the point about one output file? That might be Make's stated purpose, but a Makefile rule can certainly create extra files as a side effect (or do pretty much anything a shell user could do, from creating directories and downloading files to launching applications)

One of my projects has a single makefile rule which downloads and unzips a tarball, applies a patch to it, then builds the application within, resulting in half a dozen binaries which are then used in building the rest of the project.

Edit: Ah - I see what you mean now, in your subsequent comment.



> If you want to generate more than one file (like an object file and a module or a precompiled header or ...)

He's not using C, though :-)

> And the problem isn't that they use it, but that they are proposing it as a solution to "everybody".

He's proposing it for the same reason I'm starting to like it, after many years in the industry: as a simple build wrapper.

> And that their Makefile stops working as soon as there is a directory (or file) `build` (or `dev` or ...) in the project root.

And they can fix that problem in 5 minutes, big deal :-)

> Don't forget that not every Make is GNU Make, BSD and other nix like Solaris/Illumos still exist.

This is a very bad reason in this day and age. 99.999999% of *NIX usage these days, probably 99.9999999999999999% for the average person, since most people won't ever get to those environments where BSD and Solaris are still used, is Linux.

And even for BSD and Solaris, guess what... you add an extra step in the build instructions asking them to... install GNU Make.

Heck, even back in 2005 (I think?) for Solaris one of the first things you'd do was to install the GNU userland wherever allowed because the Solaris one was so forlorn I swear I heard wooden planks creak and dust pouring down every time I had to use their version of ps.

And regarding POSIX, meh. If you're a C developer (C++, Rust, I guess), knock yourself out. Most of the stuff devs use is so far removed from POSIX... Actually, not removed, but it has so many non-POSIX layers on top (I mean not standardized). Ruby bundler is not standardized like awk. Python pip is not standardized like make. Etc, etc. That's the reality we're in. POSIX is very useful, but only as a very low level base most people don't need to chain themselves directly to. I'd definitely not avoid a tool because it's not in the latest POSIX standard (or only in the latest POSIX standard).



> He's not using C, though :-)

As said elsewhere, the use-case in the article is too simple to warrant a Makefile. So: if you aren't compiling some static language, you do not need - and certainly don't want to use - Make.

> you add an extra step in the build instructions asking them to... install GNU Make.

The main reason to use Make is that it is installed everywhere, as stated multiple times in other posts. If you must install something, you can also install a better alternative for your specific use-case to Make.

> one of the first things you'd do was to install the GNU userland

Yes, and the Unix vendors even shipped them on companion CDs or similar.

> is not standardized like awk

Same problem with awk (and sed and ...): some weeks ago I had a problem with the SDK for some real-time Linux that works with mawk only, and not with GNU awk (most of the time it's the other way round, only working for some GNU program).



> As said elsewhere, the use-case in the article is too simple to warrant a Makefile. So: if you aren't compiling some static language, you do not need - and certainly don't want to use - Make.

I've found that I prefer make as a command runner and most of the time I'm just running Python poetry commands or building Docker containers or running AWS infra commands. It's very useful to have a simple tool to run commands and have them depend on each other.
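A minimal sketch of that command-runner style, with hypothetical project names (every target is phony, since none produces a file of the same name):

  .PHONY: install test image

  install:
        poetry install

  test: install
        poetry run pytest

  # 'myapp' is a placeholder image tag
  image: test
        docker build -t myapp .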

And regarding many of the alternatives to Make, they're either more complex or have other issues:

https://news.ycombinator.com/item?id=41608555



That depends on whether the person who must do the porting knows Make or not, and which GNU Make (it's always about GNU Make!) features had been used. And chances are JS devs don't know it at all, or just as little as the one who wrote the article.

Don't get me wrong: I don't like Make, but I hate CMake and Autotools (and many other C++ build systems) too (and C and C++ and Fortran compilers and their vendors).



> And they can fix that problem in 5 minutes, big deal :-)

Honestly, a big issue I see is that people can somehow argue with a straight face (and successfully too!) to invest weeks of work introducing a pet project to avoid a 1 hour inconvenience that happens once every blue moon. Proportionality takes a backseat very quickly to motivated reasoning.



It's a general observation on over-engineering, "resume driven design", and proportionality being somewhat of a blind spot in software. But yeah, I'm not going to lie, my brain certainly pattern-matched towards "this is going to be a Bazel guy isn't it?". So, Buck2 was close enough. Those are exactly the kind of multi-week pet projects I'm talking about that are too often introduced under vague and disproportional pretenses. Well, multi-month and dedicated specialists going forward are perhaps more accurate for those. But maybe that's the point.



But my argument has been that _Make_ is already too complex for the given task.

And talking about complex C and C++ (to be fair, the complex ones are almost always C++ ;) projects, I would not say that CMake (or Meson or ...) is less complex than Buck 2; it certainly has _way_ more magic than Buck 2. And getting Make & C++ & ccache (or whatever) & distcc (or whatever) to work _reliably_ isn't easy either ;)



> This is a very bad reason in this day and age. 99.999999% of *NIX usage these days, probably 99.9999999999999999% for the average person, since most people won't ever get to those environments where BSD and Solaris are still used, is Linux.

You have a lot of confidence. In reality, it's probably more like 30-60%, more now because of WSL. The rest is Mac OS, which uses a BSD userland and hence BSD make by default.



WSL basically runs GNU/Linux distributions so I fail to see the significance of that point.

And for MacOS you do the same thing, you get them to use their beloved homebrew to install GNU Make.



Why would they do this? I could understand using a non-GPL make because they hate it, but using an ancient GNU make is just handicapping your users for no gain.



I've been a happy make user for 20+ years across many, many projects and many languages. I've never had issues with the .PHONY task that seems to bother people so much.

It's simple, readable, editable, composable and already installed everywhere.

It does what it says on the tin and not much else.

FWIW, I also wrap up whatever fad (or nightmare) build system people use in other projects when I need to deal with them.



> It's simple, readable, editable, composable

I'll eat crow if wrong, but I'm guessing I know more about GNU make than you do. It is none of the four things you claim. Also, people who say "on the tin" need a good ass-kicking.



> Don't be discouraged by all the people in this thread saying you're using make wrong.

Fully agree, and I would add that it's far better to adopt the right tool for the job, even if you are not an expert, than avoiding the criticisms from perfectionists by adopting the wrong tool for the job.

Everyone needs to start from somewhere, and once the ball is rolling then incremental changes are easy to add.

Great job!



People who want to call me out would be a lot more productive pointing me to some guides instead of chastising me over an ancient framework whose best documentation has been lost to time, and whose best practices are locked behind proprietary codebases.

Little tips here and there are nice, but that doesn't teach me the mentality of how to architect a makefile.



> People who want to call me out would be a lot more productive pointing me to some guides instead of chastising me over an ancient framework who's best documentation has been lost to time.

Fully agree. Don't get discouraged, and keep it up!



Every makefile recipe should produce exactly one output: $@. The makefile as a whole produces an arbitrary number of outputs since rules can depend on other rules.

This leads us to a neat rule of thumb for phony targets: any recipe that does not touch $@ (and only $@) should have $@ marked as phony.

I find that keeping track of phony targets with a list makes things much easier.

  phonies :=

  phonies += something
  something:
          ./do-something

  phonies += something-else
  something-else: something
          ./do-something-else

  # touches $@ and thus does not need to be phony
  create-file:
          ./generate-some-output > $@

  .PHONY: $(phonies)


It is also possible to define `.PHONY` multiple times, so you can simplify this to:
  .PHONY: something
  something:
      ./do-something

  .PHONY: something-else
  something-else: something
      ./do-something-else

  create-file:
      ./generate-some-output > $@


Makefiles are terrible tech. The problem is that they're slightly less bad than most other build system we've come up with, which makes them "useful" in a masochistic way.

Build systems tend to commit one or more of the following sins:

* Too basic: Once you try to build anything beyond a toy, it quickly becomes chaos.

* Too complicated: The upfront required knowledge, bureaucracy, synchronization and boilerplate is ridiculous. The build system itself takes an order of magnitude more data and memory than the build target.

* No standard library (or a substandard one that does things poorly or not at all): You must define everything yourself, leading to 10000 different incompatible implementations of the same build patterns. So now no one can just dive in and know what they're doing.

* Too constricting: The interface wasn't built as a simple layer upon an expert layer. So now as soon as your needs evolve, you have to migrate away.

* Too much magic: The hallmark of a poorly designed system. It doesn't have to be turtles all the way down, but it should be relatively close with few exceptions.

* Cryptic or inconsistent syntax.



My 2c: Makefiles are excellent tech, just that a lot of people haven't learned to use it properly and use it as it was intended. I'm sure I'll get pushback, that's ok.

- Too basic: At least half of the software I use just uses plain makefiles and maybe a configure script. No autotools. I optionally run ./configure, and then make and make install, and it just works. I definitely wouldn't consider my setup to be a toy by any stretch of the imagination. It's built out of smaller programs that do one thing and one thing well.

- Too complicated: I don't know, I think make and how it works is really easy to understand, to me at least. I guess everyone's had different experiences. Not necessarily your case, but I think usually it's because people had bad experiences that they blamed make for, when they were trying to build some complex project that either had a bad build setup itself (not make's fault) or they lacked the requisite knowledge.

- No standard library: It's supposed to be tooling agnostic, which is what makes it universally applicable for a very wide range of tools, languages, and use cases. It's viewed as a feature, not a bug.

- Too constricting: I'm not sure what you mean here, it's designed to do one thing and one thing well. The simple layer is the dependency tracking.

- Too much magic: Cryptic or inconsistent syntax: See 'Too complicated'



The worst build systems are the ones centered on a particular programming language. Since there's N>>1 programming languages that's N>>1 build systems -- this does not scale, as the cognitive load is prohibitive.

The only general-purpose build system that spans all these languages is `make` or systems that target `make` (e.g., CMake). And this sucks because `make` sucks. And `make` sucks because:

  - it's really difficult to use right
    (think recursive vs. non-recursive make)
  - so many incompatible variations:
     - Unix/POSIX make
     - BSD make
     - GNU make
     - `nmake` (Windows)
  - it's rather ugly
But `make` used right is quite good. We're really lucky to have `make` for the lowest common denominator.


I've never seen anyone use Nix to actually build software; it's a glorified launcher for shell scripts in a sandbox, and typically is used to start the actual build system, such as make/cargo/go build/npm/etc, with known inputs.



One or more? OK, that of course leaves lots of room. I would estimate:

(too basic) Makefiles are not.

(too complicated) They can be, depends on what you make them to be.

(standard library) Well, there is one; there are some builtin functions you can use in the makefile.

(too constricting) Haven't noticed that, so I would say no.

(too much magic) Hmmm, I don't see it. It is very clear what is a target and a dependency and so on. Not so magical.

(syntax) Yeah, definitely could be better. Even a plain JSON file would be better here.



> Once you try to build anything beyond a toy, it quickly becomes chaos.

Of course the chaos is not caused by, "very hypothetically" let's say, a compiler or maybe a language without modules.

How would you estimate that? 20%, 40%, or 70%, true?



Not OP, but it's not just that C/C++ lacks modules. I think that is missing the real issue. Any complicated program probably needs a custom developed tool to build it. As a simple example, imagine a program that uses a database: you want to keep the sources as SQL and generate classes from them. That's a custom build step.

It's just that in some languages and build systems (Node, Maven), we have abstracted this away by calling them plugins, and they probably come from the same group that made the library you need.

No such plugin system exists for makefiles, as far as I am aware.
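As a sketch of what such a custom step looks like in plain make (the names are hypothetical, and `sqlgen` stands in for whatever generator a project would use):

  SQL_SOURCES := $(wildcard sql/*.sql)
  GENERATED   := $(SQL_SOURCES:sql/%.sql=gen/%.py)

  classes: $(GENERATED)

  # one generated class per SQL source
  gen/%.py: sql/%.sql
        mkdir -p gen
        sqlgen --input $< --output $@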



Good luck writing Makefiles for Fortran, OCaml or (whenever they will really, actually work) C++ modules.

There aren't many widely used build systems that can handle such dynamic dependencies without some special "magic" for these; the only one that I know of (with a significant number of users, so not Shake) is Buck 2 (Bazel and all C++ build systems use "special magic" that you can't write in user rules).



If you've got your rule working for arbitrarily named (i.e. not the name of the file) modules and submodules, and an arbitrary number of modules and submodules generated by a single source file which uses FPP (whatever program that actually is ;) or CPP as preprocessor, then yes. And with "working" I mean adding each module file as a single target which is able to trigger a rebuild of the module. You should be able to get that to work more easily with GNU Make 4.3 and later, as that now supports grouped targets - which I have learned elsewhere in this forum. Now the only problem is getting your module dependencies without first compiling all files to generate the modules, as `gfortran -M` (and any other compiler that generates dependency information) AFAIK still doesn't "know" which file produces which module without actually generating the module files.



> So what's the problem exactly?

They all have the same problem: you don't know the name (or even the number) of modules (module files) being generated without reading the source. And as a bonus, every compiler uses a slightly different naming scheme for the generated module file (this is of course no problem for OCaml ;).

As an example (using Fortran). File `test.f90`:

  module first

  contains

    subroutine hello ()
    end subroutine hello

  end module first

  module second

  contains

    subroutine world ()
    end subroutine world

  end module second

`gfortran -c test.f90` yields the following files (2 of them are modules):
  -rw-r--r--    1 roland  staff    221 Sep 21 19:07 first.mod
  -rw-r--r--    1 roland  staff    225 Sep 21 19:07 second.mod
  -rw-r--r--    1 roland  staff    185 Sep 21 19:07 test.f90
  -rw-r--r--    1 roland  staff    672 Sep 21 19:08 test.o


> Good luck writing Makefiles for Fortran, OCaml or (whenever they will really, actually work) C++ modules.

I've successfully written Makefiles for Fortran and they worked with ifort/ifx and gfort. In my experiments I've also made GNU Cobol, GNU Modula-2 and Vishap Oberon fit within the Makefile paradigm without much trouble. You have failed to provide reasons as to why those languages in particular (or more likely any language that's not of C heritage) can't be used with Makefiles. For instance, you can definitely couple OCaml with Makefiles, just use ocamlopt and treat .cmx files as object files, generated beforehand by ocamlopt -c (like you'd do with GCC). I am not familiar with C++ modules and as such I didn't experiment with them.



> I've successfully written Makefiles for Fortran and they worked with ifort/ifx and gfort.

Did the same (I'm not sure if gfortran existed at all at the time; I guess it was g95), plus they worked with Absoft, PGI and Pathscale too (yes, that has been some time ago). And it was a great PITA. Not least because at the time no Fortran compiler generated the dependency description, so you either had to parse the Fortran sources yourself or use makedepf90, which didn't work with all sources.

> You have failed to provide reasons as to why those languages in particular [...] can't be used with Makefiles.

I have obviously worded that badly. I didn't mean it is impossible, just that it is a great PITA.

> I am not familiar with C++ modules and as such I didn't experiment with them.

They have the same problem, you don't know the name of the module that is going to be produced.



There are projects that generate files, depend on multiple languages, etc. If you push the job of a build tool to the compiler infrastructure, then why even have a “build tool” in the first place? Make is simply anemic for anything remotely complex, and there are countless better tools that actually solve the problem.



Yeah my biggest problem with make is that the compiler has to generate the header file dependencies. This means starting a C or C++ project with make from scratch is a hard problem and there is no default solution or default file for this other than to just use CMake.
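There is a fairly standard from-scratch pattern, though (a sketch, assuming GCC or Clang): have the compiler write a .d makefile fragment per object file as a side effect of compilation, then include those fragments:

  SRCS := $(wildcard *.c)
  OBJS := $(SRCS:.c=.o)

  program: $(OBJS)
        $(CC) $(OBJS) -o $@

  # -MMD writes foo.d next to foo.o; -MP adds dummy targets for
  # headers so a deleted header doesn't break the build
  %.o: %.c
        $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

  -include $(OBJS:.o=.d)

Still, none of this is built in, which is arguably the point.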



Yep, terrible:

I will show how Make hits every one of your complaints:

(sarcasm on)

in file hello.c:

  #include <stdio.h>
  int main(int ac, char **av) { printf("hello\n"); return 0; }
How to compile and run this? We need a build system! Download and install GNU Make.

When that step is complete:

Type in

make hello

and it's done. Now run it via ./hello

See, Too much magic (didn't even have a makefile or Makefile), no standard library, Too constricting, cryptic, too basic. And, because you had to install Make, too complicated. Hits every one of your objections.

(sarcasm off)



I agree. Also a lot of the replacements are focused on one language rather than being a generic "do stuff" tool like Make.

The fact that Make can't even do subdirectories sanely is kind of ridiculous.

Does anyone know of anything better than Make? There's Ninja but it's not designed to be written by hand.



Have you taken a look at using Nix as a build system? One thing I don't like about most build systems is the lack of a dependency check; C is the most guilty troublemaker here. But anyways, with Nix you can lock in dependencies and handle arbitrary feature flags and platforms as well.

Though it's possible this goes beyond your "just do stuff"



Yeah I've been following Buck2. Definitely interesting.

Xmake looks interesting too (even though I hate Lua). I wonder why it isn't more popular - I don't think I've seen a single project use it.



I adore Make. I've written one (or more) for every single task or project I've touched in the last 20 years.

No smarts. It's just a collection of snippets with a few variables. "make run", "make test", "make lint", that kind of thing.

"make recent" = lint then run the most recently modified script.

You could do the same thing with Bash or other shells, but then you get stuck into Developer Land. Things are so much more complicated, without giving extra value. Make is just a DSL saying "files like this, are made into files like that, by running this command or two". That's it.

This is incredibly powerful!
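For illustration, a hedged sketch of what a "make recent" rule could look like (the linter and the assumption of Python scripts are mine, not the commenter's):

  .PHONY: lint recent

  lint:
        ruff check .

  # run the most recently modified script; $$(...) keeps the shell
  # substitution out of make's own variable expansion
  recent: lint
        python $$(ls -t *.py | head -1)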



> Make is just a DSL saying "files like this, are made into files like that, by running this command or two".

Nicely put.

Decades ago I wrote a testing framework in Java where you could specify your tests and their dependent classes using make-like syntax. So you could have a set of test classes which define the "baseline suite", then another layer of test classes which is dependent on the above and only run if the above is successful, and so on.

I really do not understand why folks today make everything so complicated. My advice has always been: stick to standard Unix tools and their way of doing things (tested and proven over time) unless you run into something which absolutely could not be done that way. Time is finite/limited and I prefer to spend it on System/Program Design/Modeling/Structure/Patterns etc., which are central to problem-solving; everything else is ancillary.



That's just the rose-tinted glasses most of the historical users are wearing. It takes time and nerve to admit that you have decades of experience with a footgun that isn't even trivial to use beyond tutorial/builtin use cases.



Likewise! I haven't been using them in the past, but at my current position almost every repository has a Makefile.

Running `make test` and knowing it will work, regardless of the stack, language, repo is a huge lifesaver.



> Make is just a DSL saying "files like this, are made into files like that, by running this command or two". That's it.

The problem with make isn’t make; it’s that what make is calling usually doesn’t do that anymore. On my last project we had a makefile that had 4 main commands: build, test, frontend, deploy. Build and test called through to maven, frontend called npm, and deploy called docker + aws.

All of those tools do their own internal state tracking, caching and incrementalness, and don’t report what they’ve done, so it’s not possible to write a rule that says “only deploy if build has been updated”, because maven/cargo/dotnet/npm/go don’t expose that information.
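The usual workaround is a stamp file that make itself can track, since the wrapped tool won't say whether anything changed. A sketch under those assumptions (paths and commands are hypothetical):

  # rebuild the stamp whenever any source changes
  .stamp-build: $(shell find src -name '*.java')
        mvn package
        touch $@

  # deploy re-runs only when the build stamp is newer than the deploy stamp
  .stamp-deploy: .stamp-build
        ./deploy.sh
        touch $@

  .PHONY: deploy
  deploy: .stamp-deploy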



The author is not even using the mtime-based dependency tracking. Also the targets are supposed to be PHONY but not marked as such. The author could have replaced it with a shell script that read $1 and matched on it to determine what to do.



Or just with a simple command which is guaranteed to be on most Linux systems already - make.

Maybe his Makefiles aren't complex, nor do they seem to follow all the best practices invented by code gurus in sandals, but it works and, what's important, it works for him.



There was a time when people would have said the same about make. The shell is the simple command that is guaranteed to be on all Unix systems from the get go. Make is the new kid on the block.

If you just want to run commands in the order written down, don't need the topological sorting feature of make, and value ubiquity, then a shell script is the answer.

If you are not stuck in the past and you truly live by the UNIX philosophy of doing one thing and doing it well, a command runner is the answer.

The command runner avoids the ton of footguns both shell scripts (no matter which flavor) and make files have. just also brings a couple of features out of the box that would be very tedious and error prone to replicate in make and shell scripts.



Right but writing dependency management (of targets, not package management) in shell seems like a nightmare compared to just leveraging make. Why complicate things? It's dead simple to debug, the interface is dead simple, what's the downside?



Right, but the original point which started the thread is that "The author is not even using the mtime-based dependency tracking", in which case a plain shell script is very much a viable alternative to make.

I don't particularly mind this use of make, but as an article on make it fails to exemplify what I think is its main purpose.



In my PS script solution, I just added a clean option+command.

I rewrote my makefile in PS and don't miss anything from make and have no regrets, as it is simpler now.



Who develops just? Will it be around in 5 years? Will it be ad supported? Will the developer sell my data? Etc.

I don't have any of those concerns with GNU Make.



Software with small scopes can be finished. It doesn’t sound too complicated to just push a new bug fix each year, by anyone. If anything, make is probably a significantly more complex codebase due to all the hacks it accumulated over the years, as a result of a dumb model it started with.



> There was a time when people would have said the same about make. The shell is the simple command that is guaranteed to be on all Unix systems from the get go.

That would've been a pretty short window of time since make first came out (according to wikipedia) in 1976.



Phew, I was so worried. So for 48 years out of Unix' 53 years of existence (90% of that time), make hasn't been the new kid on the block. Oh, let alone the fact that we're talking about stuff from 48 years ago, when their "screen" was a paper printout of the output.



> There was a time when people would have said the same about make. The shell is the simple command that is guaranteed to be on all Unix systems from the get go. Make is the new kid on the block.

I seem to recall it being praised very highly at the time as a great tool that saved many billable expensive CPU minutes and made a developer's job so much easier.



I believe you to be correct. I think it's important that one uses the right tool for the job, regardless of whether or not it's widely adopted or supported.



Or just use a simple command which ALL Unix system have: sh.

If you're using make as a glorified task runner, why don't you just create a scripts/ directory with shell scripts to do whatever you want? This is simpler, cleaner and works everywhere.

Make doesn't really add anything. I get the feeling that using make this way is an aesthetic preference that has somehow developed with time into a "this is the one true Unix way" cargo cult.



> but it works

That is what a lot of SW developers forget: your code might be the best in the world, but, if someone is not able to build it, it is useless.



I remember countless times of me and forum fellas debugging makefiles written under developers’ assumptions about systems. That is also something lots of developers forget or simply aren’t aware of.

Make isn’t a silver bullet for builds. It isn’t even a bullet. Most software gets built from scratch and make’s deps graph makes little to zero sense in this mode. Make is a quirky poor dev tool footgun, jack of all trades master of none.



> it works and, what's important, it works for him.

Until it doesn't. And then you really have to learn about PHONY targets, and why and when there must be a tab and not spaces - good luck with an editor that doesn't treat Makefiles specially and is configured to convert tabs to spaces.



But those are things that he’ll learn about as he keeps using make. And why does it matter that some editors don’t know about makefiles? The one he is using handles them just fine so what’s the problem?



> And why does it matter that some editors don’t know about makefiles?

Because it isn't fun checking if the whitespace at the beginning of the line is a tab or spaces. And as said, you must know when to use tabs and/or spaces in rules.

For doing such a simple thing as calling some commands, Make has way too many subtle footguns which _will_ bite somebody, someday. The problem (that's not a problem at all, that's a reason to celebrate!) is that most JS devs and users aren't used to Make, compared to e.g. C programmers. To rephrase: as someone writing C, you have to use something like a Makefile, as anything else (like scripts) gets unreadable and unusable quite fast. But if you can get away with a less complex solution, you should really use that instead of Make.



> Because it isn't fun checking if the whitespace at the beginning of the line is a tab or spaces. And as said, you must know when to use tabs and/or spaces in rules.

that's why https://editorconfig.org/ exists, so that neither you nor your teammates have to think about these things



> neither you nor your teammates have to think about these things

You're better off using a Makefile linter. But you must know about the problem before being able to solve it. And error messages like

   Makefile:2: *** missing separator.  Stop.
aren't the most helpful.
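One quick way to see what is actually at the start of a line (GNU coreutils; on BSD/macOS `cat -et` is the rough equivalent):

  cat -A Makefile    # tabs print as ^I, line ends as $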


I genuinely don’t understand why that matters. The fact that there exists bad editors that don’t support my workflow shouldn’t prevent me from using the tools that I like and am comfortable with. I use editors that don’t screw up makefiles so what’s the problem? If I take your argument to the absolutely absurd logical extreme, I shouldn’t use lower case letters because some character encodings don’t support them.



> And as said, you must know when to use tabs and/or spaces in rules

Is that Stockholm syndrome? Or an appeal to history/authority in action? What makes people believe that this is even remotely reasonable.

inb4 "kids these days": I started in the '90s and wrote my share of makefiles. Tolerating make only made sense until 2010-ish, then both hw and sw advances rendered it useless.

Edit: just realized my reply to a wrong person, but let it stay here



The strengths of make, in this context where it's been coaxed into serving as a task runner for small projects, are:

1) It's already installed practically everywhere

2) It reduces your cognitive load for all sorts of tasks down to just remembering one verb which you can reuse across multiple projects, even if the implementation ends up differing a bit

3) In conjunction with the similarly ubiquitous SSH and git, you have everything you need to apply the basic principles of DevOps automation and IaC

There's something special about waking up one day with an idea, and being able to create a fresh git repository where the first commit is the Makefile you've had in your back pocket for years that scripts everything from environment setup to deployment to test automation to code reviews.

There's zero effort beyond just copying your single file "cookbook" into that new repo.



> It's already installed practically everywhere

This always comes up, and is a sad chicken/egg problem.

We can all somehow agree that Make mostly sucks, but OS maintainers aren’t interested in providing a default alternative, due to choice overload or something.



I always find it a questionable choice, when someone, who wants to be a professional software engineer, uses Windows. If it is a choice at all. Of course they could also be working at some job, where there are silly ideas like everyone having to use Windows or so.

If it is a choice, it sort of shows an "I do not care" attitude to software development, or being seriously uninformed about proprietary software. Usually those are the types, for whom software engineering is merely a 9 to 5 job, and not a craft they take pride in. An activity they do not really care about at other times. Which is OK to do, not a crime. If I were hiring though, I would rather look for passionate software engineers/devs, who know a lot of stuff from tinkering and exploration. Ultimately using Windows means you are not truly in control of your productive system and are at the whim of MS. It is a risk no self-respecting software engineer should take.

To clarify, that is not to say, that there cannot be craftsmanship people using Windows. It is just way less likely. More likely they are "enterprise" software people. Even the choice to explore and use a GNU/Linux distribution betrays some kind of mentality of exploration. Wanting to know what is out there. Learning a new thing. Adapting it to ones needs. This kind of learning mindset in the long term is what sets engineers apart from others.

So I would claim, that not many good software engineers use Windows to be productive. If they have to, they will likely install some VM or some means of making things work as if they were on a GNU/Linux system. WSL or whatever, to circumvent the limitations and annoyances of a Windows system.



This is a silly take, have you heard about game developers?

If there's craftsmanship anywhere, it's in game development, and they surely don't want to spend all their time working on a platform without proper tooling that their end users overwhelmingly do not use.

The choice of OS has nothing to do with craftmanship or "exploration". I "explored" linux many times and am not using it currently.

In fact, I'm happy to argue that most developers that care so much about the choice of OS that they are uninterested in using another one (and do not work in OS development) are probably somehow stuck in their ways and uninterested in exploration themselves.

Taken even further, currently the only important OS is the browser, and nobody cares who launches it.

I hope you are not somehow in charge of hiring.



Game developers are such an exception ... And I left room for exceptions in my explanation.

But of course, if you want all the IP to leak via MS spyware phoning home, sure, let your devs work on Windows machines.



I think you're generalizing from your position inside some bubble, I am not sure which. Equivalently I could imagine game developers generalizing that linux people are terminal fetishists and have no interest in getting stuff done, who would rather customize (a.k.a fight) their OS for days on end than provide end user value, and through their idealistic, puristic and dogmatic approach to FOSS they feel safe but are meanwhile vulnerable to exploits of bad actors through supply chain attacks.

I think neither take is true, nor does it hold much value to claim it, unless your aim is to divide developers into arbitrary adverse tribes.



> You think you're superior because you use something niche.

Ah? Interesting! Tell me more about what I think!

> Some people just don't want to have to worry about laptops melting in their bags...

That is funny, because this is what happens with Windows installed, due to nonsensical system default settings. I've almost had that happen with Windows 7 once, due to an idiotic default setting regarding when the system is to wake up from hibernation, so that it turned back on while in my bag, inside an inner protection bag, nicely accumulating the heat.

Recently there was a whole long thread of people describing such problems with Windows machines. Here on HN.

Perhaps you are the one, who is not properly informed? Or are all those experiences with Windows machines somehow null and void, invalid?



Nothing is installed out of the box on Windows, but anyone with a functioning development environment for a large number of programming languages will have installed WSL, msys or git bash along the way, and will have make installed as part of it.



1. MSVC is not installed by default on Windows.

2. nmake is not even compatible with POSIX make, let alone GNU make. It doesn't even support the .PHONY target, which is what you need to replace Just with make.

3. Installing Just with WinGet is simpler, faster and takes probably only 1% of the space of installing Visual Studio for nmake.



`just` is great and I use it all the time.

* All commands in one place, view them all with `just --list`

* Stupid-simple format

* Small standalone binary

* Configurable (with arguments, environment variables etc) but not _too_ configurable

When I see a git repo with a Makefile, I'm filled with dread. When I see a repo with a Justfile, I get warm fuzzies.

Some people say it just doesn't do enough to justify existing. These people are just wrong.



To me, just is make without features I don't need. There is not a lot of benefit for me, but there is a lot of benefit for other people who need to learn the repo and have no knowledge of either make or just.

Another benefit is that justfile just cannot get too complex and tangled. Simplicity at its finest.



When step one of using a tool is to disable the tool's primary benefit (everything is phony) you're reaching for the wrong tool. Like it or lump it, make deploy is much neater than docker build -t foo . && docker tag foo $item && docker login && docker push && helm apply

I wish there was a flag for make which set it to be a command runner by default for the current makefile.



Please help me understand why this thing exists. Like, no snark, I like using the proper tool for a job -- when would I look at the project and think "this is something that is better done with 'just' tool". Instead of readme.txt and a folder with scripts



It’s gotten some syntactic sugar recently that’s made it pretty nice. Specifically, I’m thinking of its OS-specific attributes: you can tag a recipe like
    [macos]
    do-a-thing:
        # do a thing
And without any other checks, that recipe will only ever run on a Mac. Yes, of course you can replicate this in Make, but it isn’t nearly that easy.


I can count on one hand the number of times a simple script or make worked out of the box. Sure, part of the reason is dependencies, but then I might as well use a build tool that is actually doing what a build tool should. Makefiles/bash scripts are hacks, and I don’t get this strange Stockholm syndrome UNIX-people have for them.



why do you need a "command runner"? Have you heard of bash functions? Or... make? The thing is too simple to justify installing another tool, however nifty it is.



Here’s a one-line horror story for you (from a real project I’m working on):
  .PHONY: $(MAKECMDGOALS)
> The author could have replaced it with a shell script that read $1

Sure, but `./build.sh dev` is a bit less obvious than `make dev`.

Another reason to use Make even if you don’t have any non-phony steps is that you can add those later if needed. (I agree that the author should mark {dev,build,deploy} as phony though.)



Why is this a horror story? Under certain assumptions of how the author intends to use this, this sounds like a sensible way to define a dynamic list of phony targets to me, without having to specify them by hand.

There are many reasonable scenarios why you might want to do this: determining at the point of calling make which targets to force or deactivate for safety, projects with nested or external makefiles not directly under your control, reuse of MAKECMDGOALS throughout the makefile (including propagation to submakefiles), ...



Consider:
    .PHONY: $(MAKECMDGOALS)

    qux:

    foo: qux

    bar: foo
Now make bar and make foo bar will disagree on whether foo is phony, which may or may not be what one wants depending on both what foo and qux do, and how bar depends on foo and qux side effects.

It also very much depends on what the intent is, notably such a "autophony" make foo is very different from make -B foo.



First of all, misusing a tool doesn’t “give it a bad name”, and second of all who cares? A tool isn’t a human being. Make’s feelings aren’t going to be hurt by this article.

The author just shared something they think is cool. That takes guts to show the world, and our critiques should respect that.



> I’d argue it’s a waste to use separate files here

Fine. Write a `make.sh` that parses the arguments; that would be better.

> How so?

Well, read the comments here. Do you sense that Make is a beloved tool? Most of the complaints are about some details about syntax, and those complaints are completely valid. If you use Make for its intended purpose, then it's still easily well-worth using, despite that. But if you use it as a glorified script, then all you see is the warts, without any upsides. And you then tell all your friends that "Make sux!" Which is a huge shame because Make is awesome.



That's fine, the separate files are a benefit here. The only annoyance is clogging up the root project folder -- though some people don't seem to care about that. If they got too numerous (the 5 in OP's "more advanced" project would probably not be too numerous), I'd consider putting the scripts in their own folder. I might even call it 'make' just to mess with people. Then my commands are just make/build and make/deploy and so on (.sh unnecessary). But really, in OP's case, I just have no need for some of their simple wrappers. "npm run dev" is two characters longer than "make dev", pointless.



While you're technically correct, what I gathered from their experience is the consistency of usage, not only between their own projects but with third-party projects too.

They could make technical improvements to their own Makefiles, sure. But it's more about being able to enter a project and have a consistent experience in "getting started".



> But it's more about being able to enter a project and have a consistent experience in "getting started".

I'd say putting the Makefile content in `package.json` would be more consistent, especially as they are already using Gulp as the build system.



Are we now arguing whether not declaring phony targets is worse than using comments in `package.json`?

But anyway, comments in a Makefile or `package.json` are not documentation; that's what the `README` or `INSTALL` (or whatever) is there for (in projects like the one the Makefile is written for).



We all were beginners at one time or another. And if you want to learn a tool, it helps to actually use it, even if your greenhorn usage is less than perfect. You can make incremental improvements as you learn, like we all do.



That's the beauty of make and shell: it follows the UNIX principle of being simple and doing one thing and one thing well. People want it to do many other things, like be a scripting language, a dependency tracker, etc., so they're willing to pull in bloatware. New isn't necessarily better. Autoconf and automake aren't make.



Yes, the UNIX principle of being simple and doing one thing and one thing well.

Make does dependency tracking relatively well (for 1976). But if you just want to run some commands, your shell already does that just as well, without any of the caveats that apply to make.



Not satire, sorry I didn't clarify. They want make to have a builtin scripting language rather than using shell scripts, and a dependency tracking system that is more complex and less tooling agnostic, rather than leveraging the appropriate tool (like `npm ci`).



None of them are simple; they are chock full of hacks upon hacks, “fixing” their own idiocies, and by extension, none of them are doing their one thing well. Especially bash scripts, they should be left behind.



Can you be more specific what you view as hacks or idiocies? Besides the criticism of .PHONY targets, which I don't think is a hack nor particularly ugly. When I mean shell I'm referring to a family of shell languages that are used run commands, change directories, etc. Fish is a shell, for example. Babashka can be considered to be a shell. It doesn't even need to be those, someone using make could use python or javascript for the scripting part if it works better than a shell language.



There isn’t even a need for a shell script. The author is already invoking three separate tools, each of which has a mechanism for invoking custom commands.



Gulp? That JS tool that was last cool in 2018? After which it was replaced with Grunt, which stopped being cool in 2020? And that was replaced with Webpack, ESBuild, Rome, Bun...

Why would anyone voluntarily subject themselves to that kind of insanity? :-))

Better to just use the hacksaw that is Make than all these Rube Goldberg contraptions :-)



I don’t personally care about the JS ecosystem. But OP is already using Gulp. He’s then calling Gulp from npm run. He’s then calling npm run from make. Adding make into the mix is solving nothing here. If you’re saying he should use make properly I agree!



Sure but these are completely orthogonal to make. Might as well complain about gcc.

If anything, it's an argument for making better use of make's own features for configuration in the first place.



Technically all of these make targets look for files by the names of the targets. Each one should really be defined as .PHONY.

That said, I used to write makefiles like this all the time, but in recent years I have switched to just and justfiles, which make this the default behavior and are generally simpler to use. Things like parameters are simpler.

https://github.com/casey/just



I kinda like these make-ish systems, but they all have one problem: Make is already on any Linux and Mac, and is pretty easy to get on Windows as well. (It’s a real pity they don’t include it in the Git Bash!) Just using the lowest common denominator is a big argument for Make IMO.



On Windows if you don't use WSL, Cygwin gets you 95% of the way there. I've been using it for decades to develop CLI tools and backends in Python and a few other languages. You learn the quirks in about 1 month, add some tooling like apt-cyg, map C: to /c, and you're off to the races.



You have to handle dependencies either way to build a project - what’s one more tiny executable?

This criticism might make sense for some non-vim editor because you might have to ssh into a remote location where you can’t install stuff. But if you should be able to build a project and thus install its required dependencies, then you might as well add one additional word to the install command.



A big mistake Make has is mixing phony and file targets in the same namespace. They should be distinguishable by name, e.g. phony targets start with a : or something.

Too late of course.



yeah just is really cool but it's not really commonly installed so that's kind of annoying.

i feel like we're due for some kind of newfangled coreutils distribution that packages up all the most common and useful newfangled utilities (just, ripgrep, and friends) and gets them everywhere you'd want them.



But I want please, ag and friends! The "problem" with this kind of package is that everybody wants something else. And the chances that they become part of the default macOS or Windows install (or even part of the Xcode command line tools or Platform SDK (or whatever that is called now)) are quite small.



I like `asdf` a lot for this, but I actually don't use it for either of those examples (though it does have plugins for them). Ripgrep is in most package repos by now and all my dev machines have a Rust toolchain installed so I can build and install `just` from source with a quick command.



Sure, really though I don't understand why installing a single binary which is available from several easy to use package managers somehow becomes an insurmountable barrier for people when `just` is involved. "If it's not already on my system I can't use it" seems like an absurd limitation to place on your projects.



Please talk to security. My machine is locked down so tight I need a director (or higher) override to get anything not in the default distribution or "blessed" by security installed, and I can't even be the one to install it. May you never have to work at The Enterprise. It sucks!



That sucks for sure. I did work at a giant enterprise for a few years and it was plenty painful, but not that bad at least. Well, maybe it was that bad, because we didn't use make either; everything had to go through Jenkins and nobody bothered with anything for local development beyond an occasional `build.sh` somewhere in the project. Simply push your code when you think it's done and wait 30 minutes to get the next set of linter errors.



At that point wouldn't you "just" download the source and compile locally? Since you presumably could compile stuff. Add a 'bin' folder in your home directory to your PATH and enjoy.



oh i have no problem at all installing stuff in my own environments, i'm all about having cool new tooling -- it just starts to get a little rude to ask others to do so in order to use something you're distributing (and therefore absent coreutils-ii-electric-boogaloo installed everywhere, i'm much more likely to reach for make, unfortunately).



Maybe it's different kinds of projects then. For most of what I work with distribution would have nothing to do with the build system in the repo, only people who would ever have to deal with it are other contributors that likely have some environment setup to do regardless.



Meanwhile gradle people are like: just run these included gradlew or gradlew.bat files, they'll download the actual gradle from somewhere online, pollute some folders in your home dir, and then execute the build stuff.

I notice just has some pre-built binaries that could be used for the same thing. I find it a little beyond rude what gradle normalized, but hey, it "works", and it removes the source of friction that's present any time you violate the principle of least surprise with your choice of build tool.



The reason why Gradle needs this junk in the first place is that they aggressively change and deprecate APIs. Tried to build a 6 year old project today and of course nothing works. The Gradle wrapper proved pretty useful here. Make, on the other hand, has maintained almost perfect compatibility since its inception.



I don't understand why Gradle doesn't just provide the wrapper for download. They do provide the checksums [0], so it's not like the wrapper is customized for each repo or anything, but to download it you have to download the full distribution, extract the archive to extract the archive to extract the archive and run Gradle to run Gradle.

The properties file specifying the version and checksum is great, but we shouldn't need millions of identical copies of the binary itself checked into every repo.

[0] https://services.gradle.org/distributions/



It's funny that make evokes such fierce arguments, almost like the semi-religious vi-vs-emacs wars of old.

I agree fully with the OP; in particular I find it smart that he wraps everything in a top-level makefile, even if other, more sophisticated build tools are used. The advantage is standardization: not having to remember anything, and knowing that if you wrote it, you will just be able to type "make" and it will work.

Let's say a C person wants to compile a Rust project: they would not have to look up how cargo works, but could simply type "make" (or "gmake"; I don't use GNU specifics, but try to be POSIX compliant, even if it is certainly true that almost 100% of makes are gmakes).

Thanks for proposing the use of the timeless "make" as a sort of top-level build system driver; this will probably still work in 250 years.



It's funny such a simple title inspired a flamewar. The article itself is an insanely simple use case for make (that uses gulp in 2024?) that clearly no one read.



cargo is a bad example as it's universally `cargo build`.

Make on its own is great but most of the time I've worked with C projects it's been cmake/autotools + global pkg installs, which you Do have to frequently look up.



> cargo is a bad example as it's universally `cargo build`.

Except if you want to use some specific feature. Or specific log level. Or build a specific crate in a workspace. Or...



Parent states that it's always "cargo build", which in 90% of the cases is true.

Except for the projects that would require something like "cargo build --features=wayland", for example, in order to run.

So "cargo build" ends up not being universal, and adding make will make it just "make build" regardless of what flags people use with cargo, meaning it's more universal than "cargo build".



Yes, Make is awesome. I use it for so many things. It's a great way to automate tasks. For example my personal website is built using a Makefile that calls bash scripts to rebuild the updated web pages, and I deploy it using a git push to my server and a git hook there that calls Make. However there are files that I don't want to put into the Git repository because they are blobs that may change often like PDFs of my teaching materials. It's okay, I have an "uploads" target in my Makefile that will upload only the modified PDFs to my server and this target is a dependency of the "deploy" target which does the git push so I don't even have to think about it.

Also the updated PDFs for my courses materials are automatically put into my websites source tree by another Makefile that I use to manage and build my teaching materials and which let me either build the PDFs I use from my LaTeX sources or build from the same sources alternate versions of the materials for my students (without solutions to the lab sessions exercises for example) and automatically publish those to my local website version to be uploaded whenever I want to deploy the updated website.

It's kind of Makefiles all the way down. I like Makefiles! =)



It's an interesting phenomenon. ChatGPT and other LLMs have really opened up previously "archaic" tooling like Make and Bash. I've "written" more Bash in the last year than my entire career previously, because LLMs are such good copilots for that.



make as a task runner is not too bad, but there are better alternatives today like just (as others have commented).

make as a build system is ok until you hit the warts.

- make/Makefiles aren't standardized, which is why automake exists. So now you're not writing Makefiles, but templates and generating the actual makefile. This doesn't matter if you own the whole toolchain, but most people don't, so this is what some folks do to guarantee their Makefiles are portable.

- make cannot do any kind of dependency resolution; it assumes that whatever you need is right there. That leads to configure scripts, which, like makefiles, are not standard, so you use autoconf/autoreconf to generate the configure script that runs before you can even run a target with make.

- make (and adjacent tools like automake/autoconf/autorefconf) use mtime to determine if inputs are out of date. You can get into situations where building anything is impossible because inputs are out of date and running autoconf/autoreconf/automake/configure leaves them permanently out of date. (fwiw, many build systems can get away with using mtime if they can do proper dependency tracking)

All in all the fundamental design flaw with make is that it's built with the unix philosophy in mind: do one thing well, which is "rebuild targets if their inputs are out of date." However this is an extremely limited tool and modern build systems have to do a lot of work on top to make it useful as more than a basic task runner.



I'm referring to package management. Modern build systems all have some way of doing package management directly or interfacing with package managers instead of just shelling out to them, which you would have to do with make.



It normally works in conjunction with GCC’s “-MMD -MP” arguments, which produce .d files that then get included back into the Makefile with something like “-include $(OBJS:%.o=%.d)”.

It doesn’t directly interpret any source file though, if that’s what you mean.
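For reference, a hedged sketch of that setup, with a couple of assumed object files:

  # ask GCC to emit .d dependency files alongside the objects
  CFLAGS += -MMD -MP
  OBJS := main.o util.o

  program: $(OBJS)
      $(CC) $(CFLAGS) -o $@ $^

  # pull the generated dependency info back in (silently skipped if absent)
  -include $(OBJS:%.o=%.d)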



Do you really expect an answer from a self-described "ignorant"? Or is this a rhetorical question with an answer hidden inside it? If so, I don't get it. Wouldn't it be better to explain it in plain words?



> make cannot do any kind of dependency resolution, it assumes that whatever you need is right there. That leads to configure scripts, which like makefiles, are not standard

The ancient convention there is "make configure", which sets up whatever "make [build]" needs.



The only thing I really miss in make is the ability to decide staleness using something other than mtime. So I resort to using touchfiles, which are gross but still work better than a lot of other things (I'm looking at you, docker build caching).
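For what it's worth, a minimal sketch of the touchfile pattern (the npm bits are just an example):

  # node_modules' own mtime is unreliable, so track a stamp file instead
  .deps-stamp: package.json package-lock.json
      npm install
      touch .deps-stamp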



I like Makefiles as well, and although many people have commented on the limitations and the fact that the author's usage of make is fairly simplistic, I think it's great to get started with the basics.

Kudos to the author for writing this up and _not_ feeling the need to learn every last bit of make and do everything "properly" before sharing.

I've worked on a team where GitLab CI pipelines replaced Makefiles, and I was asked not to commit a makefile to the project because it's a customized developer workflow. They were allergic to local testing, but I thought it was a great way to store and share knowledge about how to build, test, clean, etc. Far easier to read than the GitLab CI files (which, yes, of course were also necessary and served a different porpoise).



For me, make has two fatal flaws: 1) the lack of a builtin scripting language 2) poor recursion support

The problem with the lack of a scripting language is that I either have to do horrible shell contortions to do simple things like using a temporary file in a recipe, or write a standalone script that doesn't live in the Makefile, which is needless indirection. This is exacerbated by Make interpreting each newline in the recipe as a separate shell invocation, which I consider a poor design choice. It also requires needless forking for many small tasks which could be done more efficiently in-process.
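GNU Make's .ONESHELL does blunt the one-shell-per-line part by feeding the whole recipe to a single shell; a minimal sketch, with hypothetical commands:

  .ONESHELL:
  notes:
      TMP=$$(mktemp)
      ./generate-notes > "$$TMP"
      ./publish-notes "$$TMP"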

The lack of proper recursion means that I either have to use recursive make, which is largely considered an anti-pattern, or I have to use a flat directory structure.

What Make does have going for it is ubiquity and good performance for small projects. It is the tool most projects should probably start with and only switch to something more advanced when its scalability issues become a genuine problem.



Who even writes an alternative implementation of GNU make? FWIW, no one "can't" use GNU make... even Apple uses GNU make (hell: they even ship GNU make, lol).



BSD Make exists. Also, are you saying that no one is developing software under a contract where they aren't allowed to install software on the target machine? I'm also unsure whether GNU Make works on embedded systems.



I do this a fair amount as well. It's really just a way of documenting the configuration and idiosyncratic commands in one place, which happens to be executable. I will happily create (uncommitted) Makefiles with hardcoded paths and keys and things, since otherwise that information would go in my ~/NOTES file and there's too much in there already. My default target tends to echo out things that I told myself I needed to remember when coming back to the project.

As soon as I notice I'm reaching for anything more than `.PHONY` targets and dead-simple filename dependencies, I stop and do the real build work in something else (callable via a make target, of course!) I know how to do complicated stuff with make, which means that I know I will do it wrong. Repeatedly. Or possibly eventually do it right, but then have to maintain the resulting fire-breathing hairball.

(But to those complaining about not marking all the non-file targets `.PHONY`: lighten up. If the correctness matters so much that you're going to be messed up by a file named `all` or `build` or whatever, you've probably already gone too far down the rabbit hole and should switch to something else.)



> If the correctness matters so much that you're going to be messed up by a file named `all` or `build`...

That's not the problem. _We_ know what "dev is already up to date" means, but chances are people who don't know about `.PHONY` don't.



Many responses suggesting a simple bash script instead. One reason I like make is that I can just tab-autocomplete targets. No extra setup needed.

If you’re using a single entry point script (e.g. do.sh) and handling $1 you don’t get that for free.

And the moment you need to make your entry point script aware of “B requires A” then you’re going to half-bake something similar to make, anyway.

Here’s [1] my ~80 line version for Python projects (micromamba + uv) which I’ve been pretty happy with.

[1] https://github.com/giovannipcarvalho/micromamba.mk



> tab-autocomplete

bin/ also perfectly tab-autocompletes, don't understand what you mean.

If you're having only one bin/do.sh script -- you're doing it wrong.

And if you know $1 can only be either "prod" or "dev", then create scripts build-prod.sh and build-dev.sh.

Simpler, installed on all systems, no quirky syntax with tabs and phonys.



Makefiles become a horrific technology the minute dynamic variables and the ensuing escaping games get involved. Massive footgun. Bash is already a fairly bonkers language (think from a junior engineer's perspective) and Makefiles so frequently compound that by an order of magnitude.



It doesn't help that it's easy to trip over something where the right answer is "use the more recent version with the feature which fixes that" but because everyone's so used to make being ancient tech, they don't think it would be worth upgrading.



I like Makefiles too :) I use them more or less as a command runner, not often to build new targets based on sources (sometimes still).

In particular, I like:

* The ubiquity: it's easily available almost everywhere I touch, and if not, it's usually a package install away.

* The auto completion: I often define variables with default values at the top; they are easily discoverable, and their values can be changed just by typing `make VAR=foobar ...`

* Chaining commands: make targets can be chained with `make target1 target2 target3 ...`. They will execute in the order specified. If I run a chain too often, I can usually create a new make target that chains them all.

Make is definitely not perfect and could be simpler. My biggest griefs are:

* The obscure list of built-in variables. I can only remember a few of them ("dollar duck" `$<` being my favorite), but they are not great to search for in the docs and my brain is limited.

* The "one line = one shell" behavior is a bit of a PITA for the use cases I have. I usually move more complicated scripts out of Makefiles.

I thought I would have been bitten by the tab indentation more often, but it's not really a problem for me anymore at this point (my editor _knows_ Makefiles, since they're so ubiquitous, and does the right thing).

I have Just on my list of things to check, one day...



"I usually move more complicated script put of Makefiles."

We probably all should. But it is valuable in a few different ways to have everything in a single file.

Even though it's an ugly mess having a script within another script, with make variables mixed with shell variables mixed with make logic mixed with shell logic mixed with child proc output... the task at hand wants bits from both worlds at the same time, so the mess is also the value.

You can rig up alternative answers for each of those, but really it just means make should probably have a more generic scripting language itself so you don't have to leave it to get something done, like how you don't have to leave awk to do non-record-processing stuff. People don't, but you can basically write anything you want in awk just like in shell or python.

Or it should have less, be intentionally so limited that you can't embed any significant logic in the makefile and so it all lives somewhere else, either way, so long as it's all together, either all in the makefile or all in something the makefile runs.



I used to hate makefiles, but really I just hated the way C/C++ make it a manual task to decide what to compile, as opposed to something like Python modules.

Now I love Make, for non-C work.



Make files: it’s a good build system in the same sense C is a good programming language. It’s also a terrible build system for the same reasons C is a terrible programming language.



I love makefiles as well and do something similar to OP: for every repo I contribute to, especially if it's not one of mine, I'll create an uncommitted makefile to track all the shell commands that add up to my workflow within that codebase.

The ability to create dependencies between targets is the cherry on top, but the main value is just the ability to create a cheat sheet of shell snippets accessible via `make <target>` from the root of the repo.

Such a makefile is always (1) version controlled as a secret GitHub gist (though, as a personal rule, I never hardcode secrets into it), (2) committed & pushed on `make`, (3) git-ignored via `.git/info/exclude`. This has worked quite well for me.

One downside with this approach is that the best syntax for passing parameters down to the target from the shell is to use environment variables, which is a little awkward. `NAME=value make target` is less pleasant than `make target --name=value` would have been.



Take out the dashes and that's a supported syntax for overriding variables:
  NAME := foo
  test:
      echo "name is $(NAME)"


  $ make test
  echo name is foo
  name is foo
  $ make test NAME=bar
  echo name is bar
  name is bar


I agree with everything, except that I moved to just, which runs fine on win/mac/nix and is a single-file no-dependency task runner made for this use case. It irons out a lot of warts like working-directory handling, loads dotenv files, lets you write multiline scripts; it’s just magic.



Just really is great, you should just give just a try. I put all the project-specific incantations in my Justfile and save my teammates lots of typing and copy-pasta.



Make has been my favourite unix tool for years and a really useful tool to have in your pocket. It’s simple, elegant, and powerful.



I would theoretically prefer that as well, but it doesn't give you shell completion of targets/tasks for free like Make does. So for "UX parity", you'd also have to write a completion script and get users to source it into their shells somehow, which isn't great.



I also like Makefiles, but some design choices are really dated.

My latest gripe with it was its extremely poor support for filenames with spaces and special chars (;) in them: I just wanted to convert a flac library to MP3 files, it seemed well suited for the job at first glance.

I don't know what tool could be a proper replacement. scons perhaps? ninja files are too verbose. just doesn't do dependencies, job control, etc.



Conceptually I sometimes feel that install and clean targets should have prerequisites. The problem for clean is that make's only condition for evaluating a rule is "a prerequisite has a newer modification time" and not "the prerequisite exists". As for install, because each rule can only have a single target, the installation of each individual file would have to be a separate rule. As install destinations are usually subject to side effects (excluding nix or guix), there is usually a need to force an installation even if the prerequisites haven't been updated. And installations often require additional postprocessing that can't be captured by individual rules. So to reconsider, perhaps it makes little sense for the install rule to have prerequisites. And forcing make to check the clean rule's prerequisites might actually hamper performance, as opposed to a direct `rm`.
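The common compromise, for what it's worth, is to mark install .PHONY so it always runs, while keeping the build product as a prerequisite so it is rebuilt first if stale; a minimal sketch (the paths are assumptions):

  .PHONY: install
  install: program
      install -m 0755 program /usr/local/bin/program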



The correct answer is: I don’t like makefiles when they are abused. They have no state, yet people try to use them as such and create pain for others.



I use Make fairly similarly. One fairly major footgun the author is not avoiding is that their targets will break if files or folders of the same name exist.

You should always .PHONY any non-file targets, unless you want your build to break suddenly and confusingly when you add a folder named, say, "build" (as in the author's case).

Make checks file modification dates by default to see if it can eliminate steps. Setting a target as .PHONY indicates it's a "fake target", i.e. a set of commands to be run rather than a file to be written. Make's default assumption is that it's being run to "make a file", as in "make foo.html" or such.

Here's a very simple example

https://github.com/donatj/force-color.org/blob/dev/Makefile



I instinctively know that makefiles would make a lot of things easier, but I've never found the right tutorial that would help me understand them. I'm not a C programmer, and so much seems weighted toward the idea that you're generating object files, linking, and producing a.out.

Any good tutorials or resources for learning that show a broader applicability for makefiles?



There is much misunderstanding about Makefiles, what they are, and what they are not. Make is not a "programming language build system" as some would imply, but rather a recipe builder. With Make you provide the file you want built, the shell commands to build it, and any files the build depends on.

Here's a simple example to get you started. Create a file named "Makefile" with the following text and an empty file alongside it named foo.txt.

    bar.txt: foo.txt
        cp foo.txt bar.txt
When you run the "make" command in your shell it will check if the "bar.txt" file exists. If it does not exist OR if "foo.txt" has a newer timestamp, then it will rebuild "bar.txt" by executing the tab indented shell commands underneath. In this case, the only shell command used is the 'cp' command. In the linked article the author invokes npm, bundler, and netlify.

When people use Make to compile their C code they are simply invoking the C compiler just like they would any other shell command. You might have seen something in a Makefile that looks like this:

    foobar.o: foobar.c
        gcc -c foobar.c
This is just saying: "The output file 'foobar.o' depends on the input file 'foobar.c', and to build 'foobar.o' run the shell command 'gcc -c foobar.c'", which is conceptually the same as my previous example where we built "bar.txt". Since explicitly listing every .o and .c file in a Makefile is tedious, many authors opt for wildcards.
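For instance, a hedged sketch of the wildcard/pattern-rule version of the same idea:

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)

    program: $(OBJS)
        gcc -o $@ $^

    %.o: %.c
        gcc -c $< -o $@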

I hope this helped!



> When you run the "make" command in your shell it will check if the "bar.txt" file exists.

I got to this point and ran into an error: Makefile:2: *** missing separator. Stop.

Ok I'm just giving you a hard time, and you mention right after the existence of "tab indented" so whatever. Still it's one of the things I detest on an aesthetic level about make, even if my editor has special syntax support for makefiles to handle this archaic requirement of actual tabs without me ever having to worry about it in practice.



I have a suspicion that a lot of people who rant about makefiles using tabs also praise python for the brilliance of using whitespace for scoping. Or who love yaml for its indented block structure.

Nah, who am I kidding ... no one loves yaml, it's just better than most of the alternatives.



I quite like python's whitespace requirements, but notably it doesn't care what your stance is on tabs vs spaces, or how many spaces, so long as you're consistent in a block. Never liked yaml, though, I don't think it's better than any of the alternatives.



> Still it's one of the things I detest on an aesthetic level about make

There's a little-known variable called .RECIPEPREFIX that lets you switch from tabs to anything else. Probably a bad idea to use it in anything shared with anyone else.
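Concretely (GNU Make 3.82 or newer), a minimal sketch:

  .RECIPEPREFIX := >
  hello:
  > echo "no tabs required"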



unironically, the info manual. It's great.

the philosophical "c" centredness is true, but doesn't get in the way of using other languages. (there's things like indirect rules for compiling .c files auromatically for instance, but even this can be turned off)



> and so much seems weighted to the idea that you're generating object files and linking and producing a.out.

Well that's kind of wrong (it's used for that but that's an extremely limited viewpoint). Here's a short introduction to get started:

  foo: bar
      baz
This means "whenever bar has been updated, create or recreate foo by running baz". You run it with "make foo", and make will run "baz" by default in "sh".

Here's an example in a totally different context:

  .PHONY: build
  build: node_modules

  node_modules: package.json yarn.lock
      yarn install
      touch node_modules
With this, when you run "make build", it'll only do "yarn install" if node_modules's last-modified timestamp is older than both package.json and yarn.lock. The touch is there to mark it updated for the next time you run "make build", so it knows it doesn't have to do anything. Normally you wouldn't have to do that but make assumes the commands given will update the file, and "yarn install" won't necessarily update the directory's last-modified time.

This example isn't terribly useful because "yarn install" is fast and doesn't do anything itself when it's up-to-date, but it should give ideas about how flexible make actually is.

One of the big criticisms of how people use "make", and why people recommend things like "just" instead, is they don't bother to use that functionality (or any of the piles of stuff built on top of it like pattern matching) and would have just done:

  build:
      yarn install
...which appears to be how OP uses it.


I also use make this way and have done for years. I even have the same kind of religious ritual the author has, like writing the Makefile is part of setting up the codebase and organising in my own head how the whole local dev environment is going to work.

The only thing is, this isn't what make is actually for. A number of commenters have recommended Just - the one I've been using on my personal projects is Task - https://taskfile.dev/ - which is pretty great. As other commenters have said, the problem is that make is installed everywhere already. I would love to see a task runner become standard to the same extent, and will have a look at Just if that's the one people are using.



Thing is, make is not readily available on Windows. It should’ve been in Git Bash, in my opinion, but just fills the gap in a cross-platform way.



If you like Makefiles to run random tasks, and insist on not using the proper syntax for that, use `just` instead. It uses the same syntax, but actually for running tasks and not producing files, so you don't need .PHONY statements.



Why use Make if they already use Gulp? Why not put that in `package.json`'s `scripts` stanza? And never ever use Make as a script runner without declaring the targets `.PHONY`; there will be a day when somebody has a directory (or file) named `build` or `dev` in their project root.



Because my Rust, Python, bash, Perl, etc. projects aren't impressed with the `package.json` file I wave at them. Especially before I've installed npm or any other JS runtime on my system (let alone Gulp).

As the article said, it's generally installed everywhere as soon as you install any dev-related stuff. So is bash, but it's a little clunkier for very basic usage. (`if [[ $1 = build ]]; then...`)



While these npm examples are easy to grasp, they do not reflect any strength of Make. With the same success one could use package.json's scripts property; the code would be pretty much identical.



Exactly. If your project already uses npm and Node, why not just use npm scripts for trivial tasks like this? Adding an extra dependency (Make) to build a project doesn't make sense.

Even if your project needs a more sophisticated build system that requires caching builds and managing conditional dependencies, turborepo and the like offer even better support for javascript codebases out of the box. Turborepo can be a workspace dependency so technically you don't even have to manually and separately install a build system to build your project.



Here are some things I love about Make:

* Already installed

* Does not require some random runtime

* Does not require some random runtime version

* Same build system for (nearly) all languages

* The dependency tree can be constructed incrementally from simple, composable, verifiable building blocks

* Incremental builds are trivial

* Task bodies are (basically) shell scripts

* Isolated complexity can be moved to external shell scripts and called from a task body

* Unlike fad-language-build-systems learning is amortized across decades

* It's not CMake



Here is a quality of life one for me (if you copy pasta, remember to switch spaces to tabs). When you type `make targets`, you get a list of available targets. Kinda like `just --list`.
  .PHONY: targets
  
  targets:
     @make -qp | awk -F: '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$$)/ {split($$1,A,/ /);for(i in A)print A[i]}' | sort


I went through a similar phase of excitement, but then realised it's a confusing layer of indirection on top of 'whatever-your-project-default-is' (e.g. npm + package.json). But I always need Make for BE+FE (small) projects, e.g. backend build with go and frontend is an SPA - this is where the power of Make to track dependencies really shines.



Back in the day we had a complex system to prepare content (encoding, metadata extraction, ...), then we tried to shift that to Make: free parallelization, partial remakes when something went bad, for 1/100 the code and almost nothing to install. Throw in a GUI to observe the filesystem and voilà. Good times.



I too like makefiles, but the sophistication that I’ve added to some of mine is a bit much, and I now wish I had explained to myself, via comments, what I did :-D

That said, I wish there would come a new tool that could make things easier, that would gain wider appeal and adoption.



Makefiles are ancient at this point, and work OK for what they were intended for.

The problem is when people try to expand them to everything, and they end up as some arcane file full of junk nobody understands. What is a phony target anyways? And even if you understand the concept, realize it makes no sense to a passerby.



> What is a phony target anyways? And even if you understand the concept, realize it makes no sense to passerby.

This is an entry-level bar for the profession; anything one level below that would be an insult to anyone who calls themselves an engineer.



What? You sound like someone who's complaining about all the "pointless" semi-colons in C++ code.

If you can't be bothered to learn the basics of the language, your criticism isn't going to be worth much.



> You sound like someone who's complaining about all the "pointless" semi-colons in C++ code

Not sure how the two relate at all. One is a statement terminator, the other a rather complex system.

> If you can't be bothered to learn the basics of the language, your criticism isn't going to be worth much.

That's the entire point! Remember, my complaint is not against Make as a build tool, but against Make as a do-everything (including deploy) system. There are countless options in the space. With most, you can generally open a config file or even a bash script and figure out what's going on. Not so with Make without "learning" it, as you call it. It's non-obvious. And I say that as someone who -does- understand it.



I don't hate the idea of AI generated podcasts (quite the opposite), but I can't shake the feeling that I can't trust what they're saying. High quality podcasts have fact checking and a reputation to uphold. Here, they just slap on a disclaimer that it might contain inaccuracies.

Will give it a listen tho!



It is actually only the good AI generated material that will even have that kind of disclaimer. The ones that want you to believe their lies will not have any disclaimer at all.

Having said this, after having listened to 100+ episodes of this podcast, I have yet to spot a single lie, although they may still exist with a very small probability. The reason why any fact checking step has not been added to this podcast is because it hasn't proven to be too necessary in the first place.

I have a different project "newssurvey" which uses external data and has citations for claims, although I still need to add an extra citation verification step for it. Perhaps in time I will.



I really was surprised that the main concern in the comments here was about the missing .PHONY.

And kudos for this article that gives a point of view, then fixes the main HN concern ;-)



As a guy who doesn’t write code, but reads a lot of it, Makefiles have always been a PITA to deal with. But less so than cmake or other build systems.



In my first job out of university, I inherited a data pipeline that had been written in Make using ~40 Makefiles. Needless to say, it was a hell to debug.



People in this thread have already listed the terrible terrible things about make. It succeeds because it's simply everywhere. And rather than attempt to improve, people romanticize the good enough solution.

I dream of a world where a "lessons learned make" is installed everywhere. How do we get that to happen?



First you have to appreciate that Make isn't as bad as you think it is. If you want to improve on it, then first you will really have to understand it.

That's the pitfall that so many would-be build system authors fall into: _Holy crap, why is it so complicated? Surely something simpler would do the job... [YEARS LATER]... Shit, my build system is so ugly because of all those damn real-world edge cases I've had to fold into it._



In short:

Person puts short shell scripts "build", "deploy" and "dev" in a single makefile as targets, instead of putting each script in a separate well named file. Then concludes that make is a great build tool as long as you don't use it as a build tool ("nothing complicated").

Please do not do this. make is not a shell. Also, consider reading (and writing) build docs, instead of running make with random targets as soon as you see a makefile



Personally, I've always created shell scripts too. Or PowerShell scripts if I need Windows based support (like for Golang projects that might see Windows based usage.) I find they're way, way better and easier to work with.

That being said, the only issue preventing me from using shell scripts well is shared code. Like common functions etc. So far I've simply used a common.sh script and sourced/imported it.

Also, what do you do once you have 4-5 shell scripts at the root of your project? I find it gets messy quickly. I need a solution for this.

But yes, I agree shell scripts are generally better.



> Also, what do you do once you have 4-5 shell scripts at the root of your project? I find it gets messy quickly. I need a solution for this.

Put them in a folder! I see "tools" used commonly for dev/build scripts



I'd take php over python any day.

Everyone has their favorite python modules they like to use in scripts; half of them are not packaged, or are the wrong version, for whatever distro I'm currently on. So I install stuff with pip, then something with requirements.txt breaks because of a wrong pytorch version, I start reading about envs, much crying.

At least with bash or make nobody expects you to provide external modules/libraries



I’ve never so much as looked at a PHP script in my life. I’d have to ask ChatGPT how to even run one.

A build script should work with vanilla, out-of-the-box Python. No extra modules needed. I mean, if you can assume vanilla bash tooling then vanilla Python is perfectly fine.

If there’s some Python library you really want then add it to repo. Easy peasy.



Make is great.

When even a simple one misbehaves, like it almost always does, I get to spend a solid chunk of time ignoring my actual goals to play with a bit of computing history.

And then I get to do it all over again a month later, when it breaks in yet another new way.



Despite its age, my first and only encounter with it was in building godot. I was impressed from the user side. (Until, due to godot's dev policies (which might have changed by now) and not the fault of scons, a git pull resulted in a broken build even with a scons --clean. (Issue was some generated files had their parents removed by some commit, and the generated files were marked to not be cleaned by scons for some windows reason I think. Solution was to just delete them or do a git clean.)) But when I looked at their scons files, and their proliferation into every subdirectory like .svn folders, eh... The simple cases are simple, sure, but that's the case with everything. I think I'd only use scons for a project approaching godot's complexity -- especially amount of build targets -- and language mix (i.e. mostly C++).



Am I the only one who uses make properly? I don't build C projects with it at any more (I used to use autotools for that), I use it like OP does, except I do write PHONY and I do use dependencies. I simply find the declarative way easier than writing a script. I try to avoid scripts whenever there's a better tool for the job.



Hmm, same here. Make is my go-to tool for writing top-level instructions on how to deal with a collection of files: (re)constructing data files (including dependency tracking, of course), compiling files (implicit rules are great!), and all sorts of things needing to go into the top-level directory of a project. It's easy to write, with almost no syntax overhead. Compare that to ant :-) And it can get arbitrarily complex if you need it to be. Projects like buildroot are a wonderful example of how much you can actually do with Makefiles while staying relatively readable... Sure, if you are working in big tech, your code base is likely so large now that you want ninja or some other modern build system. But for the rest of us, Make is really sufficient and actually great value for the money...



Yup, I was sat here thinking "anyone complaining about Make really should go away and try using ant, then return here in a few hours licking their wounds, repentant".



Makefiles are an eerily Lisp-like Turing tarpit. I hand-wrote the makefiles for my projects; they generate wonderfully organized build trees. Hell, I use makefiles to manage my dotfiles repository; I even blogged about it.

https://www.matheusmoreira.com/articles/managing-dotfiles-wi...

The sanest way I've found to write makefiles is to think of it as a tool that maps input paths to output paths.

When compiling a program, I want to map source/program.c to build/$(config)/objects/program.o so I write pattern rules that do just that.

  $(directories.build.objects)/%.o: $(directories.source)/%.c | directories
          $(strip $(CC) $(CFLAGS) -o $@ -c $<)
Then I write make functions to convert paths in some tree to paths in another tree, which makes it easy to create lists of files which then match the pattern rules.
  source_to_object = $(patsubst $(directories.source)/%.c,$(directories.build.objects)/%.o,$(1))
These functions are then augmented by lots and lots of project specific variables to organize things...
  directories.source := source

  directories.build.root := build
  directories.build := $(directories.build.root)/$(config)
  directories.build.objects := $(directories.build)/objects

  sources.program := $(shell find $(directories.source) -type f)
  objects.program := $(call source_to_object,$(sources.program))
Then I add a real target which is supposed to get the makefile going.
  targets.program := $(directories.build)/program

  $(targets.program): $(objects.program) | directories
          $(strip $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $^)
Then I add a phony target for it.
  targets.phony += program
  program: $(targets.program)

  .PHONY: $(targets.phony)
Then typing "make program" triggers the rule to make build/$(config)/program which depends on all the object files which will be linked into it which in turn is derived from a dynamically computed list of source files, and making those object files triggers the pattern rule which builds the entire thing.

Using make without phony targets is insane. Without phony targets, I'd need to type things like "make build/aarch64/program" in order to get a build started. So I use phony targets for everything. Much easier to type "make program".

It got to the point I created a phony-targets shell script which parses make's database output and processes it into a sort of help text for any given makefile's phony targets interface:

https://github.com/matheusmoreira/.files/blob/master/~/.loca...



I've been writing [GNU] Makefiles for years, and have a love-hate relationship with the [GNU Make] tool. I tend to push tooling to the limit; I think it's in part because I believe in "soundness of scope" -- a tool should have a well defined scope and within that scope "all bases should be covered". In practice that means that, with Make, I am able to define the dependency graph of prerequisites and targets (files that Make makes) such that it just about handles the graph resolution complexity for me -- _with variables_, that is.

I love Make because it largely delivers on its promise -- and I am using it almost in _opposition_ to what the author describes. That is, I consider phony targets to be an "illegitimate" feature of Make, and avoid them like the plague. While convenient, targets in Make are heavily geared to be files; certainly most of the machinery in Make was written to assume so, and even the well-known (and documented) targets like "install" and "clean" leave a terrible taste in my mouth as of late, despite being very conventional.

The problem with phony targets is that they're hard to reason with by Make (unless you actually turn "install" and "clean" into files) and break half of the rest of its assumptions on what targets should be and how they behave. The rest of the problem is the linguistical aspect of it -- if I `make install` am I making an install program/script or what? These kind of vagaries have led me firmly away from ever using phony targets.

As for the rest of it, Make is terribly archaic, but that also lends it strength since the archaic nature is very simple on the flip side.

The "hate" part is me taking a dislike to its bare-bones, in my opinion insufficient variables facility, where only truly global variables (certainly sharing one namespace) exist and juggling these for any non-trivial Makefile is always a problem of its own.

I am no novice with GNU Make, not any longer, but occasionally I need to look back into its manual to remember how e.g. the so-called double-colon rules work (when I suspect I might need one) and the real difference between the `=` and `?=` variable assignment, for when I want to provide default values and so on.
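For the record, a minimal sketch of how those assignment flavors differ:

  # = is recursively expanded: references are resolved each time it's used
  CFLAGS = $(WARNINGS) -O2
  # := is simply expanded: resolved once, at the point of assignment
  WARNINGS := -Wall -Wextra
  # ?= assigns only if the variable isn't already set (e.g. by the environment)
  CC ?= cc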

Lately I've been playing with the idea of _implementing_ a [GNU] Make compatible tool, just to see if the somewhat _patchy_ scope of [GNU] Make can be given more coverage -- for instance, to see if adding locally-scoped variables and more flexible target definition can improve Make. What I mean is to experiment with an implementation that retains the fundamental principle and mandate of Make -- dependency graph resolution and reliance on the [normally] UNIX shell -- but "upgrades" the syntax and extends the semantics. Because while Make is useful, it's also at times terribly hard to work with. To paraphrase Bjarne Stroustrup (the man behind C++), "inside Make there is a neat sound idea struggling to get out".



The author is confused about what Make is for, and frankly this kind of thing is why Make gets a bad rap. Make is for traversing a graph to determine what should be built, and how to parallelize the steps. Here he doesn't have a graph or any dependencies defined at all. He does have a weird scripting language though, which more or less nobody likes.



> Even on my MacBook, I don't remember installing it explicitly.

It's there, but last I checked, it only supports building with the Apple-supplied toolchains (Xcode). If you want to use anything else, do yourself a favor and install gmake(1) from MacPorts.

For C/C++ projects that are simple enough, I can reuse the same Makefile with only minor changes, and it works on both Linux and macOS, with support for sanitizers and valgrind(1). CLion eats it up like it were candy, although the gathering of sources is automated with find(1), and even respects my `--sysroot` setting.

CMake has good cross-platform capabilities, but GNU make gives me more control over the actual compiler and linker invocations, and its CLI is less horrible than that of CMake.



I know this is not your main point, and you touched briefly on it but, cmake is absolutely terrible.

It is supposed to make the building process more straightforward and painless, but what it brings to the table is:

* Weird bugs: one version of cmake reported the Python version I had as 0.16 (????), and the only fix was to update the cmake version.

* Messy structure: under the guise of giving freedom to the person designing their build, with no real enforcement of any rules in any way, people can write their own absolutely craptastic cmake scripts, which really, really is going to make you lose your mind (mind you, some people leverage it well, but that is the exception, not the rule).

* Opaque process: because the structure can be messy, you are never quite sure what it is doing or which compile flags are effectively fed to the compiler; it sometimes even totally ignores the library you explicitly ask it to use and tries to find another one...

OK, maybe this is a bit superlative, but unfortunately it reflects my experience with cmake well; it feels like an extra item between me and getting things working painlessly... But I might have been doing it wrong.



I understand this perspective if considering CMake in the past (i.e. pre 3.0 or so), but in my experience CMake today is much nicer (albeit definitely still not without its flaws). Specifically for the points you mention:

* “Weird bugs”: I suspect CMake wasn’t finding the version of Python you wanted because your find_package command was just finding a different version first. This is much easier to debug today by using --debug-find-pkg=Python, which will print all the places it searches for Python and what it finds. You can then modify your find_package invocation as appropriate to find the Python you really want to use.

* Messy structure: yes, unfortunately I’ve also seen my fair share of nightmare-inducing CMake files.

* Opaque process: here I actually quite disagree. In my experience I’ve found it super easy to modify things like compiler flags (just use target_compile_options, or add_compile_options for directory wide options). And what made a big difference was using Ninja as the generator (I also use Ninja on Windows), which makes it super easy to view the final compiler commands that will actually be invoked. CMake is essentially a compiler that emits Ninja on the backend, and several times it’s been invaluable to confirm in the generated Ninja code what is actually being invoked.

CMake is definitely not perfect, but it’s much better than what it used to be! It’s ultimately a perfect match for C++ (both extremely powerful, configurable, hamstrung with decades of backwards compatibility, terrible ergonomics, etc.).



The version of make which ships with Xcode is old (as Apple has stopped updating GPL software in general, and hopefully new builds of GNU make are GPL3 anyway), but it is otherwise not in any way somehow tied to Xcode's toolchain: it is a normal copy of GNU make 3.81 which runs commands the same as any other copy of make (as you provide them, and using the path).



While I dislike makefiles for the same reasons many echoed in this thread, with AI it doesn't matter.

It's such a ubiquitous format that LLMs are well trained on it.

So you don't have to read or edit it, just let the robot do it for you. It will do it well, and it's not something you do often anyway.

Then you can enjoy the other qualities of the tool, like the fact it's already there on unix, or that it's more than a task runner.

Although I prefer "just" or "doit", in 2024, make is fine again.
