|
|
|
| If you truly need that kind of perf (and at Amazon, we had plenty of dev desktops running on EBS without that kind of performance), then you should really opt for an instance type with local storage. |
|
| How is a monorepo the simpler solution compared to one repo per independently releasable component?
All the tooling is much easier to use when each application has its own repo. |
|
| This is my biggest complaint with GitHub CodeSpaces.
I should be able to launch a local VM using the GitHub Desktop App just as easily as I can an Azure-hosted instance. |
|
| Maybe I'm missing something here but couldn't you just track the whole VM setup (dependencies, dev tools, telemetry and everything) in your monorepo? That is, the VM config would get pulled from master just like everything else, and then the developer would use something like nixos-shell[0] to quickly fire up a VM based on that config that they pulled.
[0]: https://github.com/Mic92/nixos-shell (not to be confused with nix-shell) |
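Along the lines the comment above describes, a minimal `vm.nix` checked into the monorepo might look something like this (the options are standard NixOS module options, but the specific packages and sizes here are illustrative, not from the thread):

```nix
# vm.nix -- lives in the monorepo, pulled from master with everything else
{ pkgs, ... }: {
  # tools every developer gets inside the VM
  environment.systemPackages = with pkgs; [ git go postgresql ];

  # backing services the app expects
  services.postgresql.enable = true;

  # resources for the local QEMU VM
  virtualisation.memorySize = 4096;
}
```

Then `nixos-shell vm.nix` boots a VM from exactly the config at your current checkout.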
|
| If your org has 100 services, you don't have to have all 100 running at the same time on your local dev machine. I only run the five I need for the feature I'm working on. |
|
| I'm curious about the # of svc's / stack / company / team size --
if you have your own blog -- would love to read it when you publish
could be a cool lightning talk (or part of something longer) maybe it's a good piece for https://nixinthewild.com/ ? I'm @capileigh on twitter and hachyderm.io if you wanna reach out separately -- here is good tho too |
|
| > By running a Linux VM
Or just run Linux on your local machine as the OS. I don't get the obsession with Macs as dev workstations for companies whose products run on Linux. |
|
| Err — sorry I mean https://containers.dev/
Which has pretty nice support across IDEs, for example: https://code.visualstudio.com/docs/devcontainers/containers
I set this up for my last company, where we had all sorts of “works on my machine” issues and a needlessly painful onboarding experience. Local development became streamlined with this tooling, BUT pre-Apple Silicon Macs couldn’t handle running Docker like this. Glacially slow. We had a Python monorepo with a bunch of services within it.
I am curious whether Nix is an alternative / improvement for this. I was going down the Nix road at first, but an infrastructure team member steered me toward devcontainers instead, and I’ve been pretty happy since! |
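For anyone who hasn't seen the setup described above: it's just a `devcontainer.json` checked into the repo. A minimal sketch (the keys are from the Dev Container spec, but the image, command, and extension named here are illustrative, not from that company's actual config):

```json
{
  "name": "python-monorepo",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements-dev.txt",
  "customizations": {
    "vscode": { "extensions": ["ms-python.python"] }
  }
}
```

VS Code (and other supporting IDEs) will offer to reopen the project inside that container, so everyone builds and runs against the same environment.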
|
| > When I read about nix, reproducibility is a goal, but [...] isolation is a non-goal.
Generally, yes. But you can use or put together something like this to run Nix inside a devcontainer instead of locally: https://github.com/xtruder/nix-devcontainer
So you can use them in conjunction (or alternation, if for some projects you're okay running without a container) without having to specify your development environments twice.
> I feel better doing my "npm install"s in such an environment (of course it's still not a VM – but that's another topic).
There are basically two kinds of integration you can do for a project with Nix, which I'll call deep and shallow.
In shallow integration, you just have Nix provide the toolchain and then you build the project (manually, with a script, with a Makefile, whatever). This is pretty common and pretty easy, and gives you no protection from malicious NPM build scripts.
For deep integration, you can actually have Nix build your whole project. This has some downsides, like that it can't really handle incremental builds. It also imposes restrictions, like no network access by anything but Nix at build time, all packages built by special build users with no homedirs and no perms to access anything, etc. When you do that kind of build/install, you do get some protection from crypto miners lurking in the NPM registry or PyPI or whatever. |
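To make the shallow case above concrete: it's often just a `shell.nix` that provides the toolchain and nothing else (`mkShell` is the real nixpkgs helper; the package choice is illustrative):

```nix
# shell.nix -- "shallow" integration: Nix provides the toolchain only
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.nodejs_20 ];
  # the build itself still runs outside Nix's sandbox, e.g.:
  #   nix-shell --run "npm install && npm run build"
  # so npm install scripts run with your normal user permissions
}
```

Deep integration instead wraps the whole build in a Nix derivation, which is where the sandbox restrictions (no network, isolated build users) kick in.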
|
| I chatted to Nelson when I was designing brisk (https://github.com/brisktest/brisk) and his insight informed the development of it.
Among other things, Brisk allows you to run tests for your local code changes in the cloud (basically the pay minitest piece, but for any test runner). We also have a sync step much like the one described here, and we allow users to run one-off commands (linters, tsc, etc.). |
|
| > Brisk allows you to run tests for your local code changes in the cloud
How does this work for interactive debugging? I was going to ask the same about the system in TFA, but I might as well ask you :) |
|
| Can't you achieve all that just using a build system with reliable remote builds & caching e.g. Bazel, Buck, Please, etc?
That also avoids hacky sync scripts. |
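For example, with Bazel, pointing the whole team at a shared cache (and optionally shared executors) is a few lines of `.bazelrc` — these are real Bazel flags, but the endpoints are placeholders:

```
# .bazelrc -- endpoint URLs are placeholders
build --remote_cache=grpcs://cache.internal.example:443
# optionally run build/test actions remotely as well:
build --remote_executor=grpcs://rbe.internal.example:443
# avoid downloading intermediate outputs you don't need locally:
build --remote_download_minimal
```

With that in place, anyone's clean build mostly becomes cache hits from CI and colleagues' builds.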
|
| To be clear the sync step is used for the test suite execution not only the one off command running - it’s just something we can also easily do because we have a hot env in the cloud |
|
| Due to the work that this team put in though, right?
The choice to run the dev environment far away from the files puts you in the position of needing to engineer your way past the inconsistency. |
|
| The code syncs on every keystroke. Consistency isn't an issue unless you are having connection issues, and if you are, then pretty much all development is broken anyways. |
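A toy version of that kind of sync loop is easy to sketch. This polls mtimes, where a real syncer (watchman, mutagen, etc.) would subscribe to filesystem notifications instead — the function name and shape here are made up for illustration:

```python
import os


def changed_files(root, last_seen):
    """Return the files under `root` whose mtime differs from the
    `last_seen` snapshot, plus an updated snapshot.

    A real tool would get change events pushed to it rather than
    re-walking the tree like this.
    """
    current = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            current[path] = os.path.getmtime(path)
    # new files and modified files both show up as "changed"
    changed = [p for p, mtime in current.items() if last_seen.get(p) != mtime]
    return changed, current


# each changed path would then be shipped to the dev box,
# e.g. via rsync/scp, and the snapshot carried to the next poll
```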
|
| We’ve been using a hundred repositories and a hundred Go services in a local docker-compose setup that’s worked fairly well. CI runners can struggle if their disks can’t keep up with Docker.
It comes up that we should build a devprod layer for front-end folks that abstracts the backend away more. Overall, a lot of people prefer local dev because it gives them access to the entire stack, lets them run branch images more easily, and has better performance than remote boxes. https://moov.io/blog/education/moovs-approach-to-setup-and-t... |
|
| This is an awesome writeup of the tools and culture issues you run into maintaining dev environments.
From the post, the problems that justified central dev boxes are roughly:
1. dependency / config management / env drift on laptops
2. collaboration / debugging between engineers
3. compute scaling + optimization
4. supporting devs with updates and infra changes
The last one is particularly interesting to me, because supporting the dev env is a separate engineering role/task that starts small and grows into teams of engineers supporting the environment.
I'm helping build Flox. We're working on these pain points by making environments (deps, vars, services, and builds) workable across all kinds of Mac/Linux laptops and servers:
1. a. Virtualize the package manager per-project. b. Nix packages can install across OS/arch pretty well.
2. Imperative actions like `flox install`/`upgrade` always edit a declarative env manifest.toml -- share it via git.
3. Fewer Docker VMs -- get more out of dev-team MacBooks.
4. Reduce toil with versioned, shareable envs --> less sending ad-hoc config and brew commands to people (as mentioned in the post). Just `git pull && flox activate`.
I think on problem point #2, collab tools are advancing to where pairing on features, bugs, and env issues can be done without central SSH (ex: tmate, VS Code Live Share, screen sharing, etc.) -- however, that does sort of fall apart on laptops for async debugging of env issues (ex: when devprod is in the US and eng is in London). Having universal telemetry on ephemeral cloud dev boxes with a registry and all of the other DNS and SSH goodies could be the kind of infra to aspire to as your small teams run into more big-team problems.
In the Stripe anecdote, adopting the centralized infra created new challenges that their devprod teams were dedicated to supporting:
- international latency from central, US-based VMs
- syncing code to the dev boxes (https://facebook.github.io/watchman/)
- linting, formatting, generating configs (run it locally or server-side?)
- a dev workflow CLI tool dedicated to dev-box workflows and syncing with watchman's clock
- IaaS, registry, config, glue for all the servers
This is all very non-trivial work, but maybe there's a future where people can win some portability with Flox when they are small and grow into those new challenges when it's truly needed -- now their laptop environments just get a quick `flox activate` on some new, shiny servers or cloud IDEs.
I really like the author's notes on how using the Language Server Protocol across a high-latency link has great optimizations that work alongside the watchman sync for real-time code editing. |
|
| Yet another replay of timesharing development experiences. I guess we need a couple more generations to count how many times the pendulum swings back and forth during a developer's lifetime. |
|
| You have different problems with both; some are just hidden in one or the other, but there is no one best answer (unless your project is small/trivial, which is what a lot of them are). |
|
| In addition to what you mentioned: the ability to atomically commit to a library and all of its consumers, and, for a change to a library, to run the tests of all of its consumers as well. |
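In a monorepo, "run the tests of all consumers" is just a reverse-dependency walk over the package graph. A minimal sketch (the graph shape and function name here are made up for illustration — real build systems like Bazel compute this via `rdeps` queries):

```python
def affected(graph, changed):
    """graph maps package -> set of packages it directly depends on.
    Return every package whose tests should run when `changed`
    packages are touched: the changed set plus all transitive consumers.
    """
    # invert the edges: dependency -> its direct consumers
    consumers = {}
    for pkg, deps in graph.items():
        for dep in deps:
            consumers.setdefault(dep, set()).add(pkg)

    # walk outward from the changed packages
    result, stack = set(changed), list(changed)
    while stack:
        for consumer in consumers.get(stack.pop(), ()):
            if consumer not in result:
                result.add(consumer)
                stack.append(consumer)
    return result


# affected({"app": {"lib"}, "cli": {"lib"}, "lib": set()}, {"lib"})
# -> {"lib", "app", "cli"}
```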
|
| imo monorepos are great, but the tooling is not there, especially the open-sourced ones. Most companies using monorepos have their own tailored tools for it. |
I've been working on an ephemeral/preview environment operator for Kubernetes (https://github.com/pier-oliviert/sequencer), and I agree with a lot of what the OP said.
I think dev boxes are really the way to go, especially with all the components that make up an application nowadays. But the latency/synchronization issue is a hard topic, and it's full of tradeoffs.
A developer's laptop always ends up being a bespoke environment (yes, Nix/Docker can help with that), and so, there's always a confidence boost when you get your changes up on a standalone environment. It gives you the proof that "hey things are working like I expected them to".