I think deploying and then going to sleep is the red flag here. They should have deployed the change at 9am or something and had the workday to monitor issues.
The blog post is 404ing; here's a Web Archive link:
https://web.archive.org/web/20240610032818/https://asim.bear...

The author has added an important edit:

> I want to preface this by saying yes the practices here are very bad and embarrassing (and we've since added robust unit/integration tests and alerting/logging), could/should have been avoided, were human errors beyond anything, and very obvious in hindsight.
>
> This was from a different time under large time constraints at the very earliest stages (first few weeks) of a company. I'm mostly just sharing this as a funny story with unique circumstances surrounding bug reproducibility in prod (due again to our own stupidity)

Please read with that in mind.
Could they have deleted it because of all the negativity?

They did make a silly mistake, but we are humans, and humans, be it individually or collectively, do make silly mistakes.
I don't find a careless human writing original code without proper review vs. a careless human copy/pasting code without proper review to be significantly different.
It's not some innocent mistake. The title is purposely clickbait / keyword-y, implying it was ChatGPT that made the "mistake", for SEO and to generate panicked clicks.

"We made a programming error in our use of an LLM, didn't do any QA, and it cost us $10k" doesn't generate the C-suite "oh shit, what if ChatGPT fucks up, what's our exposure!?" reaction. There are a million middle and upper managers posting this article on LinkedIn, guaranteed. It's like the Mr. Beast open-mouth-surprised thumbnail nonsense; you feel incredibly compelled to click it.

While we're on the subject: LLMs can't make "mistakes." They are not deterministic. They cannot reason, think, or do logic. They are very fancy word-salad generators that use a lot of statistical probabilities. By definition they're not capable of "mistakes," because nothing they generate is remotely guaranteed to be correct or accurate.

Edit: The mods boosted the post; it got downvoted into oblivion, for obvious reasons, and then skyrocketed instantly in rank, which means they boosted it: https://hnrankings.info/40627558/

Hilarious that a post which is insanely clickbait (which the rules say should result in a title rewrite) got boosted by the mods. I'm sure it's a complete coincidence that the story was apparently authored by someone at a Y Combinator company: https://news.ycombinator.com/item?id=40629998
They don't have logs and commit directly to production 10-20 times a day.

I don't think 128-bit vs. 36-byte performance is their main concern right now.
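For context, the size comparison presumably refers to storing UUIDs as raw 128-bit values versus their 36-character text form. A quick sketch in Python (the stack under discussion):

```python
import uuid

# A version-4 UUID carries 128 bits of data.
u = uuid.uuid4()

raw = u.bytes   # raw binary form: 16 bytes = 128 bits
text = str(u)   # canonical hyphenated string: 36 characters
```

Either representation works; the storage difference only starts to matter at scale, which is the commenter's point.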
Everything would be understandable if this were someone's small first personal project.

Here we are talking about a YC company backed by 1.65 MILLION CAD.
> a language that they lack experience in

Perhaps also the tooling, because any remotely decent IDE should show an error there, let alone the potential warnings from some code-analysis software.
> You don't need to make DTOs when you don't have to, using AutoMapper is considered a bad practice and is heavily discouraged (if you do have to use a tool like that, there are alternatives like Mapperly which are zero-cost to use and will give you build-time information on what doesn't map without having to run the application).

The thing is that you'll probably have entities mapped against the database schema with data that must only conditionally be shown to users. For example, when an admin user requests OrderDetails you'll most likely want to show all of the fields, but when an external user makes that request, you'll only want to show some of the fields (and not leak that the other fields even exist). DTOs have always felt like the right way to do that, though it also means that for every distinct type of user you might have more than one object per DB table.

Furthermore, if you generate the EF entity mappings from the schema (say, if you handle migrations with a separate tool that has SQL scripts in it), then you won't make separate entities for the same table either. Ergo, it must be handled downstream somewhere.

Plus, sometimes you can't return the EF entities for serialization into JSON anyway, since you might need to introduce some additional parsing logic to get them into a shape that the front end wants (e.g. a status display field whose current value is calculated from 5-10 database fields or other state). Unless it's a DB view that you select things from as-is; though if you don't select data based on that criteria, you can get away with doing it in the back end.

Not to say that some of those can't be worked around, but I can't easily handwave those use cases away either. In Java, MapStruct works and does so pretty well: https://mapstruct.org/ I'd rather do something like that than ask ChatGPT to transpose stuff from DDL or whatever, or waste time doing it manually. I'll probably look into Mapperly next, thanks! The actual .NET runtime is good, and tools like Rider make it quite pleasant.
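The role-specific-DTO idea described above can be sketched language-agnostically; here is a minimal illustration in Python rather than C#, with entirely hypothetical entity and field names:

```python
from dataclasses import dataclass

# Hypothetical entity mirroring a DB table (names are illustrative only).
@dataclass
class Order:
    id: int
    total: float
    internal_margin: float  # must never be leaked to external users

# Role-specific DTOs: the external shape omits the sensitive field entirely,
# so it cannot leak even by accident during serialization.
@dataclass
class OrderDetailsAdmin:
    id: int
    total: float
    internal_margin: float

@dataclass
class OrderDetailsExternal:
    id: int
    total: float

def to_dto(order: Order, is_admin: bool):
    # One entity, multiple outbound shapes, selected by the caller's role.
    if is_admin:
        return OrderDetailsAdmin(order.id, order.total, order.internal_margin)
    return OrderDetailsExternal(order.id, order.total)
```

The cost the commenter mentions is visible even here: one table, three classes. The benefit is that "what an external user can see" is encoded in a type, not in an if-statement scattered through serialization code.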
C# differs quite a bit from Java, particularly in the surrounding libraries, and there is a learning curve to things... the expectation that it's another Java ends up misleading many people :)

I'm sure MapStruct would also require you to handle differences in data presentation, in a similar way to what you would have to do with AutoMapper (or Mapperly). .NET generally puts more emphasis on "boilerplate-free happy path + predictable behavior", so you don't have autowire, but you also don't have to troubleshoot autowire issues, and M.E.DI is straightforward to use, as an example.

In terms of JSON (with System.Text.Json), you can annotate the schema (if it's code-first) with attributes and nullability, so that the request for OrderDetails returns only what is available for a given access-rights scope. In either case, different scopes of access to the same data, and the presentation of it, is a complex topic.

The single-layer case might be a bit extreme - I used it in a microservice-based architecture as a PTSD coping strategy after being badly burned by a poor team environment that insisted on misusing DDD and heavy layering for logic that fits into a single Program.cs, doing a huge disservice to the platform.

Another popular mapping library is Mapster: https://github.com/MapsterMapper/Mapster - it is more focused on convenience compared to Mapperly, at some performance tradeoff, but is still decently fast (unlike AutoMapper, which is just terrible). For fast DTO declaration, you can also use positional records, e.g. `record User(string Name, DateOnly DoB);` - but you may already be aware of those; noting this mostly for completeness.

Overall, it's a tradeoff between following suboptimal practices of the past and taking a much harder stance on enforcing simplicity, which may clash with the culture in a specific team.
The emojis seem fine - instead of `bug: fix blah`, it's `:bug:: fix blah`, which honestly seems clearer and easier to parse at a glance.

Edit: Hacker News doesn't support Unicode emojis.
Because they had half a million dollars to start with, 1.2 million dollars on top of that, and then free AWS credits to burn.
A ten-dollar Hetzner box with Dokku would do fine at this stage.

But then the whole startup culture is, generally speaking, a culture of waste, pride, and vanity.
If the default argument is a mutable object, it's reused between invocations. That's why setting default parameters to an empty list / empty dict is flagged by static-analysis suites.
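The Python pitfall being described can be demonstrated in a few lines; the usual fix is a `None` sentinel:

```python
def add_item(item, items=[]):
    # BUG: the default list is created once, at function definition time,
    # and the same object is reused on every call that omits `items`.
    items.append(item)
    return items

def add_item_fixed(item, items=None):
    # Fix: use None as a sentinel and build a fresh list on each call.
    if items is None:
        items = []
    items.append(item)
    return items
```

Calling `add_item("a")` and then `add_item("b")` returns `["a", "b"]` the second time, because state leaks across invocations; the fixed version returns `["b"]`. Linters flag the buggy pattern, e.g. pylint's `dangerous-default-value` (W0102).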
It read like no one really knew what they were doing. "We just let it generate the code and everything seemed to work" is certainly not a good way to market your company.
The big-tech tooling probably cost tens of millions of dollars to create, and probably had a couple of $10k mistakes on the way to getting it written and running.
It is possible to use autocompletion correctly. It is possible to use libraries correctly.

It is not possible to use AI correctly. It is only possible to correct its inevitable mistakes.
You likely ended up doing more work than needed, compared to some other options.

The lock-in here is the added developer time and complexity vs. just paying a premium.
Environment-destruction gameplay, but there's always another ad under the ads, except when it's a lootbox.

I can imagine worse, too! They haven't even really started turning that knob yet.
Is there a trend of moving from Next to FastAPI? I would be surprised.

Perhaps they are doing some AI thing and want to have Python everywhere.
Honestly, the same can be said about a lot of frameworks. You will pry my vanilla JS, debugged with print statements and hand-coded in vi, from these hands only when they're cold and dead.
Or add some debug logging? Five days into a revenue-blocking bug, if I couldn't repro manually or via tests, I would have logged the hell out of this code. No code path or metric would be spared.
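A minimal sketch of that approach, assuming a Python service (the function, logger name, and fields are hypothetical, not from the original post):

```python
import logging

# Turn on DEBUG output for the suspect subsystem.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("billing")

def create_subscription(user_id: str, plan: str) -> dict:
    # Log every entry point and every intermediate value along the
    # suspect code path until the bug surfaces in production logs.
    log.debug("create_subscription called: user_id=%s plan=%s", user_id, plan)
    sub = {"user_id": user_id, "plan": plan}
    log.debug("subscription created: %r", sub)
    return sub
```

Even this crude level of instrumentation would have shown the duplicated values being reused across calls long before day five.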
Load testing: yes, but unfortunately it's not that common (even though it should be). Acceptance testing: again, maybe, if they use 20 or so subscriptions in one batch, which may not be the case.

If you haven't fixed that alerting deficiency, then you haven't really fixed anything.