Yeah, I moved the paragraphs around and pasted the quote in where it belonged, forgetting that it had been pasted at the top. Too late to edit, though.
One thing I've found in doing a lot of coding with LLMs is that you're often better off updating the initial prompt and starting fresh rather than asking for fixes.

Having mistakes in context seems to 'contaminate' the results, and you keep getting more problems even when you're specifically asking for a fix. It makes some sense, as LLMs are generally known to respond much better to positive examples than to negative examples. If an LLM sees the wrong way, it can't help being influenced by it, even if your prompt says very sternly not to do it that way. So you're usually better off re-framing what you want in positive terms. I actually built an AI coding tool to help enable the workflow of backing up and re-prompting: https://github.com/plandex-ai/plandex
I'd refer you to a comment I made a few weeks ago on an HN post, to the same effect, which drew a further comment from gwern here: https://news.ycombinator.com/item?id=40922090

LSS: metaprogramming tests is not trivial, but it is straightforward, given that you can see the code, the AST, and associated metadata, such as generated test inputs. I've done it myself, more than a decade ago. I've described it as a mix of literate programming (noting the traps you referred to, and their anachronistic quality relative to both the generated tests and the generated code they test) wrapped up in human-computer sensemaking: what the AI sees is often, at best, a gap in its symbolic representation that is imaginary rather than real, so it requires iterative correction to hit its user's target, just like a real test team interacting with a dev team. In my estimation, it's actually harder to explain than it is to do.
This type of bug is trivial for GPT to fix, though. It was born for this. Sometimes it does generate real footguns, but this sounds like an example from an earlier generation of generative AI.
The file object is named "openFile", but used as "f". The class is defined as "CachedFileIterator", but used as "CacheingFileIterator". That's two typos, before discussing the actual code.
One is that the variable is called openFile and not f. I don't know enough Python to see anything else wrong with that line, but would love to know too, since I wrote such a line just last week.
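A minimal reconstruction of the mismatch being described (only the identifiers `openFile`, `f`, `CachedFileIterator`, and `CacheingFileIterator` come from the comments above; the surrounding code is hypothetical):

```python
class CachedFileIterator:
    """Sketch: iterates over a file, caching lines as it goes."""

    def __init__(self, path):
        self.path = path
        self.cache = []

    def lines(self):
        with open(self.path) as openFile:
            # The generated code reportedly iterated over `f` here, which
            # raises NameError; the bound name is openFile.
            for line in openFile:
                self.cache.append(line)
                yield line

# The call site reportedly misspelled the class as CacheingFileIterator,
# a second NameError. The correct spelling:
# it = CachedFileIterator("data.txt")
```

Both typos are name mismatches the interpreter only catches at runtime, which is why they survive a casual read of generated code.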
The models I've used don't make typos on variable names that already exist in the context. Typos are not the failure mode; this is literally the easiest text prediction task they can do.
The second one actually seems to be harder for some people. It requires separating the syntax from the semantics.

Agreed on all further ones, however.
The worst is when someone knows all the keywords to make it seem like they're technical, but after talking for a few days you realize: wait, they really don't know wtf they're talking about!
Coding requires a willingness to understand and manipulate systems made of unbreakable rules. Most people don't want to deal with such an uncompromising method of communication.
And then there are those of us who find computers to be more bearable than people...

At least computers don't think it's their god-given right to treat you like garbage.
I can guarantee that all these users are following some "hustle university" course peddled by a Twitter influencer. Crypto and AI are the two favorite words of all these get-rich-quick scams.
I won't say all investors are entitled and overconfident, inspired by grifters, emboldened by survivorship bias, and motivated by greed. That would be rude.
Adding a couple of aliases for your endpoint might be a decent middle ground, where you throw the hallucinating "AI" a bone without contorting yourself to its will.
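A minimal sketch of that aliasing idea, assuming a plain path-to-handler routing table; the endpoint paths and handler name here are hypothetical, not from any real API:

```python
# Hypothetical handler for the canonical endpoint.
def get_users(request=None):
    return {"status": 200, "body": "user data"}

# Canonical routes.
ROUTES = {
    "/api/v1/users": get_users,
}

# Paths an LLM plausibly hallucinates, mapped back to the canonical path.
ALIASES = {
    "/api/v1/user": "/api/v1/users",
    "/api/users": "/api/v1/users",
}

def dispatch(path, request=None):
    # Resolve any alias to its canonical path before the route lookup.
    path = ALIASES.get(path, path)
    handler = ROUTES.get(path)
    if handler is None:
        return {"status": 404, "body": "not found"}
    return handler(request)
```

Keeping the aliases in a separate table makes it easy to see (and later drop) the concessions made to hallucinated clients, without touching the canonical routes.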
Another consideration: if a popular AI model hallucinates an endpoint for your API for one customer, chances are another customer will run into the same situation.
Doesn't solve the problem that the budget they had in mind for the app was $300, while an experienced dev can see right away that this is going to be a $20K project for v1, before change requests.
So they'll try five $300 fixes, and then either give up entirely or figure out that maybe the 100 developers they ignored, who told them it's gonna cost $20K, were right.
And this is why people continue to do it. With such a discrepancy, why not? It's a proof of concept: you find out whether it adds value and makes money, enough money to pay the $20K.
99% of them were never in the bespoke software market to begin with. If the $300 didn't work out, they'll cobble something together in Excel or use some SaaS that comes close enough.
There's room for all of us in this industry. What someone is unwilling to do is just an opportunity for someone else to pick up the yoke.
One thing I loved about doing technical enterprise sales is that I’d meet people doing something I knew little or nothing about and who didn’t really understand what we offered but had a problem they could explain and our offering could help with.
They’d have deep technical knowledge of their domain and we had the same in ours, and there was just enough shared knowledge at the interface between the two that we could have fun and useful discussions. Lots of mutual respect. I’ve always enjoyed working with smart people even when I don’t really understand what they do.
Of course there were also idiots, but generally they weren’t interested in paying what we charged, so that was OK.
> Helping a customer solve challenges is often super rewarding, but only when I can remove roadblocks for customers who can do most of the work themselves.
So I feel a lot of sympathy for the author — that would be terribly soul sucking.
I guess generative grammars have increased the number of "I have a great idea for a technical business, I just need a technical co-founder" people who think that an idea is 90% of it and have no idea what technical work actually is.