> it can't write shit on its own

It sure can when you use multiple agents having discussions and refining each other's output. CrewAI and AutoGen come to mind.
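For anyone who hasn't seen the multi-agent pattern, here is a rough sketch of what it looks like with AutoGen (pyautogen 0.2-era API; the model, prompts, and round count are illustrative assumptions, and class names shift between versions):

```python
import autogen

# Illustrative config; swap in whatever model/key you actually use.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

writer = autogen.AssistantAgent(
    name="writer",
    system_message="You write the first draft of the code.",
    llm_config=llm_config,
)
critic = autogen.AssistantAgent(
    name="critic",
    system_message="You review the writer's draft and point out bugs and unclear naming.",
    llm_config=llm_config,
)
user = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",     # fully automated discussion
    code_execution_config=False,  # don't execute generated code in this sketch
)

# The two assistants iterate on each other's output inside a group chat.
chat = autogen.GroupChat(agents=[user, writer, critic], messages=[], max_round=6)
manager = autogen.GroupChatManager(groupchat=chat, llm_config=llm_config)
user.initiate_chat(manager, message="Write a function that parses ISO-8601 timestamps, then refine it.")
```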
I consider assembly to be a basic skill like arithmetic. Most of us have a calculator on hand 24/7, but being able to look at a clock and say 'I've got 12 minutes' is useful.
> and concepts like subnets are basically emulated for the sake of legacy systems.

Until you scale up a Kubernetes cluster so far that the abstractions begin to leak.
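A concrete place the leak shows up is pod-CIDR arithmetic. A back-of-the-envelope sketch, assuming the common default of a /16 cluster pod CIDR carved into /24 per-node ranges (the exact numbers depend on how your network plugin is configured):

```python
import ipaddress

# Illustrative defaults, not universal: many clusters ship with a /16 pod CIDR
# and hand each node a /24 slice of it.
cluster_pod_cidr = ipaddress.ip_network("10.244.0.0/16")
per_node_prefix = 24

node_ranges = list(cluster_pod_cidr.subnets(new_prefix=per_node_prefix))
usable_per_node = node_ranges[0].num_addresses - 2  # minus network and broadcast addresses

print(f"nodes before the pod CIDR runs out: {len(node_ranges)}")  # 256
print(f"pod IPs available on each node:     {usable_per_node}")   # 254
```

Past that node count, "subnets are just legacy" stops being true and you are back to planning address space like it's a physical network.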
This is very well put. I think the next point of discussion is what fraction of "software developers" "need" to know how to code in the deep way you describe.
You can upgrade any family member to the Kagi Ultimate plan for $15/mo, which gives unlimited access to GPT-4, Claude 3 Opus, Gemini 1.5 Pro, and other state-of-the-art models.
An alternative is to keep the lab assignment but grade them on a final exam. Then it is in their interest to do the lab assignment by themselves rather than with an LLM.
I'm not sure if you're implying that I'm making shit up by putting "people" in quotes, but here's at least a little evidence. [1] [2] [3] Otherwise it's been in-person conversations and I didn't record them; there's a spectrum to how completely they suggest I allow AI.

Everything else you said I more or less agree with. Obviously if someone wants to cheat they're going to do it, but I feel that until we restructure the job market to not take GPAs as seriously (which I think I'd probably be on board with), we should at least make cursory efforts to minimize cheating. I'm not saying we need Fort Knox, just basic deterrence.

I'm not an adjunct anymore, partly because I took stuff way too personally and it was kind of depressing me, partly because it was time consuming without much pay, but largely because I realized that most universities are kind of a racket (the textbook corporations in particular are a special kind of evil).

[1] https://news.ycombinator.com/item?id=36089826
Do you also think that primary school students should not be taught, say, multiplication? Because they can, of course, use a calculator. No need to ask them what 3 times 4 is.
If you're trying to do things efficiently, you can't afford not to do code reviews. Only well-funded organizations can afford to write the same code twice because the first attempt had to be thrown out.
It's definitely possible to inject malicious code that humans don't spot; there was a whole competition dedicated to humans doing this in C, well before LLMs: https://en.wikipedia.org/wiki/Underhanded_C_Contest

Now I'm wondering: can you put in a comment which the LLM will pay attention to, such that it generates subtle back-doors? And can this comment be such that humans looking at the code don't realise this behaviour is due to the comment?
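For a flavour of what "underhanded" means, here is a hypothetical Python-ised example (mine, not from the contest): the comment reads like a harmless compatibility note, but it makes a hole in the check look deliberate and safe to a reviewer skimming the diff.

```python
def is_authorized(user: dict, resource: dict) -> bool:
    # Records created before the ACL migration have no "owner" field yet;
    # treat them as readable by the requester so legacy data keeps working.
    owner = resource.get("owner")
    return user.get("is_admin", False) or owner is None or owner == user["id"]

# The "legacy data" comment justifies the `owner is None` branch, but anyone who can
# create a resource without an owner field gets access to it from any account.
```

Whether a comment alone can steer a code-completion model into writing that branch is exactly the open question above.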
So do I, and worse. Look, all I’m saying is I’m thankful for this crutch that helps me deal with the limitations of my associative memory, so long as it can’t think and can’t replace me entirely.
I don’t need 90% of my IDE features. For me personally, it’s a fantastic way to learn. I could definitely see people just using it and not actually learning, but to each their own.
> Having an oracle that knows all answers is useless if you don't know what to ask.

But that's not what these LLM systems are. https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.

It is exactly what happened to you: it wrote bullshit. Plausible bullshit, but bullshit nonetheless.
Well before AI co-pilots, something happened to the good old admins: they started to disappear, only to be replaced by "AWS devops" (their actual job title) who had never wired a network with routers, switches, and cables. I noticed they lacked basic networking knowledge and couldn't even set up networking inside AWS; they just didn't know what a gateway, a NAT, or a subnet was.
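For contrast, those concepts aren't exotic even inside AWS. A minimal boto3 sketch of a VPC with a public subnet, an internet gateway, and a NAT gateway (region, CIDRs, and the overall layout are illustrative assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative region

# VPC plus one public subnet.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public_subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Internet gateway: how the public subnet reaches (and is reached from) the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route table with a default route through the internet gateway.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)

# NAT gateway in the public subnet, so private subnets can reach out without being exposed.
eip = ec2.allocate_address(Domain="vpc")
nat_id = ec2.create_nat_gateway(SubnetId=public_subnet_id,
                                AllocationId=eip["AllocationId"])["NatGateway"]["NatGatewayId"]

print(vpc_id, public_subnet_id, igw_id, nat_id)
```

The point isn't the API calls; it's that every call maps onto a physical-network concept (gateway, route, NAT, subnet) that still has to be understood.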
Similar things are happening with AI co-pilots. We have an increasing number of people who "write code", but the number of people who can understand and review code is not increasing.

There is also the problem of injecting ethics and politics into those tools, which can produce silly results. I asked Bard to write me a Python function to turn the US Constitution into a palindrome (a one-liner; see the sketch below). Bard refused and gave me a lecture on how the US Constitution is too important to be played with in such a trivial fashion. I then asked it to produce code that turns the US national anthem into a palindrome, and it refused again. So I asked it to do the same with the Russian national anthem, and it spat out code without telling me off.

I then asked it to generate code for simple tasks and it did an OK job, except the formatting and the fonts used were different every time, because it just lifted code from different webpages and recombined it like a massively hungover student waking up to realise he's supposed to hand in the assignment in one hour.
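For reference, the refused request amounts to something like this (a minimal sketch; the function name and file name are my own):

```python
def make_palindrome(text: str) -> str:
    """Append the reversed text so the result reads the same forwards and backwards."""
    return text + text[::-1]

# e.g. with any document loaded as a string:
# with open("us_constitution.txt", encoding="utf-8") as f:
#     print(make_palindrome(f.read())[:120])
```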