Exactly. It's similar in other (non-programming) fields: if you treat it as a "smart friend" it can be very helpful, but relying on everything it says to be correct is a mistake.

For example, I was looking at a differential equation recently and saw some unfamiliar notation[1] (Newton's dot notation), so I asked Claude why people use Newton's notation vs Lagrange's notation. It gave me an excellent explanation with tons of detail, which was really helpful. Except that every example it labeled "Lagrange" notation was actually in Leibniz notation. So it was still super helpful, and it didn't matter that it made this specific error, because I knew what it was getting at and I was treating it as a "smart friend" who was able to explain something specific to me. I would have a problem if I were using it somewhere where absolute accuracy was critical, because it made that same mistake throughout its explanation.

[1] https://en.wikipedia.org/wiki/Notation_for_differentiation#N...
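For reference, the first and second derivatives of x with respect to t in each of the three notations:

    Newton:   \dot{x},   \ddot{x}
    Lagrange: x',        x''
    Leibniz:  dx/dt,     d^2x/dt^2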
Yes, I like to think of LLMs as hint generators. It turns out that a source of hints is pretty useful when there's more to a problem than simply looking up an answer.
GPT: give me a working HTML example of the JavaScript beforeunload event, and onblur; I want to see how they work when I minimize a tab.

Ten seconds later, I'm playing with them.
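What it hands back is roughly a page like this (the exact markup varies from run to run, but this is the shape of it):

    <!DOCTYPE html>
    <html>
      <body>
        <p>Switch tabs, minimize the window, or try to close this page, then check the console.</p>
        <script>
          // Fires whenever the window loses focus, e.g. when the tab is
          // minimized or you switch to another window.
          window.addEventListener('blur', () => {
            console.log('blur at', new Date().toISOString());
          });

          // Fires when the user tries to close or navigate away from the page.
          // preventDefault()/returnValue ask the browser to show its
          // "leave site?" confirmation dialog.
          window.addEventListener('beforeunload', (event) => {
            event.preventDefault();
            event.returnValue = '';
          });
        </script>
      </body>
    </html>

One thing worth knowing: most browsers only show the beforeunload confirmation dialog if the user has interacted with the page first.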
Most people have only heard "Didn't an IBM computer beat the world champion?", and don't know that Kasparov psyched himself out when Deep Blue had actually made a mistake. I was part of the online analysis of the (mistaken) endgame move at the time that was the first to reveal the error. Kasparov was very stressed by that and other issues, some of which IBM caused ("we'll get you the printout as promised in the terms" and then never delivered). My friend IM Mike Valvo (now deceased) was involved with both matches. More info: https://www.perplexity.ai/search/what-were-the-main-controve...
I don't worry about that much at all, because my experience of learning is that you inevitably have to reconsider the fundamentals pretty often as you go along.

High school science is a great example: once you get to university you have to un-learn all sorts of things that you learned earlier because they were simplifications that no longer apply. Terry Pratchett has a great quote about this: https://simonwillison.net/2024/Jul/1/terry-pratchett/

For fields that I'm completely new to, the thing I need most is a grounding in the rough shape and jargon of the field. LLMs are fantastic at that - it's then up to me to take that grounding and those jargon terms and start building my own accurate-as-possible mental model of how that field actually works.

If you treat LLMs as just one unreliable source of information (like your well-read friend who's great at explaining things in terms that you understand but may not actually be a world expert on a subject) you can avoid many of the pitfalls. Where things go wrong is if you assume LLMs are a source of irrefutable knowledge.
I had a brain haemorrhage, so I cannot form sentences as well as I could. I nudge my AI to more accurately explain what I mean.

It is strange: I can read okay, but forming words is challenging.
GP claims in their profile to be Dan Green, and implies the username is to be read "Dan G's UX"... If that's true and just a coincidence, it's pretty dang funny. (I do doubt it, though.)
This perfectly echoes my experience with AI.

It’s not perfect, but AI for working with code has been an absolute game changer for me.
I don't think you are right. If you tested LLM text and random internet text for inaccuracies and utility, you'd probably have more luck with the LLM text.

For example, if you use an LLM to summarize this whole debate, you get a decent, balanced report incorporating many points of view. Many times the article generated from the chat thread is better than the original one. It's certainly better grounded in the community of readers: it debunks claims and represents many perspectives. (https://pastebin.com/raw/karBY0zD)
What is very useful for me: when I do research outside my field of expertise, I often don't even know what keywords to look for. An LLM can help with this.
What’s everyone’s coding LLM setup like these days? I’m still paying for Copilot through an open source Xcode extension, and truthfully it’s a lot worse than when I started using it.
This article is about how someone who DOES know a lot about what they’re trying to do can get huge value out of them, despite their frequent mistakes.
| "I understand this better than you do" twice in about 30 lines. Okay then.
I mean, sure, you do, but there's less off-putting ways to display your credentials... |
Your comment is not a good representation of how the experience actually is. There is nothing painful or annoying about it. If anything, it's a relief.
You're gonna get left in the dust by everyone else embracing LLMs.

I am ecstatic about LLMs because I already practice documentation-driven development, and LLMs perfectly complement this paradigm.
And the funny part is, these LLMs are amazing at writing YAML config files.

I always just let one write my first draft of Docker, k8s, and Terraform configs.
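A typical first draft looks something like this docker-compose file (the service names, images, and ports here are just placeholders):

    services:
      web:
        image: nginx:alpine        # placeholder image for whatever app you're running
        ports:
          - "8080:80"              # host:container port mapping
        depends_on:
          - cache
      cache:
        image: redis:7

As with generated code, it's a draft to review and correct, not something to apply blindly.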
Sounds like the author is trying really hard to find an edge use case for an LLM. Meanwhile on YouTube... "I Made 100 Videos In One Hour With Ai - To Make Money Online"
The article goes through a few use cases where LLMs are especially good. Your examples are very different, and are the cases where they perform especially poorly.

Asking a pure (i.e. no internet/search access) LLM for papers on a niche subject is doubling down on their weaknesses. That requires LLMs to have very high-resolution specific knowledge, which they do not have. They have a more coarse/abstract understanding from their training data, so things like paper titles, DOIs, etc. are very unlikely to persist through training for niche papers. There are some LLMs that can search the internet; those would likely be your best bet for finding actual papers.

As an experiment I tried your exact prompt in ChatGPT, which has the ability to search, and it did a search and surfaced real papers! Maybe your experiment was from before it had search access. https://chatgpt.com/share/a1ed8530-e46b-4122-8830-7f6b1e2b1c...

I also tried approaching this problem with a different prompting technique that generally tends to yield better results for me: https://chatgpt.com/share/9ef7c2ff-7e2a-4f95-85b6-658bbb4e04...

I can't really vouch for how well these papers match what you're looking for, since I'm not an expert on Hough transforms (I would love to know if they are better!). But my technique was: first ask it about Hough transforms. This lets me (1) verify that we're on the same page, and (2) load a bunch of useful terms into the context for the LLM. I then expand to the example of using Hough transforms for audio, and again can verify that we're on the same page and load even more terms. Now, when I ask it to find papers, it has way more stuff loaded in context to help it come up with good search terms and hopefully find better papers.

With regards to your criteria:

1. Code from an LLM should never be considered final, only a starting point. So the correctness of the LLM's output isn't super relevant, since you are going to be editing it to make it fully correct. It's only useful if this cleanup/correction is faster than writing everything from scratch, which depends on what you're doing. The article has great concrete examples of when it makes sense to use an LLM.

2. Yep, although asking questions/generating generic code would still be fine without confidentiality concerns. Local LLMs do exist, but I personally haven't seen a good enough flow to adopt one.

3. Strong disagree on this one. I find LLMs especially useful when I am learning. They can teach me, e.g., a new framework/library incredibly quickly, since I get to learn from my specific context. But I also tend to learn most quickly by example, so this matches my learning style really well. Or they can help me find the right terms/words to then Google.

4. +1. I'm not a huge fan of having an LLM write for me; I like it more as a thinking tool. Writing is my expression. It's a useful editor/brainstormer, though.

5. +1
Also agree that asking for academic papers seems to increase the potential for hallucination. But I don't know if I'm prompting it the best way in these scenarios.
I'm a C programmer, so I can with some effort gradually work my way through random Linux kernel things. But what I can do now instead is take a random function, ask GPT4 what it does and what subsystem it belongs to, and then ask GPT4 to write me a dummy C program that exercises that subsystem (I've taken to asking it to rewrite kernel code in Python, just because it's more concise and easy to read).
I don't worry at all about GPT4 hallucinating stuff (I'm sure it's doing that all the time!), because I'm just using its output as Cliff's Notes for the actual kernel code; GPT4 isn't the "source of truth" in this situation.
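To make that concrete, the kind of "dummy program" I'm talking about looks something like the following (epoll is picked purely as an illustration; the real subsystem would be whatever function you asked about):

    /* Minimal sketch of a dummy program that exercises the epoll subsystem. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/epoll.h>

    int main(void) {
        int epfd = epoll_create1(0);               /* create an epoll instance */
        if (epfd < 0) { perror("epoll_create1"); return 1; }

        /* Watch stdin for readability. */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
            perror("epoll_ctl");
            return 1;
        }

        printf("waiting up to 5s for stdin to become readable...\n");
        struct epoll_event out;
        int n = epoll_wait(epfd, &out, 1, 5000);   /* returns number of ready fds */
        printf("epoll_wait returned %d\n", n);

        close(epfd);
        return 0;
    }

It doesn't need to be perfect; it just gives you something concrete to step through alongside the kernel source.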