(comments)

Original link: https://news.ycombinator.com/item?id=43619768

Hacker News users are discussing the shift from traditional search engines to large language models (LLMs) such as ChatGPT and Perplexity for information retrieval. The original poster has switched entirely to ChatGPT. Some find LLMs more convenient for open-ended research and appreciate the interactive, non-judgmental discussions they offer. However, many still prefer search engines like Google, DuckDuckGo, and Kagi, especially for finding specific websites, documentation, product reviews, and authoritative sources. Recurring concerns about LLMs include accuracy, hallucinations, and the lack of context for verifying information. Users value the ability to evaluate sources and browse the broader range of content that traditional search surfaces. Some use LLMs to generate keywords that improve traditional search queries or to get quick summaries, while others see them as a threat to critical thinking and deep understanding. The debate highlights the strengths and weaknesses of both approaches, depending on the specific information need and user preference.


  • Original thread
    Ask HN: Do you still use search engines?
    25 points by davidkuennen 6 hours ago | hide | past | favorite | 69 comments
    Today, I noticed that my behavior has shifted over the past few months. Right now, I exclusively use ChatGPT for any kind of search or question.

    Using Google now feels completely lackluster in comparison.

    I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.

    How about you?


    Yes, I'm still using Google as I haven't found LLMs useful as a search engine replacement.


    Search is primarily a portal - you know a particular resource exists, you just don't know its exact URL.

    You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."

    LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

    Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.



    Yes, but increasingly rarely.

    I mostly use Perplexity for search, sometimes ChatGPT. Only when I am looking for something _very_ specific do I use a traditional search engine.

    Dropping usage of search engines compounded by lack of support led to me cancelling my Kagi subscription and now I just stick with Google in the very rare occasions that I use a search engine at all. For a dozen searches or so a month, it wasn't worth it to keep paying for Kagi.



    I still prefer traditional search engines over LLMs, but I admit their results feel worse than they have historically.

    I don't like LLMs for two reasons:

    * I can't really get a feel for the veracity of the information without double checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from an LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.

    * I'm missing out on learning opportunities that I would usually get by reading or skimming through a larger document while trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information gathering process, I like it.

    I'd rather use an AI assistant that could help me search and curate the results, instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.



    Am I the only one who double checks all of the information presented to me, from any source?


    Unless someone's life is on the line, usually eyeballing the source URL is enough for me. If I'm looking for API documentation, there are a few well-known URLs I trust as authoritative. If I'm looking for product information, same thing. If the search engine points me to totallyawesomeproductleadgen19995.biz, I'm probably not getting reliable information.

    An LLM response without explicit mention of its provenance... There's no way to even guess whether it is authoritative.



    How deep do you go? Where do you stop?

    Just because you can find multiple independent sources saying the same thing doesn't mean it's correct.



    You evaluate the credentials and authenticity of the sources you're reading and judge accordingly.


    Nothing means anything then.


    Overall, GOOD! But LLMs don't work as expected in some cases. For technical solutions, they usually don't take the software version as a parameter, which may cause issues. So I always have to cross-check the solution in forums and documentation.


    The potential of AI to augment human creativity is immense, but we must thoughtfully consider the implications. While AI-generated content is impressive, it's crucial to establish clear boundaries, maintain human oversight, and ensure AI is used ethically to enhance, rather than replace, human artistry and originality.


    Yes, LLMs are no match for my decades of search skills.


    Yes. Why would I use AI to find information?


    It’s tough to find anything useful these days because of all the spam - especially AI-generated content. If I do use a search engine, I usually use it to find something on Reddit.


    Depends on content. Sometimes I use GPT to find stuff I'm too lazy to dig for, when I know Google would more likely waste my time, but generally I still use Google. There are a lot of miscellaneous searches where an LLM would do worse than a search engine (currency exchange rates, stock prices, quick facts, etc.). Though I wish Google had an option to block some sites from showing up - some searches are just filled with garbage, and I would like to block the whole domain from ever appearing.


    Actually "quick facts", as I define them, are much better with an LLM for me. I prefer not having to wade through pages of links to sites trying to sell me something.


    I use Kagi, but I will say, the Quick Answer (append a question mark to your query for an LLM-based answer) has been way more useful than I initially thought.


    Yes, Kagi. Don’t use ChatGPT at all. Sometimes use Claude


    I use Kagi and sometimes DDG. When I do a search I'd rather do my own reading than be lied to. It's not even like using it for code, when you can quickly iterate if needed - there is no way to verify the information you got is correct, and that is a major problem imo.


    I'll use Claude about 75% of the time, and then a search engine about 25% of the time. That 25% of the time, I'm usually looking for:

    - Specific documentation

    - Datasets

    - Shopping items

    - Product reviews

    But for the search engines I use, their branded LLM response takes up half of the first page. So that 25% figure may actually be a lot smaller.

    It's important to note that these search engine LLM responses are often ludicrously incorrect -- at least, in my experience. So now I'm in this weird phase where I visit Google and debate whether I need to enter search terms or some prompt engineering in the search box.



    I use DuckDuckGo, with the occasional reddit !g appended if I'm looking for something experience-based.

    For me, searches fall into one of three categories, none of which are a good fit for LLMs:

    1. A single business, location, object, or concept (I really just want the Google Maps or Wikipedia page, and I'm too lazy to go straight to the site). For these queries, LLMs are either overkill or outdated.

    2. Product reviews, setup instructions, and other real-world blog posts. LLMs want to summarize these, and I don't want that.

    3. Really specific knowledge in a limited domain ("2017 Kia Sedona automatic sliding door motor replacement steps," "Can I exit a Queue-Triggered Azure Function without removing it from the queue?"). In these cases, the LLMs are so prone to hallucination that I can't trust them.



    In response to the final sentence, you can work around this by breaking your problem or question down into smaller pieces. Essentially forcing it to reason, manually.


    100% still search first. If I am not super knowledgeable about the domain I am searching in, I use an AI to get me keywords and terminology and then search.

    At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.

    However, I am missing forum-style communities more and more. Sometimes I don't want the correct answer; I want to know what someone who has been in the trenches for 10 years has to say. For my day job I can just make a phone call, but for hobbies, side projects, etc. I don't have the contacts built up, and I don't always have local interest groups that I can tap for knowledge.



    Nah. I'm perfectly conscious of the fact that ChatGPT can't be trusted with searches. Google is still my daily driver.


    My ChatGPT “searches the web” and provides URLs for its sources as well.


    True, but that doesn't mitigate the problem I have with using LLMs as a search engine replacement. The issue I have is that LLMs "predigest" things and present you with the sources that are relevant to its response.

    However, it still blinds you to the larger picture. Providing supporting sources is all well and good, but doesn't help you with the larger view. I want the larger view.



    No, I go straight to GPT. Because I’m not usually searching for a webpage. What I’m really looking for is to learn through the course of an interactive discussion. Where I can ask any question no matter how stupid it is. Imagine a patient elderly colleague who will never lose their temper or mock you. Sometimes they get things wrong, but that’s where critical thinking comes in.


    I thought of one more thing I want to add. GPT listens to you. It makes you feel heard. Let’s say I ask a question but I have a strong bias. For example, what if I said “JavaScript is stupid, why can’t we go back to using Java Server Pages?”

    Instead of clowning me or making me feel invalidated it would present an argument that covers both sides and would probably start with “JSPs have certain advantages, and I understand why you would feel that way. Here is a list of pros and cons…”



    It depends on what I'm after. I still use regular searches quite a bit.

    But a lot of my classic ADHD "let's dive into this rabbit hole" google sessions have definitely been replaced by AI deep searches like Perplexity. Instead of me going down a rabbit hole personally for all the random stuff that comes across my mind, I'll just let perplexity handle it and I come back a few minutes later and read whatever it came up with.

    And sometimes, I don't even read that, and that's also fine. Just being able to hand that "task" off to an AI to handle it for me is very liberating in a way. I still get derailed a bit of course, but instead of losing half an hour, it's just a few seconds of typing out my question, and then getting back to what I've been doing.



    Yes, DDG for 95% of issues. Using an AI to search seems really, really, really dumb to me.


    That’s a perfectly fine answer, but providing no supporting arguments makes this a very difficult conversation.


    I’d say come back in a few years for a bad take. But this is already a bad take.

    A query in a regular search engine can at best perform like an LLM-based provider like Perplexity for simple queries.

    If you have to click or browse several results forget it, makes no sense not to use an LLM that provides sources.



    I like to see multiple ideas or opinions on a subject. LLMs seem to distill the knowledge and opinions in ways that are more winner-take-all, or at most only the top few samples. Even if you prompt for a deeper sampling, it seems the quality drops (like the resolution reduces for each), and it's still based on popularity vs merits for some types of data.


    > If you have to click or browse several results forget it, makes no sense not to use an LLM that provides sources.

    I just searched for "What is inherit_errexit?" at Perplexity. Eight sources were provided and none of them were the most authoritative source, which is this page in the Bash manual:

    https://www.gnu.org/software/bash/manual/html_node/The-Shopt...

    Whereas, when I searched for "inherit_errexit" using Google Search, the above page was the sixth result. And when I searched for "inherit_errexit" using DuckDuckGo, the above page was the third result.

    I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.
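    (Editor's note: for context, `inherit_errexit` is a `shopt` option introduced in Bash 4.4. By default, bash clears `set -e` inside `$(...)` command substitutions; enabling the option makes the subshell keep it. A minimal sketch of the behavior the query above is about:)

    ```shell
    #!/usr/bin/env bash
    set -e

    # Default behavior: the command substitution's subshell does NOT
    # inherit `set -e`, so the `false` is silently ignored and the
    # echo after it still runs.
    out=$(false; echo "kept going")
    echo "default: $out"            # prints "default: kept going"

    # With inherit_errexit (bash >= 4.4) the subshell keeps errexit:
    # the substitution aborts at `false`, so the assignment fails and
    # the || fallback fires instead.
    shopt -s inherit_errexit
    out=$(false; echo "unreachable") || out="substitution aborted"
    echo "inherit: $out"            # prints "inherit: substitution aborted"
    ```

    The authoritative description lives in the Bash manual's Shopt builtin section, which is exactly the page the commenter was trying to surface.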



    Why would you even search for that out of the context of the IDE where you're coding or writing documentation? If you're writing bash you'd have all those man pages loaded in context for it to answer questions and generate code properly.


    > I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.

    This makes a lot of sense to me. As a young guy in the 90's I was told that some day "everyone will be fluent in computers" and 25 years later it's just not true. 95% of my peers never developed their fluency, and my kids even less so. The same will hold true for AI; it will be what smartphones were to PCs: a dumbed-down interface for people who want to USE tech, not understand it.



    I 100% use search engines, especially to find doc that I know exists. Google/DDG are so fast.

    If it is more of an open ended question that I am not sure there'll be a page with an answer for, I am more likely to use ChatGPT/Claude.



    Yes, I still use search engines. So do all but one of my friends, both technical and not. I have not found LLMs to be anything close to a good replacement for them.


    I'm using ChatGPT or Perplexity as my defaults for any research/questions I have (open research). I do go to Google when I have a specific company I want to quickly check some details (close research).


    I’m a Kagi subscriber; I like it but I use it less and less.

    The more time goes by, the more I use both ChatGPT and Claude to search (at the same time, to cross-check the results), with Kagi used either to check the results when I know strictly nothing about the subject or for specific searches (restaurants, movie showings…).

    I’ve almost completely stopped using Google.



    A question mark after a search on Kagi gives you an AI summary, the latency is good.

    If you go for the highest tier subscription on kagi, you get https://kagi.com/assistant which gives you a huge swath of AI models to handle your searching.



    True, but I still pay for both Claude and ChatGPT so that I can use the latest models.


    Considering I have been using different search engines since Excite and Alta Vista, the state of modern search is worse than when crawlers were in their infancy. It is so front loaded with SEO, a search for a simple doc reference gives you ten pages of links back to the sales and marketing pages for 12 applications that do something similar.

    AI is a better search for now because SEO and paid prioritization in search hasn't infested that ecosystem yet but it's only a matter of time.

    I dropped Google search years ago, but every engine is experiencing enshittification.



    ChatGPT was 12 months ago for me

    I use Claude pretty exclusively, and GPT as a backup, because GPT errors too much, tries to train on you too much, and has a lackluster search feature. The web UIs are not these companies' priority, as they focus more on other offerings and API behavior. Which means any gripe will not be addressed, and you have to just go for the differentiating UX.

    For a second opinion from Claude, I use ChatGPT and Google pretty much the same amount. Raw google searches are just my glorified reddit search engine.

    I also use offline LLMs a lot. But my reliance on multimodal behavior brings me back to cloud offerings.



    Accuracy issues aside, a draw that I feel towards using e.g. ChatGPT is that the information is displayed in a more consistent way. When using a search engine and opening a bunch of the results in tabs, I have to reorient myself to each site because they all have different visual designs.


    I subscribe to https://kagi.com/. I use search to find expert and authoritative sources of information with human authors who can be held responsible for their contents, and that I can cite in my own work. I’m not interested in the output of a copy-paste machine that steals others’ work, makes things up, and spits out prose worse than a politician’s.


    Nope. I’ve despised using Google Search for years, and thought it would eventually be replaced with another better search engine. At one point I even switched to a paid Kagi subscription for a few months and it was sooo much better than Google. I only stopped using Kagi because I’ve completely switched to ChatGPT now. It’s a really great search engine but for my daily use ChatGPT is more convenient and faster to use.


    Youtube's search engine is still good for finding songs I want to listen to.


    I used an AI tool for the first time this weekend to get a military CAC to authenticate to websites through Firefox on Arch. It took more than half a dozen uses of the AI tool to get what I was looking for though. Super edge case and even the AI struggled like a human.

    Yes, I still use search engines and almost always find what I need in long form if I can’t figure it out on my own.



    How do you know the information it generates is correct?

    Just now for example I wanted to know how Emma Goldman was deported despite being a US citizen. Or whether she was a citizen to begin with. If an LLM gave me an answer I for sure would not trust it to be factual.

    My search was simple: Emma Goldman citizenship. I got a Wikipedia article claiming it was argued that her citizenship was considered void after her ex-husband’s citizenship was revoked. Now I needed to confirm it from a different source and also find out why her ex’s citizenship was revoked. So I searched his name + citizenship and got a New Yorker article claiming it was revoked because of some falsified papers. Done.

    If an LLM told me that, I simply wouldn’t trust it and would need to search for it anyway.



    Switched over to DuckDuckGo a month ago. Results aren’t always great but it works 90% of the time.

    I use perplexity pro + Claude a lot as well. Maybe too much but mostly for coding and conversations about technical topics.

    It really depends on intent.

    I have noticed that I’ve started reading a lot more. Lots of technical books in the iPad based on what I’m interested in at the moment.



    I'm also using ChatGPT with its search enabled, or Perplexity, for searching almost anything. Way more accurate and to the point.

    I feel like Google Search will become obsolete in a short time, and they'll have to make big changes to their UX and search engine.

    Although I guess most of its user base is still relying on the old ways, so changing it right now would have huge impacts on older users.



    Interested in what your benchmark for accuracy is. I feel like for my searches that I am normally looking at a few different sources and cross referencing them to come to a conclusion about what is best for me. Do you find that AI is good at automatically figuring out what is best for you?

    For instance, I wanted help cooking coq au vin yesterday. I’ve cooked it before, but I couldn’t remember what temperature to set the oven to. I read about five recipes (which were all wildly different) and chose the one that best suited the ingredients and quantities I was already using.

    I asked chat gpt for a coq au vin recipe, and I’ll just say I won’t be opening a restaurant using ChatGPT as my sous chef anytime soon.



    I still use Google Scholar, Right Dao for deep search (tens of thousands of results), SearX instances, and Kagi for now, but it's not worth the $10/mo for only ~200 results per search.

    The serendipity of doing search with your own eyes and brain on page 34 of the results cannot be overstated. Web surfing is good and does things that curated results (i.e., Google's <400, Bing's <900, Kagi's <200, an LLM's very limited single result) cannot.



    Whether I reach for AI or Search depends on two questions. Am I looking for a site or information? If I'm looking for information, how easily can I verify it?

    Websites have all kinds of extra context and links to other stuff on them. If I want to learn/discover stuff then they are still the best place to go.

    For simple informational questions, all of that extra context is noise; asking gpt "what's the name of that cpp function that does xyz" is much faster than having to skim over several search results, click one, wait for 100 JavaScript libraries to load, click no on a cookies popup and then actually read the page to find the information only to realise the post is 15 years old and no longer relevant.

    There are times where I know exactly what website to go to and where information is on that site and so I prefer that over AI. DDGs bangs are excellent for this: "!cpp std::string" and you are there.

    Then there's the verifiability thing. Most information I am searching for is code which is trivial to verify: sometimes AI hallucinates a function but the compiler immediately tells me this and the end result is I've wasted 30 seconds which is more than offset by the time saved not scrolling through search.

    Examples of things that aren't easy to verify: when's this deprecated function going to be removed, how mature is tool xyz.

    Of course, there's also questions about things that happened after the AI's knowledge cutoff date. I know there are some that can access the internet now but I don't think any are free



    Way less than I used to. I have been a pretty advanced user since before google. The combination of AI and quick auto links to wikipedia articles on iOS have replaced much of it. The one place I still use it extensively is in local searches for businesses and when trying to find a brand or business that I know if they don't have an app.


    I use DDG for normal stuff, many times a day. I use LLMs for difficult to find stuff or to discover keywords.

    They can be very useful, especially when looking for something closely adjacent to a popular topic, but you've got to check carefully what they say.



    Yes, all my searching is using Google and I haven't had any issues with the results or finding what I want.


    Sure. It's sometimes faster to do "allowable attributes for CSS visibility" and visually scan the results for the keywords.


    I mainly use search engines indirectly via Copilot (the app). It uses Bing in the background to give current results, so I can ask it about what happened yesterday.


    I fundamentally cannot trust a searching system that includes a disclaimer that it can make stuff up (hallucinate) and there's nothing you can do about it.


    About a year ago it was 100% search engine. Today it's closer to 50/50 search/Chatgpt


    Not really, no. My peers and I were constantly opening ChatGPT when probing new topics. But now, with Gemini integrated into Google, and given the hallucinations of LLMs, seeing the SEO results along with the AI summary has become my go-to choice. The one thing I find extremely frustrating (which I hope Google fixes) is not being able to continue the conversation with Gemini when I have follow-up questions.

    I think this also stems from a new design paradigm emerging in the search domain of tech. The content results and conversational answers are merging – be it Google or your Algolia search within your documentation, a hybrid model is on the rise.



    I usually do both at the same time. Ironically because Google.com is the shortest path to Gemini.

    The other day I was also searching for something dumb: how to hammer a nail into concrete.

    Google will find me instructions for a hammer-drill... no I just have a regular hammer. There's a link from wikiHow, which is okay, but I feel like it hallucinates as much as AI. Actually I just opened the link and the first instruction involves a hammer drill too. The second one is what I wanted, more wordy than ChatGPT.

    Google then shows YouTube which has a 6 minute video. Then reddit which has bad advice half the time. I'm an idiot searching for how to hammer nails into a wall. I do not have the skill level to know when it's BS. Reddit makes me think I need a hammer drill and a fastener. Quora is next and it's even worse. It says concrete nails bend when hit, which even I know is false. It also convinces me that I need safety equipment to hit a nail with a hammer.

    I just want a checklist to know that I'm not forgetting anything. ChatGPT gives me an accurate 5-step plan and it went perfectly.



    I am frequently disappointed with the results I obtain from search engines, but in some of these cases I can find the things I'm looking for by tweaking the advanced search settings.

    On the other hand every time I've used language models to find information I've gotten back generic or incorrect text + "sources" that have nothing to do with my query.



    I use searxng


    Yes, because Google also has AI and it’s integrated into my browser bar; ChatGPT is just a secondary tool to me.

    If I need something more complex like programming, talk therapy, or finding new music then I’ll hop on over to Chat.



    Almost all of my “searches” are now done by either ChatGPT or Claude.

    I'm still using Google for searches on Reddit these days because Reddit's own search engine is terrible.



    The idea of taking an answer from any black box is profoundly unacceptable. Even if the black box didn't hallucinate. Why wouldn't I prefer to follow a link to a site so that I can evaluate its trustworthiness as a source?

    Why would I want to have a conversation in a medium of ambiguity when I could quickly type in a few keywords instead? If we'd invented the former first, we'd build statues of whoever invented the latter.

    Why would I want to use a search service that strips privacy by forcing me to be logged in and is following the Netflix model of giving away a service cheap now to get you to rely on it so much that you'll have no choice but to keep paying for it later when it's expensive and enshittified?











