Ask HN: Do you still use search engines?

340 points by davidkuennen 5 days ago

Today, I noticed that my behavior has shifted over the past few months. Right now, I exclusively use ChatGPT for any kind of search or question.

Using Google now feels completely lackluster in comparison.

I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.

How about you?

wavemode 5 days ago

Search is primarily a portal - you know a particular resource exists, you just don't know its exact URL.

You hear about this new programming language called "Frob", and you assume it must have a website. So you google "Frob language". You hear that there was a plane crash in DC, and assume (CNN/AP/your_favorite_news_site) has almost certainly written an article about it. You google "DC plane crash."

LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

Where LLMs will take over from search is when it comes to open-ended research - where you don't know in advance where you're going or what you're going to find. I don't really have frequent use cases of this sort, but depending on your occupation it might revolutionize your daily work.

  • Modified3019 5 days ago

    IMO an example of a good use case for an LLM, which would be otherwise very hard to search for, is clarifying vague technical concepts.

    Just yesterday I was trying to remember the name of a vague concept I’d forgotten, with my overall question being:

    “Is there a technical term in biology for the equilibrium that occurs between plant species producing defensive toxins, and toxin resistance in the insect species that feed on those plants, whereby the plant species never has enough evolutionary pressure to increase its toxin load enough to kill off the insect that is adapting to it”

    After fruitless searching around because I didn’t have the right things to look for, putting the above in ChatGPT gave an instant reply of exactly what I was looking for:

    “Yes, the phenomenon you're describing is often referred to as evolutionary arms race or coevolutionary arms race.”

    • tofof 5 days ago

      While I do like LLMs for these tasks, unfortunately this one failed you but was a near enough miss that you couldn't see it. What you were really looking for is the Red Queen problem/hypothesis/race, named after a quote from Through the Looking Glass, with the Queen explaining to Alice: "Now, here, you see, it takes all the running you can do, to keep in the same place." In particular, the Red Queen term is specifically the equilibrium you inquired about, where relative fitness is unchanging, rather than the more general concept of an evolutionary arms race in which there can be winners and losers. The terms 'evolutionary equilibrium' and 'evolutionary steady state' are also used to capture the idea of the equilibrium, rather than just of competition.

      Evolutionary arms race is somewhat tautological; an arms race is the description of the selective pressure applied by other species on the evolution of the species in question. (There are other, abiotic sources of selective pressures, e.g. climate change on evolutionary timescales, so while 'evolution' at least carries a broader meaning, 'arms race' adds nothing that wasn't already there.)

      That said, when given your exact query, both DeepSeek R1 and Claude Sonnet 3.7 did include the Red Queen in their answers, along with other related concepts like tit-for-tat escalation.

      • margalabargala 5 days ago

        This is an incorrect response.

        Firstly, "Evolutionary Arms Race" is not tautological, it is a specific term of art in evolutionary biology.

        Secondly, "evolutionary arms race" is a correct answer, it is the general case of which the Red Queen hypothesis is a special case. I do agree with you that OP described a Red Queen case, though I would hesitate to say it was because of "equilibrium"; many species in Red Queen situations have in fact gone extinct.

        https://en.wikipedia.org/wiki/Evolutionary_arms_race

        https://en.wikipedia.org/wiki/Red_Queen_hypothesis

        • tofof 5 days ago

          I disagree that evolutionary arms race is a specific term of art; we have many specific terms of art but 'arms race' is a broad generalization popularized by Dawkins as a pop science writer addressing a lay audience. Actual terms of art in this area would include Red Queen, the many individually termed coevolutions (antagonistic, mosaic, host-parasite, plant-herbivore, predator-prey etc), coadaptation, coextinction, the escalation hypothesis, frequency-dependent selection, reciprocal selection, asymmetric selection, the evolutionary lag, evolutionary cycling, character displacement, Fisherian runaway, evolutionary mismatch/trap, (phylogenetic) niche conservatism, fitness landscape, Grinnellian vs Eltonian niches, the competitive exclusion principle, and on and on. All of these actual terms of art fit under the broad, general umbrella of an 'arms race' with other species, which is really nothing more than a restatement of Spencer's unfortunate phrase. The latter is so widely 'known' that it is to the point that I and many of my peers try not to utter it, in an effort to reduce the work refuting the same tired misunderstandings that arise from that verbiage.

          At any rate, almost NONE of these actual terms of art are about the sort of equilibrium that was the exact heart of the OP's query to the LLM, and thus nearly none of the broader umbrella 'arms race' is about why the plant doesn't have the evolutionary pressure to actually drive the parasite extinct. An arms race doesn't have to be in equilibrium. Armor vs weapons were in an arms race and indeed at equilibrium for millennia, but then bullets come along and armor goes extinct almost overnight and doesn't reappear for 5 centuries. Bullets win the arms race. Arms races have nothing to do, inherently, with equilibrium.

          You seem to have misunderstood the nature of the equilibrium in a Red Queen scenario, which is the fundamental effect that the hypothesis is directly named for. That species that are in Red Queen relationships can go extinct is in no way a counterargument to the idea that two (or more) species tend to coevolve in such a way that the relative fitness of each (and of the system as a whole) stays constant. See, for example, the end of the first paragraph on the origin of Van Valen's term at your own wiki link.

          Evolutionary steady-state is a synonymous term without the baggage of the literary reference and also avoids the incorrect connotation suggested by arms race that leads people to forget the abiotic factors that are often a dominant mechanism in extinctions as the realized niche vs the fundamental niche differ. Instead, Van Valen was specifically proposing the Red Queen hypothesis as an explanation of why extinction appears to be a half-life, i.e. of a constant probability, rather than a rate that depends on the lifetime of the taxa. This mechanism has good explanatory power for the strong and consistent evidence that speciation rate (usually considered as the log of the number of genera, depending on definition, see Stanley's Rule) has a direct and linear relation with the extinction rate. If Red Queened species didn't go extinct, Van Valen wouldn't have needed to coin the term to explain this correlation.

          Or were you deliberately invoking Cunningham's Law?

      • bmacho 5 days ago

        > this one failed you but was a near enough miss that you couldn't see it. What you were really looking for is the Red Queen

        GP was looking for a specific term that they had heard before. It was co-/evolutionary arms race, and ChatGPT guessed it correctly.

        Also GPT-4o elaborated the answer (for me at least) with things like:

            > However, the specific kind of equilibrium you're referring to—where neither side ever fully "wins", and both are locked in a continuous cycle of adaptation and counter-adaptation—is also captured in the idea of a “Red Queen dynamic”.
        
            > You could refer to this as:
              * Red Queen dynamics in plant-insect coevolution
              * A coevolutionary arms race reaching a dynamic equilibrium
              * Or even evolutionary stable strategies (ESS) applied to plant-herbivore interactions, though ESS is more game-theory focused.
      • ern 4 days ago

        I tested that prompt with multiple ChatGPT models, Claude Sonnet 3.7 and DeepSeek, and all mentioned the Red Queen. Just saying.

    • Cthulhu_ 4 days ago

      I just want to say that this thread / the responses to your question are better than either search engines or LLMs can ever come up with, and shows the truth of Cunningham's Law: "The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer"

      Or updated for the LLM age, "the best way to get the right answer from an LLM is not to ask it a question and use its answer; it's to post its response on a site of well-educated and easily nerdsniped people"

      • inanutshellus 4 days ago

        The responses to GP's comment are surprisingly entertaining to read. I'm entirely engaged in this nerdy semantics war on a topic I know nothing about and loving it.

        Time to pop some popcorn and hit refresh.

    • Slow_Hand 5 days ago

      Same. This is one of the few uses for LLMs that I actually find useful and that I trust.

      They’re very helpful for getting the terminology right so I can ask more refined questions.

      • Cthulhu_ 4 days ago

        That's the truth behind finding answers; it's not about finding the answer, it's about asking the right question.

    • RyanOD 5 days ago

      Agreed. I'm increasingly using ChatGPT to research topics. In that way, I can refine my question, drill down, ask for alternatives, supply my own supplementary information, etc.

      I think of AI as an intelligent search engine / assistant and, outside of simple questions with one very specific answer, it just crushes search engines.

      • Shorel 4 days ago

        I don't see it as crushing the search engines.

        I use the LLMs to find the right search terms, and that combination makes search engines much more useful.

        LLM by themselves give me very superficial explanations that don't answer what I want, but they are also a great starting point that will eventually guide me to the answers.

        • RyanOD 4 days ago

          Likely because I'm asking specific Python development questions, I'm getting specific answers (and typically 2-3 variations) that I can drill down into. That response is likely different than for a more general question with a wider array of possible answers.

      • schrodinger 5 days ago

        it's especially great in voice mode. I love to take a long walk or jog with earbuds in and use it to learn a new topic. I don't learn well through indoctrination; I learn much better by asking it to give a very big picture overview, start to build an intuitive understanding, and then start asking pointed questions to fill in the blanks and iterate towards a deeper understanding. I find ChatGPT's voice mode to be very engaging in that use case—far more so than Gemini's actually—and I've used it to learn a bunch of new technologies. It's like having a never-bored professor friend you can call and ask endless questions and keep getting answers without annoying it!

        • jimmySixDOF 4 days ago

          Yeah, I find setting it in voice mode and then directing it to a specific site that I'm interested in discussing is super useful, so I start off in ChatGPT*, then I switch windows to what I'm interested in looking at. For instance, I want to go over a GitHub repo. I can verbally ask it to go there and then chat with it as I read through the README file, and it provides a great soundboarding experience. No pun intended. Heck, I'm even dictating this response through Wispr Flow, which I have found to be more useful than I had anticipated.

          * Gemini lets you do this by actually streaming your screen through and verbally chatting about whatever is on screen. While interesting, I find the responses to be a little less thorough. YMMV.

      • adrian_b 4 days ago

        It is extremely dangerous to believe that anything said by an AI assistant is correct.

        Even with supposedly authoritative peer-reviewed research papers, it is extremely common to find errors whenever the authors claim to quote earlier work, because the reality is that most of them do not bother to read their cited bibliography carefully.

        When you get an answer from an AI, the chances greatly increase that the answer regurgitates some errors present in the publications used for training. At least when you get the answer from a real book or research paper, it lists its sources and you can search them to find whether they have been reproduced rightly or wrongly. With an AI-generated answer it becomes much more difficult to check it for truthfulness.

        I will give an example of what I mean, which I happened to stumble on today. I read a chemistry article published in 2022 in a Springer journal. While the article contained various useful information, it also happened to contain a claim that seemed suspicious.

        In 1782, the French chemist Guyton de Morveau invented the word "alumine" (French) = "alumina" (Latin and English), to name what is now called oxide of aluminum, which was called earth of alum at that time ("terra aluminis" in Latin).

        The article from 2022 claimed that the word "alumina" had already been used earlier in the same sense, by Andreas Libavius in 1597, who would thus have been the creator of this word.

        I found this hard to believe, because the need for such a word appeared only during the 18th century, when European chemists, starting with the Swedish chemists, finally went beyond the level of chemical classification inherited from the Arabs and began to classify all known chemical substances as combinations of a restricted set of primitive substances.

        Fortunately, the 2022 article had a detailed bibliography, and using it I was able to find the original work from 1597 and the exact paragraph in it that was referred to. The claim of the 2022 article was entirely false. While the paragraph contained the word "alumina", it was not a singular feminine adjective (i.e. agreeing with "terra") referring to the "earth of alum". It was not a new word at all, but just the plural of the neuter word "alumen" (= English alum), in the sentence "alums or salts or other similar sour substances can be mixed in", where "alums" meant "various kinds of alum", like "salts" meant "various kinds of salt". Nowhere in the work of Libavius was there any mention of an earth that is a component of alum and that could be extracted from alum (in older chemistry, "earth" was the term for any non-metallic solid substance that neither dissolves in water nor burns in air).

        I have given this example in detail in order to illustrate the kinds of errors that I very frequently encounter whenever some authors claim to quote other works. While this was an ancient quotation, lots of similar errors appear when quoting more recent publications, e.g. when quoting Einstein, Dirac or the like.

        I am pretty sure that if I asked an AI assistant something, the number of errors in the answers would not be lower than when reading publications written by humans, but the answers would be more difficult to verify.

        Whoever thinks that they can get a quick answer to any important question in a few seconds and be done with it is naive, because the answer to any serious question must be verified thoroughly; otherwise there are great chances that those who trust such answers will just spread more disinformation, like the sources on which the AI has been trained.

        • RyanOD 4 days ago

          Appreciate your perspective. To be clear, I'm using it to become a better small game developer and not relying on it to answer anything I would classify as an "important question". Moreover, I don't take everything AI tells me to be 100% accurate (and I never copy/paste the response). Rather, I use it as an assistant with which I can have a back and forth "conversation" to acquire other perspectives.

          Despite a lot of effort, I'm just not a highly skilled developer and I don't have any friends / colleagues I can turn to for assistance (I don't know a single software developer or even another person who enjoys video games). While resources like StackOverflow are certainly useful, having answers tailored to my specific situation really accelerates progress.

          I'm not trying to cure cancer here and much of what would be considered the "best approach" for a small game architecture is unique to the developer. As such, AI is an incredible resource to lean on and get information tailored to my unique use case ("here is my code... how does {topic} apply to my situation?")

          And yes, I find errors from time to time, but that is good. It keeps me on my toes and forces me to really understand the response / perspective.

    • paul7986 5 days ago

      Present day...

      Google 55% as GPT is not a local search engine

      GPT 45% but use it for more intelligent learning/conversations/knowledgebase.

      If I had a GPT phone ... sorta like the movie Her, I would rarely leave my phone's lockscreen. My AI device / super AI human friend would do everything for me, including getting me to the best lighting to take the best selfies...

    • throwaway422432 5 days ago

      Synthesizing what would be multiple searches into one prompt is where they can be really useful.

      For example: Take the ingredient list of a cosmetic or other product that could be 30-40 different molecules and ask ChatGPT to list out what each of them is and if any have potential issues.

      You can then verify what it returns via search.

    • exe34 5 days ago

      that sounds like a really great example of..... searching through vector embeddings. I don't think the LLM part was technically necessary.

      • brokencode 5 days ago

        Okay, so how does the average person search through vector embeddings? I would like to try this out.

        • exe34 5 days ago

          I didn't say the average person should use it, I was thinking the search functionality could be implemented that way and save on burning up the planet to tame Moloch.

          • brokencode 4 days ago

            If it could be implemented that way and would be helpful where Google fails, then why has nobody done this? I’d love to try this out if you can point to a product that does this.

            You can criticize LLMs all you want, but the fact is that they provide value to people in ways that alternatives simply don’t. The energy consumption is a concern, but don’t pretend there are viable alternatives when there aren’t.

            • exe34 4 days ago

              I don't have insight on what goes on inside Google - they may well be doing this at some level, but their business isn't finding stuff, it's selling ads, so getting the search right is a very low priority.

              The LLM people are heavily invested in ever bigger models to keep the research money flowing in, it wouldn't make sense to release a service that undercuts that.

              That leaves independent actors - presumably building and maintaining an up-to-date database is difficult, so only the big search engines do it.

        • ForceBru 5 days ago

          I don't think this is _literally_ a search through vector embeddings.

          LLMs store embeddings of individual tokens (usually parts of words), so a result of an actual search will be top-k embeddings and the corresponding tokens, similar to the output of a Google search. You could extract the initial matrix of embeddings from some open-weights model and find tokens closest to your query. However, it's not clear why you would do this. OP got coherent text, so that's not search.

          It's _similar_, though, because attention in LLMs basically looks for most similar tokens. So to answer the question about the term, the LLM had to create a stream of tokens that's semantically closest to the given description. Well, this is somewhat like a search, but it's not exactly the same.

      • jacobr1 5 days ago

        And the LLM won't be necessary either for the common search case of "dc plane crash" or whatever, but increasingly that will be asked of a general assistance agent, which will dispatch an internet search of some kind and return the result. Even if it provides 0 additional benefit over Google, switching the default search location might still occur, especially if it does provide some benefits (such as summarization).

      • agubelu 5 days ago

        It's all about the UI/UX. Same as in Dropbox/rsync

        • udev4096 4 days ago

          No one who extensively uses dropbox uses their UI. It's mostly either the API or rclone. It's inefficient and a complete waste of time to use any UI at all

  • rstuart4133 5 days ago

    I'd go further, and say I use search when I'm pretty confident I know the right search terms. If I don't, I'll type some wordy long explanation of what I want into an LLM and hope for the best.

    The reason is pretty simple. If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you had searched.

    If you aren't confident of the search terms, it can take half an hour of dicking around with different terms, clicking through a couple of pages of search results for each set of terms, until you finally figure out the lingo to use. Figuring out what you are really after from that wordy description is the inner magic of LLMs.

    • Al-Khwarizmi 4 days ago

      > If the result you want is in the first few search hits, it's always better. Your query is shorter so there is less typing, the search engine is always faster, and the results are far better because you sidestep the LLM hallucinating as it regurgitates the results it remembers from the page you would have read if you had searched

      Most often not true in the kind of searches I do. Say, I search for how to do something in the Linux terminal (not just the command, but the specific options to achieve a certain thing). Google will often take me to pages that do have the answer, but are full of ads and fluff, and I have to browse through several options until I find the ones I want. ChatGPT just gives me the answer.

      And with any halfway decent model, hallucination only seems to be a problem in difficult or very specialized questions. Which I agree shouldn't be asked to LLMs (or not without verifying sources, at least). But over 90% of what I search aren't difficult or specialized questions, they're just things I have forgotten, or things that are easy but I don't know just because they're not in my area of expertise. For example as a learner of Chinese, I often ask it to explain sentences to me (translate the sentence, the individual words, and explain what a given word is doing in the sentence) and for that kind of thing it's basically flawless, there's no reason why it would hallucinate as such questions are trivial for a model having tons of Chinese text in its training set.

    • jfim 5 days ago

      It depends. Sometimes webpages are useful, but at the same time navigating the amount of fluff on webpages nowadays takes longer than asking a LLM.

      I asked Claude to give me a recipe that uses mushrooms and freezes well and it gave me a decent-looking soup recipe. It might not be the best soup ever, but it's soup, kinda hard to mess up. The alternative would be to get a recipe from the web with a couple dozen paragraphs about how this is the bestest soup ever and it comes from their grandma and reminds them of summer and whatnot.

      • rstuart4133 5 days ago

        > I asked Claude to give me a recipe that uses mushrooms and freezes well and it gave me a decent-looking soup recipe.

        It didn't suggest adding glue? I imagine it would freeze real well if you did that. /s

    • tasuki 4 days ago

      > I'll type some wordy long explanation of what I want into an LLM and hope for the best.

      Interesting, I just random words. LLM not care sentence.

  • deadbabe 5 days ago

    We’re currently in the golden age of LLMs as search engines. Eventually they’ll subtly push products and recommendations in their output to steer you toward specific things.

    • HellDunkel 5 days ago

      You mean like the golden age of speech recognition a couple of years ago when they claimed 80% of computer interfacing will be voice only?

      • NBJack 5 days ago

        Golden age as in, "this is as good as it gets", rather than "it will be better in the future".

      • II2II 5 days ago

        Or it could be more like the golden age of search engines. It's hard to tell which way it will go because LLMs are a new technology. Some people see only its strengths. Some focus upon its weaknesses. What matters though is the long run, when we get over our initial reactions.

      • spudlyo 5 days ago

        More like as in the Golden Age of Google search, where the product optimized for returning the best, most relevant, most reputable results. As in, pre-enshittification.

        • HellDunkel 5 days ago

          It's already stained. And it was never golden, so why care?

          • stavros 5 days ago

            I assume you didn't try Google in the late 90s?

            • HellDunkel 4 days ago

              I did, and calling it a golden age is not wrong compared to what we have now. I was referring to the golden age of "voice only" here.

    • red-iron-pine 5 days ago

      by "eventually" they mean "at this rate, mid-next year, if not already"

      • oofbey 5 days ago

        No - they are all competing for eyeballs right now. They need to solidify their positions in terms of user habits and behaviors before they start pissing people off.

        • Cthulhu_ 4 days ago

          Voice interfaces have become ingrained in some people's lives thanks to either their on-device thing (siri) or there being a permanent listener in their house (alexa); I suspect the same thing will happen with LLMs when they get integrated like that, which is already happening with Copilot in every editor, Gemini in every Google product, and soon enough (if not already) Apple Intelligence and co in phones.

  • keithnz 5 days ago

    have you tried ChatGPT search? You can do "DC plane crash" or "Frob" and it will come up with links to the story, but it will also quickly give you a summary with links to its sources. Best thing is you can follow up with questions.

    • wavemode 5 days ago

      Yes, I have. If I want something to read the page for me, then that's where LLMs come in.

      But what I'm talking about is when I want to read the page for myself. Waste of time to have to wait for an LLM to chew on it.

      • octernion 5 days ago

        hah i use llms for that now too - "option-space 'link to <foo> lang'" and chat returns faster than the whole endeavor of opening google or putting stuff into the nav bar.

        • brookst 4 days ago

          That’s my experience too. I don’t find Google or even Kagi faster for retrieving a link. All of the major LLMs can pull search results faster than I can through search websites, and then I’ve got a conversation where I can dive deeper if I want.

  • npilk 5 days ago

    Agreed. I think of these as two different types of searches - “page searches” where you know a page exists and want to get to it, and “content searches” where you have a more open-ended question.

    Really, for many “page searches”, a good search engine should just be able to take you immediately to the page. When I search “Tom Hanks IMDB”, there’s no need to see a list of links - there’s obviously one specific page I want to visit.

    https://notes.npilk.com/custom-search

    • Y_Y 5 days ago

      > Really, for many “page searches”, a good search engine should just be able to take you immediately to the page.

      Are you feeling lucky?

      • npilk 5 days ago

        Exactly :-)

        Unfortunately you can’t really show ads if you take someone directly to the destination without any interstitial content like a list of links…

        • exe34 5 days ago

          wrong - that's how YouTube works. 30s advert before a video on CPR.

          • kbutler 5 days ago

            I'd rather have a list of links with text ads I can ignore...

    • sodality2 5 days ago

      This is one of my favorite duckduckgo features - adding a single exclamation point after a search (“Tom Hanks IMDB !”) does exactly this

  • dumbfounder 4 days ago

    Yes they will. Why do you think they won't? They certainly can. You just use RAG to look up the latest news based on the keywords you are using. You can use search on the back end and never surface a list of results unless the LLM decides that is a good idea. It curates the results for you. Or gives you the singular site you need with context. That is better for most searches.

  • desipenguin 5 days ago

    Agree 100%. I tried Perplexity to "search". My use case was similar to the one described above.

    I know what I'm looking for. I just need exact URL.

    Perplexity miserably fails at this.

  • generalizations 5 days ago

    It's been pretty cool to realize that Grok 3 actually prioritizes up-to-date information: I have used it for both kinds of your examples, and it worked.

    • exodust 4 days ago

      Still use Google for quick generic lookups, spell-checks/definitions, shopping stuff, products, or things I know will return a good result.

      Grok is great for finding details and background info about recent news, and of course it's great for deep-diving on general knowledge topics.

      I also use Grok for quick coding help. I prefer to use AI for help with isolated coding aspects such as functions and methods, as a conversational reference manual. I'm not ready to sit there pretending to be the "pilot" while AI takes over my code!

      For the record, I do not like Google's AI generated results it spams at me when I search for things. I want AI when I choose to use AI, not when I don't choose it. Google needs a way to switch that off on the web (without being logged in).

  • bmcahren 4 days ago

    I am going to cite you in a decade. Already today ChatGPT is _far_ better than Google. Instead of finding a keyword optimized page for "frob language", I can get the objectively best sources for frob language and even find the best communities related to it. Zero frob ads, zero frob-optimized pages that are designed to trick google, etc.

    Traditional search is dead, semantic search through AI is alive and well.

    I can't yet count a single time AI misunderstood the meaning of my search, while Google loves to make assumptions, rewrite my search query, and deliver the results that pay it the best, which have the best ads (in my opinion as a lifetime user).

    Let's not even mention how they willingly accept misleading ads atop the results, which trick the majority of common users into downloading malware and adware on the regular.

  • kiney 3 days ago

    LLMs already replaced that news example for me. Especially grok is really good at summarizing the state of reporting for current events like plane crashes

  • crowcroft 4 days ago

    Yea 'needle in a haystack' style search is something that LLM based search is simply not as good at.

    The reason Google is still seeing growth (in revenue etc.) is that a lot of 'commercial' search still ends with this kind of action.

    Take purchasing a power drill for example, you might use an LLM for some research on what drills are best, but when you're actually looking to purchase you probably just want to find the product on Home Depot/Lowe's etc.

  • FloorEgg 5 days ago

    Except when search engines bury the thing you're obviously looking for under an entire page of sponsored ads, then that convenience argument starts to not hold up as well...

    • ptmcc 5 days ago

      If LLMs aren't already doing this, they certainly will soon. And it'll be even more insidious and "invisible" than sponsored search results.

      • ethbr1 5 days ago

        This is the biggest argument in favor of subscription-based funding models for search I've heard. (Kagi et al.)

        Ad-sponsored models are going to be dead as soon as people realize they can't trust output.

        And because the entire benefit to LLM search is the convenience of removing a human-in-the-loop step (scanning the search results), there won't be a clear insertion/distinction point for ads without poisoning the entire output.

        • Kerrick 5 days ago

          You'd hope so, but it's still an uphill battle to get people to only work with financial advisors who have a fiduciary duty to their client, rather than working with advisors who get kickbacks.

          • ethbr1 5 days ago

            Granted, most markets are bifurcated.

            Those who can afford, buy unbiased.

            Those who cannot, accept biased free services.

            I suppose that's Google's hope for how future search turns out.

        • xorcist 5 days ago

          There is no exclusive-or between business models, there is an inclusive-or.

          Over time subscription models will converge to subscription with advertisements. Like newspapers did.

          • wavemode 5 days ago

            And streaming services. People really thought streaming would be the end of TV commercials. Ha!

            • yellowapple 5 days ago

              It absolutely has been for me. It'll be a cold day in Hell before I pay money for something that's ad-supported. It's why I won't touch Hulu with a 10-foot pole, it's one of many reasons why I will always pirate Windows, and it's why I laugh at cable Internet salesmen when they sing their praises for bundling in TV.

              Ads xor payment, or else you can fuck all the way off.

            • kbutler 5 days ago

              Define ad infinitum

              What you don't get with pay TV

                Originally for cable, but then you got ads with cable. 
                Then for streaming, but then you got ads with streaming. 
                Physical DVDs? Ads. 
                Paid Roku device? Ads. 
                Paid windows installation? Ads.
              
              Available eyeballs will be sold.
        • handfuloflight 5 days ago

          What if the advertiser's product or service is objectively (according to either a prompt defined by the user or by the UI operator, transparently provided) the answer to the query?

          • crabmusket 5 days ago

            Then the response will also contain ads for things that are not the answer to the query. This is how search engines currently behave.

            Example from my work. Many of our customers will search for our company name in order to find the login page. I've watched them do this over screen share.

            When they do that, the top search result is an ad slot. The second search result is our login page.

            We buy ads against our own company name so that we can be the ad slot as well as the first result. Otherwise a competitor who buys ads against our company name could be the top slot.

            • SoftTalker 5 days ago

              Or a phisher could buy the ad slot or game the search results to publish a fake login page to phish credentials. This is why 2FA is not really optional anymore for anything of value.

      • exe34 5 days ago

        I'm actually surprised that llms aren't pushing us towards products yet.

    • Rayhem 5 days ago

      Except when LLM providers bury the thing you're obviously looking for under an entire page of sponsored ads (buy Diet Coke™!), then that convenience argument starts to not hold up as well...

    • krferriter 5 days ago

      When I search 'dc plane crash' in Google, Bing, and DuckDuckGo I don't get results buried under ads. When I search 'air conditioner for sale' I do get ads at the top of each of those, but that's more okay because I am looking for things to buy. And it's easy to look past the ads to get to sites like home depot or other traditional retailers that come up not just because they purchased ad space.

      • yellowapple 5 days ago

        It helps that decent ad-blockers also tend to be able to block sponsored results.

    • piva00 5 days ago

      Paid, ad-free search engines do exist as an alternative. It sucks for those who cannot afford such a luxury, but at some point I noticed that search is quite important in my life, both personally and professionally, so I haven't minded paying for it after the free experience provided by Google, Bing, etc. started worsening.

    • coryrc 5 days ago

      Do you not use an ad blocker?

    • shpx 5 days ago

      install uBlock Origin

    • 0x0203 5 days ago

      And most of the actual results from said search are nothing but LLM generated slop content that provide zero utility to the user and only exist to capture as many clicks and traffic as possible so they can shovel as many ads as possible. The current web is practically unusable.

  • moralestapia 4 days ago

    >LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

    What? On Planet Earth, this is already a thing.

  • mr_toad 5 days ago

    > Search is primarily a portal - you know a particular resource exists, you just don't know its exact URL.

    Kind of like a manual, with an index.

    RTFM people.

  • coldtea 5 days ago

    >LLMs aren't ever going to replace search for that use case, simply because they're never going to be as convenient.

    Sounds trivial to integrate an LLM front end with a search engine backend (probably already done), so you could type "frob language" and get a curated clickable list of the top resources (language website, official tutorial, reference guide, etc.), with spam and irrelevant search engine results discarded in the process.
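
    A rough sketch of that wiring, assuming a hypothetical search() helper and one possible LLM client API (none of this describes an existing product):

      # Sketch: the search backend finds candidates, the LLM only curates them.
      from openai import OpenAI

      client = OpenAI()

      def search(query: str) -> list[dict]:
          """Hypothetical backend returning [{'title', 'url', 'snippet'}, ...]."""
          raise NotImplementedError  # plug in any real search API here

      def curated_links(query: str) -> str:
          hits = search(query)
          listing = "\n".join(
              f"- {h['title']} | {h['url']} | {h['snippet']}" for h in hits[:20]
          )
          prompt = (
              f"Query: {query}\n\nSearch results:\n{listing}\n\n"
              "Keep only the official website, tutorial and reference guide. "
              "Drop spam and irrelevant results. Return a short clickable list."
          )
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # assumed model name
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.choices[0].message.content

      # curated_links("frob language")

    The point being that ranking and indexing stay in the search engine; the LLM is only the presentation layer that filters and labels the hits.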

    • coldpie 5 days ago

      That's called a search engine. We've had them for 30 years. Ahhhhhhhhhhhghh. It's blockchain all over again.

      • coldtea 4 days ago

        >That's called a search engine. We've had them for 30 years

        https://news.ycombinator.com/item?id=9224

        The LLM could "intelligently" pick from the top several pages of results, discard search engine crap results and spam, summarize each link for you, and so on.

        We don't have that now (or for 30 years - I should know, I was there, using Yahoo!, and Altavista, and Lycos and such back in the day).

      • toyg 5 days ago

        I think you mean it's SOAP all over again.

  • tremarley 5 days ago

    If you wanted to know more about a new programming language named “Frob” or a plane crash that happened today, couldn’t you use an LLM like grok?

    Or any other LLM that’s continuously trained on trending news?

    • coldpie 5 days ago

      How do I know the LLM isn't lying to me? AIs lie all the time, it's impossible for me to trust them. I'd rather just go to the actual source and decide whether to trust it. Odds are pretty good that a programming language's homepage is not lying to me about the language; and I have my trust level for various news sites already calibrated. AIs are garbage-in garbage-out, and a whole boatload of garbage goes into them.

      • robrenaud 5 days ago

        They could provide verbatim snippets surrounded by explanations of relevance.

        Instead of the core of the answer coming from the LLM, it could piece together a few relevant contexts and just provide the glue.

        • nicksrose7224 5 days ago

          They do this already, but the problem is it takes me more time to verify if what they're saying is correct than to just use a search engine. All the LLMs constantly make stuff up & have extremely low precision & recall of information

        • coldpie 5 days ago

          I don't understand how that's an improvement over a link to a project homepage or a news article. I also don't trust the "verbatim snippet" to actually be verbatim. These things lie a lot.

      • cookiemonsieur 5 days ago

        > How do I know the LLM isn't lying to me?

        How do you know the media isn't lying to you ? It's happened many times before (think pre-war propaganda)

        • xigoi 4 days ago

          We’re talking about the official website for a programming language, which has no reason to lie.

      • coldtea 5 days ago

        >Odds are pretty good that a programming language's homepage is not lying to me about the language

        Odds are pretty good that, at least for not very popular projects, the homepages themselves would soon be produced by some LLM, and left at that, warts and all...

    • simonw 5 days ago

      None of the LLMs (not even Grok) are "continuously trained" on news. A lot of them can run searches for questions that aren't handled by their training data. Here's Grok's page explaining that: https://help.x.com/en/using-x/about-grok

      > In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.

    • lurking_swe 5 days ago

      i can also use my human brain to read a webpage from the source, as the authors intended. not EVERY question on this planet needs to be answered by a high resource intensive LLM. Energy isn’t free you know. :)

      Other considerations:

      - Visiting the actual website, you’ll see the programming language’s logo. That may be a useful memory aid when learning.

      - The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).

      - The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.

      - I have 0 concerns of a hallucination when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon lol.

      Just because you have a hammer in your hand doesn’t mean you should start trying to hammer everything around you friend. Every tool has its place.

    • eddd-ddde 5 days ago

      It's just a different kind of data. Even without LLMs, sometimes I want a tutorial, sometimes I want the raw API specification.

      For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.

      • ethbr1 5 days ago

        Search is best viewed as a black box to transform {user intention} into {desired information}.

        I'm hard pressed to construct an argument where, with widely-accessible LLM/LAM technology, that still looks like:

           1. User types in query
           2. Search returns hits
           3. User selects a hit
           4. User looks for information in hit
           5. User has information
        
        Summarization and deep-indexing are too powerful and remove the necessity of steps 2-4.

        F.ex. with the API example, why doesn't your future IDE directly surface the API (from its documentation)? Or your future search directly summarize exactly the part of the API spec you need?
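
        For concreteness, a sketch of what collapsing steps 2-4 might look like: index the documentation once, then answer the intention with the single most relevant passage instead of a list of hits. The toy sections and the sentence-transformers model are assumptions for illustration only:

          # "Deep-indexing" sketch: return the relevant doc passage directly,
          # instead of a list of links for the user to scan.
          from sentence_transformers import SentenceTransformer, util

          model = SentenceTransformer("all-MiniLM-L6-v2")

          # Pretend these are sections extracted from an API spec.
          sections = [
              "timeout: float, seconds to wait before aborting the request",
              "retries: int, number of times to retry on connection errors",
              "auth: tuple of (username, password) for basic authentication",
          ]
          section_vecs = model.encode(sections, convert_to_tensor=True)

          def answer(intention: str) -> str:
              q = model.encode(intention, convert_to_tensor=True)
              best = util.cos_sim(q, section_vecs)[0].argmax().item()
              return sections[best]  # steps 2-4 collapse into this lookup

          print(answer("make the client give up after 5 seconds"))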

        • skydhash 5 days ago

          I don't know the exact word for this case, but sometimes you want the surrounding information around what you're looking for. Often I skim documentation, books, articles... not in search of a specific answer but to get an overview of what it discusses. I don't need a summary of a table of contents. But it's a very good tool for quickly locating some specific information. Something like

            Language Implementation Patterns (the book) |> Analyzing Languages (the part) |> Tracking and Identifying Program Symbols (the chapter) |> Resolving Symbols (the section)
          
          or

            Unit Testing: Principles, Practices, and Patterns (the book) |> Making your tests work for you (the part) |> Mocks and test fragility (the chapter) |> The relationship between mocks and test fragility (the section) |> Intra-system vs. inter-system communications
          
          or

            Python 3.13.3 Documentation (docs.python.org) |> The Python Standard Library |> Text Processing Services |> string
        • theamk 5 days ago

          Could never understand that obsession with summarization. Sure, it may be useful for long-form articles or for low-quality content farms, but most of the time you are not reading those.

          And if you are reading technical docs, especially good ones, each word is there for a reason. LLMs throw some of that information away, but they don't have your context to know if the stuff they throw away is useful or not. The text the summary omitted may well contain an important caveat or detail you really should have known before starting to use that API.

          • skydhash 5 days ago

            And if you go to a nicely formatted doc page (laravel) or something with diagrams (postgres), it throws all of these away too.

    • lcnPylGDnU4H9OF 5 days ago

      Yes, you can use grok but you could also use a search engine. Their point is that grok would be less convenient than a search engine for the use case of finding Frob's website's homepage.

    • oofbey 5 days ago

      Perplexity solves this problem perfectly for me. It does the web search, reads the pages, and summarizes the content it found related to my question. Or if it didn't find it, it says that.

      I recently configured Chrome to only use google if I prefix my search with a "g ".

Okawari 5 days ago

I still prefer traditional search engines over LLMs, but I admit their results feel worse than they traditionally have.

I don't like LLMs for two reasons:

* I can't really get a feel for the veracity of the information without double checking it. A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from a LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.

* I'm missing out on learning opportunities that I would usually get otherwise by reading or skimming through a larger document trying to find the answer. I appreciate that I skim through a lot of documentation on a regular basis and can recall things that I just happened to read when looking for a solution to another problem. I would hate it if an LLM dropped random tidbits of information when I was looking for concrete answers, but since it's a side effect of my information gathering process, I like it.

I would rather use an AI assistant that could help me search and curate the results, instead of trying to answer my question directly. Hopefully in a sleeker way than Perplexity does with its sources feature.

  • SoftTalker 5 days ago

    One thing I don't like about LLMs is that they vomit out a page of prose as filler around the key point which could just be a short sentence.

    At least that has been my experience. I admit I don't use LLMs very much.

    • wruza 5 days ago

      It's time to bind "Please be concise in your answer and only mention important details. Use a single paragraph and avoid lists. Keep me in the discussion, I'll ask for details later." to F1.

      • telchior 5 days ago

        You've just made me realize that I actually do need that as a macro. Probably type that ten times per day lately. Others might include "in one sentence" or "only answer yes or no, and link sources proving your assertion".

        • atonse 5 days ago

          If you’re using ChatGPT, add it to your memory so it always remembers that you prefer that.

          • Hard_Space 5 days ago

            No matter how many times I get ChatGPT to write my rules to long-term memory (I checked, and multiple rules exist in LTM multiple times), it inevitably forgets some or all of the rules because after a while, it can only see what's right in front of it, and not (what should be) the defining schema that you might provide.

            • nickpsecurity 4 days ago

              I haven't used ChatGPT in a while. I used to run into a problem that sounds similar. If you're talking about:

              1. Rules that get prefixed in front of your prompt as part of the real prompt ChatGPT gets. Like what they do with the system prompt.

              And

               2. Some content makes your prompt too big for the context window, so the rules get cut off.

               Then, it might help to measure the tokens in the overall prompt, have a max number, and warn if it goes over it. I had a custom chat app that used their APIs with this feature built in.

               Another possibility is that, when this is detected, it asks you if you want to use a model with a larger context window. Those cost more, so it would be presented as an option. My app let me select any of their models to do that manually.
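
               A tiny sketch of that kind of guard, assuming the tiktoken package; the model name and the token budget are placeholders rather than anything ChatGPT itself exposes:

                 # Warn before sending a prompt that may overflow the context window,
                 # so prefixed rules don't get silently cut off.
                 import tiktoken

                 MAX_TOKENS = 8192  # placeholder budget for whatever model you use

                 def token_count(text: str, model: str = "gpt-4o-mini") -> int:
                     try:
                         enc = tiktoken.encoding_for_model(model)
                     except KeyError:
                         enc = tiktoken.get_encoding("cl100k_base")  # generic fallback
                     return len(enc.encode(text))

                 def check_prompt(rules: str, user_prompt: str) -> None:
                     total = token_count(rules) + token_count(user_prompt)
                     if total > MAX_TOKENS:
                         print(f"Warning: {total} tokens, over the {MAX_TOKENS} budget; "
                               "rules may be truncated. Consider a larger-context model.")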

          • protocolture 5 days ago

            Yep. Somehow I made mine noticeably grumpier but I don't know which setting or memory piece did the job.

            I really like not being complimented on literally everything with a wall of text anymore.

      • mips_avatar 5 days ago

        Yeah but it kind of kneecaps the model. They need tokens to "think". It's better to have them create a long response then distill it down later.

        • wruza 5 days ago

          Is there a well-known benchmark for this? I don't feel that short vs long answers make any difference, but ofc feelings aren't what we can measure.

          Also, if that works, why doesn't copilot/cursor write lots of excessive code mixed with lots of prose only to distill it later?

          • teruakohatu 4 days ago

            > don't feel that short vs long answers make any difference

            The “thinking” models are really verbose output models that summarise the thinking at the end. These tend to outperform non-thinking models, but at a higher cost.

            Anthropic lets you see some/all of the thinking so you can see how the model arrived at the answer.

            • wruza 4 days ago

              So if I replace "answer" with "summarize" that should work then?

        • dailykoder 5 days ago

          You need tokens to create more revenue for the company that is running the LLM. Nothing more, nothing less

    • mips_avatar 5 days ago

      One problem with LLMs is that the amount of "thinking" they do when answering a question is dependent on how many tokens they use generating the answer. A big part of the power of models like deepseek R1 is they figured out how to get a model to use a lot of tokens in a logical way to work towards solving a problem. The models don't know the answer they come to it by generating it, and generating more helps them. In the future we'll probably see the trend continue where the model generates a "thinking" response first, then the model summarizes the answer concisely.

  • graemep 4 days ago

    > I can't really get a feel for the veracity of the information without double checking it.

    This is my main reason for not using LLMs as a replacement for search. I want an accurate answer. I quite often search for legal or regulatory issues, health, scientific issues, and specific facts about lots of things. I want authoritative sources.

    • Froedlich 4 days ago

      LLMs remind me of the children's game "Telephone."

  • supportengineer 5 days ago

    Am I the only one who double checks all of the information presented to me, from any source?

    • da_chicken 5 days ago

      No you don't. If you were doing that you wouldn't have time to eat, let alone sleep.

      You check the information you decide should be verified.

    • ryandrake 5 days ago

      Unless someone's life is on the line, usually eyeballing the source URL is enough for me. If I'm looking for API documentation, there are a few well-known URLs I trust as authoritative. If I'm looking for product information, same thing. If the search engine points me to totallyawesomeproductleadgen19995.biz, I'm probably not getting reliable information.

      An LLM response without explicit mention of its provenance... There's no way to even guess whether it is authoritative.

    • pdabbadabba 5 days ago

      If what you say is literally true: yes, I think you probably are the only one!

      • saltcured 5 days ago

        Yeah, I need more coffee to decide for myself if double checking all sources is linear or exponential as it progresses to check the checks.

        • BobaFloutist 5 days ago

          It might even be factorial since you also need to check the checks of the checks!

          Actually, it might be fully unbounded even for an n of 1.

          • jeffhuys 5 days ago

            Everything reminds me of her… and she’s called Factorio. We’re on a break. She’s not good for me, but oh my do I love her.

          • margalabargala 5 days ago

            Information cannot be destroyed, so for an n of 1 the bounds are that of the universe.

        • smallerize 4 days ago

          The sources will start to be redundant eventually. It's actually O(1) once you have looked at all the sources... that there are... in the world. Trivial!

          • saltcured 4 days ago

            I'm not sure. In this context, sources are utterances rather than speakers. So they're only finite if we limit ourselves to a snapshot of past utterances while doing our checking.

    • theamk 5 days ago

      Wait, so if you go to python.org and the doc page says, "Added in version 3.11", you double-check this?

      What do you even use for the double-check? Some random low-quality content farm? A glitchy LLM? A dodgy mirror of official docs full of ads? Or do you actually dig into the source code for this?

      And do you keep double-checking with all other information on the page... "A TOMLDecodeError will be raised on an invalid TOML document." - are you going to start an interactive session and check which error will be raised?

    • npoc 5 days ago

      How deep do you go? Where do you stop?

      Just because you can find multiple independent sources saying the same thing doesn't mean it's correct.

      • debaserab2 5 days ago

        You evaluate the credentials and authenticity of the sources you're reading and judge accordingly.

      • spookie 5 days ago

        It's done on a case by case basis.

        In all honesty doing this for news and such brings me comfort. Because the truth is usually pretty vanilla.

    • worik 5 days ago

      Are you sure? If you only say it once...

      "What I tell you three times is true"

    • __d 5 days ago

      No.

      Part of why I prefer to use a search engine is that I can see who is saying it, in what context. It might be Wikipedia, but also CIA world fact book. Or some blog but also python.org.

      Or (lately) it might be AI SEO slop, reworded across 10 sites but nothing definitive. Which means I need to change my search strategy.

      I find it easier (and quicker) to get to a believable result via a search engine than going via ChatGPT and then having to check what it claims.

  • leptons 5 days ago

    >A lot of context I get from just reading results from a traditional search engine is lost when I get an answer from a LLM. I find it somewhat uncomfortable to just accept the answer, and if I have to double check it anyways, the LLM's answer is kind of meaningless and I might as well use a traditional search engine.

    And this is how LLMs perform when LLM-rot hasn't even become widely pervasive yet. As time goes on and LLMs regurgitate into themselves, they will become even less trustworthy. I really can't trust what an LLM says, especially when it matters, and the more it lies, the more I can't trust them.

  • bluGill 5 days ago

    I find LLMs useful for the case where I'm not sure what the right terms are. I can describe something and the LLM gives me a term which I then type into a search engine to get more information. I'm only starting to use LLMs though, so maybe I'll use them more in the future? - only time will tell.

miloignis 5 days ago

Yes, I use search engine(s) constantly - namely Kagi, which really does feel like Google used to. I tried using LLMs for a recent project of mine when I was trying to figure out whether something was possible, and they were actively misleading every time. The catch was that what I was asking for turned out not to be currently possible, but the LLMs wouldn't tell me that and instead made up incorrect ways to solve my problem, since they didn't want to say it couldn't be done.

Really, these days, either I know some resource exists and I want to find it, in which case a search engine makes much more sense than an LLM which might hallucinate, or I want to know if something is possible / how to do it, and the LLM will again hallucinate an incorrect way to do it.

I've only found LLMs useful for translation, transcription, natural language interface, etc.

  • NelsonMinar 5 days ago

    My experience too. The problem isn't search, it's Google. Kagi really is very useful. I use LLMs for some things but still lots of Kagi search.

  • marvinblum 4 days ago

    It's the same for me. I switched to DuckDuckGo about 2 or 3 years ago and it feels like Google used to. I'm always shocked to see how bad the results are and how cluttered the top section is on Google if I happen to search there on someone else's computer.

    LLMs have mostly been useful for three things: single line code completion (in GoLand), quickly translating JSON, and generating/optimizing marketing texts.

  • averageRoyalty 5 days ago

    Agreed, for resource location, Kagi feels like Google did 20 years ago.

    I use LLMs as a sounding board. Often if I'm trying to tease out the shape of a concept in my head, it's best to write it out. I now do this in the form of a question or request for information and dump it into the LLM.

star-glider 5 days ago

This is my favorite thing about Kagi; you can do both. If you just append a question mark, it'll run the search through a simple LLM and give you those results (with citations) right before standard search. From there, you can proceed into a more sophisticated frontier model if that's more effective.

"Search" can mean a lot of things. Sometimes I just want a website but can't remember the URL (traditional); other times I want an answer (LLMs); and other times, I want a bunch of resources to learn more (search+LLMs).

  • sshine 5 days ago

    And sometimes you have all the data, but it’s too much, so you ask for a summary and ask elaborating questions.

    • dharmab 5 days ago

      I've found this ineffective for anything where I need a factual answer, and brilliant where I need the vibe of subjective or fictional things.

      Bad: summarizing scientific research or technical data

      Great: finding travel ideas or clarifying aspects of a franchise's fictional universe.

      • sshine 4 days ago

        I've found Kagi's Universal Summarizer useful for:

          - Long reads when I just want to know the conclusion
          - Reading terms of services that I would otherwise have blindly accepted
          - Summarizing key requirements in large tender documents prior to bidding on work
        
        As for summaries in general, FastGPT is so useful I apply it to most of my searches.

        Especially if I'm looking for a small fact buried in the first results.

bayindirh 5 days ago

I use Kagi exclusively and refuse to offload my brain to a thing which has no accuracy guarantee whatsoever. The answers it emits can be completely bogus, and the developers of these things lowkey expect me to believe whatever their black box says? Nah, never.

Instead I use a search engine and do my own reading and filtering. This way I learn about what I'm researching, too, so I don't fall into the vicious cycle of drug abu ^H^H^H^H^H laziness. Otherwise I'll inevitably rely more and more on that thing, and be a prisoner of my own making by increasingly offloading my tasks to a black box and becoming dependent on it.

  • drpixie 5 days ago

    100% agree.

    Google recently (unrequested) provided me with very detailed AI generated instructions for server config - instructions that would have completely blown away the server. There will be someone out there who just follows the bouncing ball, I hope they've got good friends, understanding colleagues, and good backups!

  • tasuki 4 days ago

    > I use Kagi exclusively and refuse to offload my brain to a thing which has no accuracy guarantee ever.

    What a weird sentence. What accuracy guarantees does Kagi have? Or, if you're not "offloading your brain to it", can't you do the same with an LLM?

    • bayindirh 4 days ago

      Kagi is a search engine which is developing its own index, and it uses other indexes to give a more comprehensive result if its own index doesn't fulfill the query you made.

      Moreover, Kagi is a paid service. It has no ads, no hidden ranking, nothing to earn money by manipulating you. On the contrary you, the user, can add filters and ranking modifiers to promote the sites you find to be useful/truthful and demote others which push slop and SEO optimized content to your eyeballs. This is per user, and is not meddled with.

      This makes Kagi very deterministic (unlike LLMs), very controllable (unlike LLMs), and very personalized (unlike LLMs). Moreover, Kagi gives you ~20 results or so per search, and no fillers (again, unlike LLMs).

      I don't use Kagi's AI assistance features, and I don't pay for the "assistant" part of it, either.

      I don't offload my brain to Kagi, because I don't prompt it until it gives me something I like. Instead, I get the results, read them, learn what I'm looking for, and possibly document what I got out of that research. This usage pattern is again very different from prompting an LLM until it gives you something that somewhat works or sounds plausible.

      I do the hard work of synthesizing and understanding the answer. Not reading some slop and accepting it at face value.

      • tasuki 4 days ago

        > I don't offload my brain to Kagi, because I don't prompt it until it gives me something I like.

        Similarly, I don't offload my brain to LLMs.

        > I do the hard work of synthesizing and understanding the answer. Not reading some slop and accepting it at face value.

        Again, it's not necessary to accept LLM output at face value.

        Use tools, think for yourself, sure. This applies to various tools: Kagi, LLMs, and others. None of these give you "accuracy guarantees". You usually have to think for yourself.

        My favourite example of a situation where you don't have to think for yourself is asking an LLM to implement a function in a very strongly typed language. There only is one implementation of `a -> a`. For `(a -> b) -> List a -> List b`, you could return an empty list instead of performing map. There aren't that many implementations of `(a -> b -> b) -> b -> List a -> b` (three as far as I can see: left/right fold and a function which just returns the accumulator). It's easier to verify the LLM solution than to implement it yourself!
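
        For anyone who wants to see that concretely outside of Haskell-style type notation, here's a rough Python sketch of the same three signatures using generics. This is an assumption on my part that the translation is faithful: Python's checker is far weaker than a real strongly typed language, and the curried `a -> b -> b` becomes an uncurried two-argument callable.

          from typing import Callable, TypeVar

          A = TypeVar("A")
          B = TypeVar("B")

          def identity(x: A) -> A:
              # `a -> a`: with no other way to conjure an A, returning x is the only total implementation
              return x

          def fmap(f: Callable[[A], B], xs: list[A]) -> list[B]:
              # `(a -> b) -> List a -> List b`: apply f to every element
              # (returning [] would also type-check, which is the loophole mentioned above)
              return [f(x) for x in xs]

          def foldr(f: Callable[[A, B], B], acc: B, xs: list[A]) -> B:
              # `(a -> b -> b) -> b -> List a -> b`: a right fold, one of the handful of
              # implementations the type allows (left fold and "just return acc" are the others)
              for x in reversed(xs):
                  acc = f(x, acc)
              return acc

        The verification step is then just reading the body against the signature (or running a strict type checker over it), which is usually quicker than writing it yourself.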

EliasWatson 5 days ago

Google results have gotten so terrible over the years. I switched to Kagi long ago and haven't looked back. Whenever I use Google on another computer, I'm shocked by how awful the results are compared to Kagi.

As for AI search, I do find it extremely useful when I don't know the right words to search for. The LLM will instantly figure out what I'm trying to say.

  • sshine 5 days ago

    Probably 70% of my searches are FastGPT searches, meaning I end my search query with a ‘?’ and Kagi summarises the results, so I don’t need to click.

    And the ratio between using search engine and Kagi’s LLM agent with search is still 70% search. Sometimes, searching is faster, sometimes asking AI is faster.

  • jacobmarble 5 days ago

    Same. I switched to Kagi over a year ago, and now every other search engine looks like a steaming pile of ads and slop.

tiborsaas 5 days ago

I'm the inverse: I still use search engines 90% of the time, mostly Google. LLMs can't help me with researching Hungarian companies offering the screws, furniture, TVs, etc. I need for my home renovation. They can't find me the best route to a cafe, look up users, or find information on famous people. Google is also faster than me typing a good prompt.

I use LLMs for what they are good at: generative stuff. I know some tasks would take me a long time, and I can shortcut them with LLMs easily.

So here's a ChatGPT example query* which is completely off:

https://chatgpt.com/share/67f5a071-53bc-8013-9c32-25cc2857e5...

* It's intentionally bad, to be able to compare it with Google.

And here's the web result, which is spot on:

https://imgur.com/a/6ELOeS1

  • zer00eyz 5 days ago

    My take is close to yours...

    LLMs are great when you want AN answer and don't want to get sidetracked.

    Search is great when you want to know what answers are out there. The best example is Recipes... From what spices go into chai to the spice mix in any given version of chili (let's not start on beans).

    The former is filling in missing knowledge; the latter is learning.

  • keithnz 5 days ago

    This is what I put in ChatGPT... i.e., exactly what you put in Google.

    https://imgur.com/a/boNS2YZ

    https://chatgpt.com/share/67f5a9f9-f0a8-800d-9101-aafb88e455...

    which I think is way better than google.

    • tiborsaas 5 days ago

      It does a web search, so you got luckier. I searched for it because I found it on the product page of the manufacturer and I was interested in finding a place to buy it.

      Google offered me a few hits with existing businesses; with ChatGPT I need to do another query.

      Out of curiosity I tried it, and it did take me to a wholesale company (a single result), but the Google results are better, with cheaper options (multiple good results), and I can also parse the list faster by eye.

      Sure, I can just write a better prompt:

      https://chatgpt.com/share/67f5b09b-c154-8013-840f-934af8302f...

      This is my third attempt to get it right, but it found me one which I haven't seen before. However I would still do a Google search to be thorough and get the best deal.

yellowapple 5 days ago

LLMs are still notorious for hallucination; last I checked ChatGPT in particular still hallucinates about 1/3rd of the time.

So yeah, I do still use search engines, specifically Kagi and (as a fallback) DuckDuckGo. From either of them I might tack on a !g if I'm dissatisfied with the results, but it's pretty rare for Google's results to be any better.

When I do use an LLM, it's specifically for churning through some unstructured text for specific answers about it, with the understanding that I'll want to verify those answers myself. An LLM's great for taking queries like "What parts of this document talk about $FOO?" and spitting out a list of excerpts that discuss $FOO that I can then go back and spot-check myself for accuracy.

mepian 5 days ago

Yes, I'm still using Google as I haven't found LLMs useful as a search engine replacement.

  • dowager_dan99 5 days ago

    but the AI responses from google that dominate the screen real estate are terrible. When you repeat the exact same query and get changing (but all wrong) answers, something is broken. I've resorted to including profanity in all my searches to prevent the AI responses, which is suboptimal at work...

    • nvllsvm 5 days ago

      Change your browser's search URL template to include the udm=14 query parameter. It eliminates a whole bunch of the modern noise like AI.

      Ex. https://www.google.com/search?udm=14&q=your%20query
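
      If your browser wants a reusable search-template URL rather than a one-off link, the same parameter can go into a custom search engine entry, something like the line below (assuming the usual %s placeholder that Chrome and Firefox substitute with the query):

        https://www.google.com/search?udm=14&q=%s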

      • addaon 5 days ago

        Is there any way to do this in current (desktop) Chrome? Instructions I can find all suggest that the query string should be editable, but looks like it's locked on v134.

        • nvllsvm 5 days ago

          It looks like the default search providers are not editable. I worked around it with Chromium v135 by creating a new one and deleting the old one.

    • matthewkayin 5 days ago

      If you're using google you can just specify "-ai" at the end of your search, which I've started doing ever since Google's AI tried to tell me that the American Civil War was that time when the North and South fought over gold in California.

      DuckDuckGo also has an option where you can turn off the AI search so that you don't have to specify every time. I've found DDG sometimes gives me better results than Google and sometimes doesn't.

    • bufferoverflow 5 days ago

      The results I get from google's AI at the top of search results are usually spot-on. I often search for code-related technical stuff, and like 90% of the time it saves me a trip to StackOverflow.

      • ImaCake 5 days ago

        Agreed with this. A lot of people seem to think they suck, and sometimes they do, but I find them to be more useful than not. It’s actually really nice having a prose digest of the most plausible links for my search query.

    • grishka 5 days ago

      Then I guess it's good to live in a country where no AI products from western companies are available for some reason. As in, if I would really need to use ChatGPT, I'd have to do that through a VPN. Everyone talks about Google's AI as if it's taken over everything but I have literally never seen any. And I use Google products a lot.

    • eclipxe 5 days ago

      They work pretty well for me.

    • layer8 5 days ago

      Use verbatim mode (tbs=li:1). Use a content blocker that removes the AI responses and ads.
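
      For example, with the verbatim parameter appended to an ordinary search URL (the query itself is just illustrative):

        https://www.google.com/search?q=example+query&tbs=li:1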

    • mepian 4 days ago

      I'm using the udm14 extension in Firefox to hide those.

stonemetal12 5 days ago

No, I find it unwilling to produce factual information.

For example Jeep consistently lands at the bottom of the reliability ratings. Try asking GPT if Jeeps are reliable. The response reads like Jeep advertising.

  • s1artibartfast 5 days ago

    GPT models have a people-pleasing bias and a positivity bias. If you want factual information, you have to modify your prompt. I imagine you would get very different results if you asked "are Jeeps more reliable than Toyotas" or "how do Jeeps compare to the median car in terms of reliability".

    My impression is that different LLMs are more or less people-pleasing. I found Grok is more willing to tell me something is a bad idea.

    • hbn 5 days ago

      I wouldn't be surprised if the training was weighted to favor text it learned from more "reliable" or "professional" resources, which in the case of products, would be official sales listings that talk about how great their product is.

      • jacobr1 5 days ago

        It is more subtle than that. It is trained on "everything" so the specific adjectives and words you use in your prompt will cause it to generate very different responses. The fine tuning causes it to prefer by default more reliable/professional responses ... but that is not because the training data is weighted toward them as such. If you mention specific publications, or forums, it will give you responses more likely to come from those.

        Looking at the reasoning traces for the new reasoning models, you can actually see how fine tuning is moving toward having models list the assumptions around data sources, which should be trusted, list multiple perspectives and then summarize, resulting in better answers. You can do that today with non-reasoning models, but you need to prompt engineer it to ask for that explicitly. This process of identifying not just extant content, but teaching systems how to approach problem analysis (instruction tuning, reasoning traces, etc.) will be key to influencing how the models work and increasingly how they are differentiated.

      • s1artibartfast 5 days ago

        I don't think that's a big part of it, although it may be included.

        In general, the models lean towards being Yes-Men on just about every topic, including things without official sources. I think this is a byproduct of them being trained to be friendly and agreeable. Nobody wants a product that's rude or contrarian, and this puts a huge finger on the scale. I imagine a model unfiltered for safety, attitude, and political correctness would have less of this bias (but perhaps more of other biases).

        • dr_kiszonka 4 days ago

          Do people really prefer Yes-Men models? It appears that you are right but I find it surprising.

          I very much prefer more disagreeable, critical models. GPT 4o and o3-mini will sometimes not tell you that you, e.g., didn't attach a file you asked to be analyzed and will instead hallucinate its contents, presumably not to upset you. Of course, their hallucinations are way more annoying.

  • 0xbadcafebee 4 days ago

    Instead ask it to show you links to websites that review reliability ratings and highlight the results for Jeeps along with sources. It's annoying, but how you ask it questions is often more important than what you're asking. (This was a thing when search engines were first introduced too)

  • afpx 5 days ago

    Yes, often unreliable. And, they will give different answers at different times.

    https://chatgpt.com/share/67f57459-2744-8009-a94e-3b67dce8fd...

    “[Jeeps] often score below average in reliability rankings from sources like Consumer Reports and J.D. Power.”

    • jay_kyburz 5 days ago

      I tried the trick of asking Gemini to search the web. It responded by telling me Jeep was "average". When I checked the Consumer Reports website it gave me, it scores Jeep 19th out of 22. I don't call that average.

  • marcusverus 5 days ago

    LLMs are like humans. They don't know what you mean, only what you say. They can't tell you what you want to know, they can only answer the question you actually ask! The question you asked is broad and phrased in a way that begs a simplistic answer about the entire brand. Obviously an answer to that question will do a worse job of laying out the relative reliability of current jeep models than would a report which was created to address that specific question.

    If you want to know how modern Jeep models stack up against their peers in terms of reliability, try asking GPT that question!

    • jay_kyburz 5 days ago

      Actually, you hit the nail on the head, they are _not_ like humans because they only know what you say, not what you mean. A human would understand that when you ask a broad question, you want a broad answer. Broadly speaking, Jeeps are bad.

      Our current LLMs are kneecapped because they are very reluctant to be negative.

RattlesnakeJake 5 days ago

I use DuckDuckGo, with the occasional reddit !g appended if I'm looking for something experience-based.

For me, searches fall into one of three categories, none of which are a good fit for LLMs:

1. A single business, location, object, or concept (I really just want the Google Maps or Wikipedia page, and I'm too lazy to go straight to the site). For these queries, LLMs are either overkill or outdated.

2. Product reviews, setup instructions, and other real-world blog posts. LLMs want to summarize these, and I don't want that.

3. Really specific knowledge in a limited domain ("2017 Kia Sedona automatic sliding door motor replacement steps," "Can I exit a Queue-Triggered Azure Function without removing it from the queue?"). In these cases, the LLMs are so prone to hallucination that I can't trust them.

  • georgemcbay 5 days ago

    I still use google but I pretty much always append site:reddit.com to the query.

    The answer I'm seeking is not always on reddit itself, but google limited to reddit is far more likely to give me quality starting links than google unbound is.

  • hbn 5 days ago

    I've tried DDG but it doesn't seem much different from the results on Google/Bing (as I understand it they use Bing's search index anyway?)

    • fsflover 4 days ago

      If DDG isn't much different and it doesn't track you, then what's the point of using Google?

  • supportengineer 5 days ago

    In response to the final sentence, you can work around this by breaking your problem or question down into smaller pieces. Essentially forcing it to reason, manually.

    • throwawa14223 5 days ago

      You can but that is incredibly slow for me compared to a search term.

  • inferiorhuman 5 days ago

    Unfortunately DDG still enshittifies their results with AI garbage as much as they can, both with the intrusive AI blob at the top and by replacing the page summaries with dreadful AI gobbledygook.

    • dijksterhuis 5 days ago
      • inferiorhuman 5 days ago

        Other way around. That disables the AI chatbot monstrosity at the top of the page. But only if you're logged in and/or don't clear your cookies.

        The AI word salad summaries for each individual page have no toggle (unless you count !g).

        • dijksterhuis 5 days ago

          I actually have two settings in the AI features settings area: one for "chat" and another for "assist" -- which I think is the word salad summaries? I've got chat set to Off and assist set to Never.

          Searching 'octatrack' has the wikipedia page on the top right as a summary box thing. no ai word salad for me :shrug:

          • inferiorhuman 5 days ago

            A/B testing. I don't always get AI gibberish in place of actual page summaries. Either way I want to see actual results even if I clear my cookies or open a new private tab.

anoldperson 5 days ago

AFAICT ChatGPT is mostly useless and can't be trusted to answer questions accurately. So no, mostly all search engines. To be honest, I'm surprised anybody uses it for anything other than trivial tasks.

  • jacobr1 5 days ago

    Are you using a paid version? Do you use web-search? And have you tried alternatives like Claude?

    I've mostly switched to using Claude these days, with MCPs for websearch and fetching specific remote or local files. It answers questions generally very accurately (from the source documents it identifies) and includes citations.

    I've found that people that haven't really tried the latest models, and just rely on whatever knowledge is in the model training are really missing out on the potential power. GPT4o+ and equivalent models really changed the game. And using tools to do a search, or pull in your code, or run a db query or whatever enables them to either synthesize information or generate context relevant material. Not perfect for everything, but much better than a year ago, or what people are doing with the free systems.

    • voidUpdate 5 days ago

      It's an interesting business model to give a version that everyone says is awful to people just trying it out, and locking the actually useful version behind a paywall, so anyone who doesn't want to immediately give money gets a bad experience...

  • 0xbadcafebee 4 days ago

    Search engines aren't accurate either; they show you 10,000+ pages for your search query. You probably weren't looking for 10,000 answers. The problem is, they can't read your mind. ChatGPT can show you the results you want, just like search engines can; you just may have to tweak your query.

    • anoldperson 20 hours ago

      The last thing I searched for was about the lady who sank the New Zealand naval vessel. It told me she was a captain in the United States Navy. I said that was absolutely not true, and it told me "of course you are correct, she is from the Australian Navy", and nothing I could say could convince it otherwise.

      If it can't manage one small fact about something that was covered quite a bit in the previous month, then it is worse than useless. At the very least it should say "I don't know". It reminds me of that one guy we all know who does nothing but make stuff up when talking about things outside of his wheelhouse. Never backs down, never learns anything, and is ultimately dumped from the relationship.

  • IAmGraydon 5 days ago

    It answers questions extremely accurately in my experience. It's improved a lot in just the last few months.

senko 5 days ago

I use (and pay for) Kagi.

Even without much customization (lenses, scoring, etc) it's so much better (for my use cases) I happily pay for it.

Recently I have also started to use Perplexity more for "research for a few minutes and get back to me" type of things.

Queries like "what was that Python package for X" I usually ask an AI right from my editor, or ChatGPT if I'm in the browser already.

disambiguation 5 days ago

I use both, but direct search is faster since I have to fact check the LLM's answer.

2 recent success stories:

I was toying around with an ESP32 - I was experimenting with turning it into a Bluetooth remote control device. The online guides help to an extent, setting up and running sample projects, but the segue into deploying my own code was less clear. LLMs are "expert beginners", so this was a perfect request for one. I was able to jump from demos to live-deploying my own code very quickly.

Another time I was tinkering with OPNsense and setting up VLANs. The router config is easy enough, but what I didn't realize before diving in was that the switch and access point require configuration too. What's difficult about searching this kind of problem is that most of the info is buried in old blog posts and forum threads and requires a lot of digging and piecing together disparate details. I wasn't lucky enough to find someone who did a writeup with my exact setup, but since LLMs are trained on all those old message boards, this was again a perfect prompt playing to their strengths.

footy 5 days ago

I use Kagi and sometimes DDG. When I do a search I'd rather do my own reading than be lied to. It's not even like using it for code, where you can quickly iterate if needed; there is no way to verify that the information you got is correct, and that is a major problem imo.

matt_trentini 5 days ago

Using search engines is still _significantly_ faster for me for the vast majority of the queries I want answers for.

The results from LLMs are still too slow, vary too much in quality and still frequently hallucinate.

My typical use-case is that when I'm looking for an answer I make a search query, sometimes a few. Then scan through the list of results and open tabs for the most promising of them - often recognising trusted, or at least familiar, sites. I then scan through those tabs for the best results. It turns out I can scan rapidly - that whole process only takes a few seconds, maybe a minute for the more complex queries.

I've found LLMs are good when you have open-ended questions, when you're not really sure what you're looking for. They can help narrow the search space.

joseda-hg 5 days ago

I use Kagi, but I will say, the Quick Answer (place a question mark after your query for an LLM-based answer) has been way more useful than I initially thought.

saaaaaam 5 days ago

Do you really though? Because I had this conversation with someone recently, and she was still typing stuff into the browser bar in Chrome and then clicking on stuff from the search results. I think a lot of folk think ChatGPT has superseded search, but they don't realise they are still carrying out a load of low-level or transactional search queries via Chrome.

jpc0 5 days ago

100% still search first. If I am not super knowledgeable about the domain I am searching in, I use an AI to get me keywords and terminology and then search.

At most I use AI now to speed up my research phase dramatically. AI is also pretty good at showing what is in the ballpark for more popular tools.

However, I am missing forum-style communities more and more. Sometimes I don't want the correct answer; I want to know what someone who has been in the trenches for 10 years has to say. For my day job I can just make a phone call, but for hobbies, side projects, etc. I don't have the contacts built up and I don't always have local interest groups that I can tap for knowledge.

GuB-42 5 days ago

I only use LLMs when I don't know what I am looking for. Otherwise, search engines all the way.

LLMs can't be trusted, you have no way to tell between a correct answer and a hallucination. Which means I often end up searching what the LLM told me just to check, and it is often wrong.

Search engines can also lead you to false information, but you have a lot more context. For example, a StackOverflow answer has comments, and often, they point out important nuances and inaccuracies. You can also cross-reference different websites, and gauge how reliable the information is (ex: primary source vs Reddit post). A well trained LLM can do that implicitly, but you have no idea how it did for your particular case.

  • pizzly 5 days ago

    I use LLMs for almost everything: summarizing, finding out the terminology used in very different fields that I have no knowledge of, initial research of any field, coding (no more Google-searching Stack Overflow), pretty much everything. I only use a search engine when searching for companies/products, due to some mistrust that an LLM would find all the products or companies, including very small ones. But I am very open to removing search engines completely if this last point is satisfied.

axegon_ 4 days ago

Yes, I do. I'd never use an LLM for any meaningful or important information because, by design, they simply emit the most likely next token, and you get a ton of responses which are pure nonsense if you start digging into them. Mind you, I've been noticing that Google has been terrible for a long while now. Kagi seemed alright at first but it also gave a lot of nonsense. The final straw for Kagi was the fact that they are backed by Yandex and by extension fund the Russian terrorist state. Lately I've switched to Qwant and so far it covers almost all of my needs.

  • mavamaarten 4 days ago

    I've just recently switched to Qwant from Google. It serves all my purposes perfectly, except for programming queries unfortunately.

    Google seems to be better at bringing up a variety of stackoverflow and blogposts relevant to my search queries. Qwant seems to struggle exactly with that: it's great at giving exactly what I was searching for but that's sometimes not what I was looking for, if you get what I mean.

    In a sense, LLMs are actually perfect for that. But like you say, the super-confident hallucinations are just too frustrating. Literally every time I've asked one a serious programming question, it's hallucinated an API that doesn't exist. Everyone seems to be focusing on letting LLMs solve math and thinking problems. That's exactly what _I'm_ good at. I would much rather have an LLM that is good at combining sources and giving me facts (knowledge, rather than thinking) while most of all being able to say "I don't know".

    • axegon_ 4 days ago

      Qwant has been reasonably good with programming for me, though, or maybe I'm more commonly going straight to the documentation or source code, idk. My only complaint is that Wikipedia tends to be pretty far down in the results for non-programming questions. LLMs (all of them) have been terrible with programming questions too. Imaginary APIs, leaks, code that outright breaks your data, you name it. Oh, and don't get me started on optimization and performance (which is ultimately one of my main concerns on a day-to-day basis). They are absolutely horrible.

foragerdev 5 days ago

I still use search engines. I do not like to be spoon-fed. I want to learn from real people, not AI-generated shit. ChatGPT and other LLMs are trained on old data; they do not contain newer information. Newer information and knowledge are produced by real humans. LLMs are great for quick fact checking, but not for searching. For example: what's the height of Mount Everest? An LLM will most probably give the right answer.

What are the specs of the new Google Pixel 9a? An LLM can't answer this; maybe after a year it can.

  • rrr_oh_man 5 days ago

    > Newer information and knowledge are produced by the real humans.

    Not anymore

    • GuinansEyebrows 5 days ago

      Can you elaborate?

      • rrr_oh_man 5 days ago

        Whenever you search for some advice (laptop with best battery life in 2025) or reviews (HubSpot vs. Salesforce) you are barraged with torrents of AI-generated slop. Now that it's also taking over the search engine itself (AI summaries), your whole experience is dumbed down to the point of a mindless automaton consuming pre-packaged tidbits of revenue-generating, pre-approved, sanitised, elite-friendly information.

        I've always found it curious how the level of technology in Children of Men seemed to have stalled at around the time of the outbreak. From where I see it, the maturation of LLMs is the zombie outbreak for human communication.

juliangmp 4 days ago

I don't even consider using an LLM as a search engine. But I do agree, Google has declined drastically in quality. Personally I'm on DuckDuckGo, though it always depends on the topics you search for.

dcl 5 days ago

Search engines are still required for me. LLMs still get lots of very important things wrong.

Last night, I asked Claude 3.7 Sonnet to obtain historical gold prices in AUD and the ASX200 TR index values and plot the ratio of them. It got all of the tickers wrong - I had to google them - and it then got a bunch of other stuff wrong in the code.

Also yesterday, I was preparing a brief summary of forecasting metrics/measures for a stakeholder and it incorrectly described the properties of SMAPE (easily validated by checking Wikipedia).

I constantly have issues with my direct reports writing code using LLMs. The models constantly hallucinate things for some of the SDKs we use.

  • throwaway422432 5 days ago

    Asking for a list of companies in a specific sector also gives you made up tickers, or at best a list it found on some blog.

    Was a bit more useful at questions like "Rank these stocks by exposure to the Chinese market", as you can prioritise your own research but in the end you just have to go through the individual company filings yourself.

caseyy 5 days ago

I was among the first to champion AI search, even before Perplexity rose to fame. You.com was the first AI search to quote sources well, and I used it extensively.

But now, the veracity of most LLMs' responses is terrible. They often include “sources” unrelated to what they say, and they hallucinate when I search for topics I'm an expert in. Even Gemini in Google Search told me yesterday that Ada Lovelace invented the first programming language in the 18th century. The trust is completely gone.

So, I'm back to the plain old search. At least it doesn't obscure its sources, and I can get a sense of the veracity of what I find.

  • esperent 5 days ago

    > The trust is completely gone

    I mean, for everyone else it was never there to begin with. Hallucinations are constantly raised as the biggest issue with AI. According to the tests, and my experience, newer AI models are objectively better, not worse, than the ones from a few years ago. They still have a long way to go/may never be fully trustworthy, though.

    What I have lost trust in, and what I and many feel has become much worse over the few years, is Google search, and all the other search engines that are based on it.

    • caseyy 5 days ago

      I think the old ones were more trustworthy, as the LLM wasn't used for knowledge retrieval. Instead, it wrapped around a search API and summarized the "10 blue links". That's not the case anymore. Now they produce the search results, hallucinating along the way.

      • esperent 5 days ago

        > That's not the case anymore

        Isn't that what ChatGPT/anthropic web access does?

geocrasher 5 days ago

I use search engines (Google) and when they (it) fails to provide me the responses I need, I turn to ChatGPT. For example:

I recently upgraded my video card, and I run a 4K display. Suddenly the display was randomly disconnecting until I restarted the monitor. I googled my brains out trying to figure out the issue, and got nowhere.

So I gave ChatGPT a shot. I told it exactly what I upgraded from/to, and which monitor I have, and it said "Oh, your HDMI 2.0 cable is specced to work, but AMD cards love HDMI2.1 especially ones that are grounded, so go get one of those even if it's overspecced for your setup."

So I did what it said, and it worked.

throwawa14223 5 days ago

I use Kagi and I don't think I'd notice if quick answer disappeared. Most of the time I have an answer in the time it would take for GPT to present a prompt.

neilsimp1 5 days ago

Yes, DDG for 95% of issues. Using an AI to search seems really, really, really dumb to me.

  • nailer 5 days ago

    That’s a perfectly fine answer, but providing no supporting arguments makes this a very difficult conversation.

  • jrvarela56 5 days ago

    I’d say come back in a few years for a bad take. But this is already a bad take.

    A query in a regular search engine can at best perform like an LLM-based provider like Perplexity for simple queries.

    If you have to click or browse several results, forget it; it makes no sense not to use an LLM that provides sources.

    • subsection1h 5 days ago

      > If you have to click or browse several results forget it, makes no sense not to use an LLM that provides sources.

      I just searched for "What is inherit_errexit?" at Perplexity. Eight sources were provided and none of them were the most authoritative source, which is this page in the Bash manual:

      https://www.gnu.org/software/bash/manual/html_node/The-Shopt...

      Whereas, when I searched for "inherit_errexit" using Google Search, the above page was the sixth result. And when I searched for "inherit_errexit" using DuckDuckGo, the above page was the third result.

      I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.

      • josefresco 5 days ago

        > I continue to believe that LLMs are favored by people who don't care about developing an understanding of subjects based on the most authoritative source material. These are people who don't read science journals, they don't read technical specifications, they don't read man pages, and they don't read a program's source code before installing the program. These are people who prioritize convenience above all else.

        This makes a lot of sense to me. As a young guy in the 90's I was told that some day "everyone will be fluent in computers", and 25 years later it's just not true. 95% of my peers never developed that fluency, and my kids even less so. The same will hold true for AI: it will be what smartphones were to PCs, a dumbed-down interface for people who want to USE tech, not understand it.

      • dijksterhuis 5 days ago

        I've really wanted to write a clickbait blog post article to post on HN [0] with the title "Hackers don't use LLMs". You've pretty succinctly summarised how I feel about the subject with your last paragraph.

        [0]: not that I write blog post articles anyway, it's just a fantasy day dream thing that's been running through my head

      • dbmnt 4 days ago

        In the last paragraph you describe the overwhelming majority of humanity, seemingly without any sense of irony. What outcome do you expect here?

      • jrvarela56 5 days ago

        Why would you even search for that out of the context of the IDE where you're coding or writing documentation? If you're writing bash you'd have all those man pages loaded in context for it to answer questions and generate code properly.

        • ziddoap 5 days ago

          >you'd have all those man pages loaded in context for it to answer

          Or I can just go to DDG/Google, and be done with it. No need to pre-load my "search engine" with context to get results.

        • dijksterhuis 5 days ago

          Not GP.

          Alt + Tab > Ctrl + T > Type > Enter > PgDn > Click > PgDn > Alt + Left > Click > PgDn > Alt + Left > Click > PgDn > Alt + Tab > [Another 45-60 minutes coding] > GOTO Start

          With these keybinds (plus clicking mouse, yuck) I can read Nx sources of information around a topic.

          I'm always looking to read around the topic. I don't stop at the first result. I always want to read multiple sources to (a) confirm that's the standard approach (b) if not, are there other approaches that might be suitable (c) is there anything else that I'm not aware of yet. I don't want the first answer. I want all the answers, then I want to make my own choices about what fits with the codebase that I am writing or the problem domain that I'm working in.

          Due to muscle memory, the first four/five steps I can do in like one or two seconds. Sometimes less.

          Switching to the browser puts my brain into "absorb new information" mode, which is a different skill to "do what IDE tells me to do". Because, as a software engineer, my job is to learn about the problem domain and come up with appropriate solutions given known constraints -- not to blindly write whatever code I'm first exposed to by my IDE. I don't work in an "IDE context". I work in a "solving problems with software context".

          ==

          So I agree with the GP. A lot of posts I see about people saying "why not just use LLM" seem to be driven by a motivation for convenience. Or, more accurately, unconsidered/blind laziness.

          It's okay to be lazy. But be smart lazy. Think and work hard about how to be lazy effectively.

    • giantg2 5 days ago

      I like to see multiple ideas or opinions on a subject. LLMs seem to distill the knowledge and opinions in ways that are more winner-take-all, or at most cover only the top few samples. Even if you prompt for a deeper sampling, it seems the quality drops (like the resolution reduces for each), and it's still based on popularity vs. merit for some types of data.

Delk 5 days ago

Yes. I like being able to evaluate my sources. For programming or other technical topics, I'd rather read the original documentation, or third-party information whose credibility I can have some idea about.

For other topics, exact pedantic correctness may not always be as important, but I definitely do want to be able to evaluate my sources nevertheless, for other obvious reasons.

Search is actually pretty much what I want: a condensed list of possible sources of information for whatever I'm looking for. I can then build my own understanding of the topic by checking the sources and judging their credibility. Search seems to have been getting worse lately, sadly, but it's still useful.

  • s1artibartfast 5 days ago

    I feel like it has been years since search was useful for finding original documentation and sources. Around the time fuzzy responses were introduced and quote searching was removed.

leephillips 5 days ago

I subscribe to https://kagi.com/. I use search to find expert and authoritative sources of information with human authors who can be held responsible for their contents, and that I can cite in my own work. I’m not interested in the output of a copy-paste machine that steals others’ work, makes things up, and spits out prose worse than a politician’s.

jasonvorhe 5 days ago

Kagi with a lengthy exclusion/block list (fact checkers, Pinterest, etc), Brave Search, DDG as a rare 3rd option. Not using any explicit AI search engines like Perplexity, but I make use of Kagi's summaries a lot.

layman51 5 days ago

Search engines have gotten worse but they are still much more helpful for finding certain resources compared to LLMs. I am fond of the search operators that still work like `filetype:pdf`, `site:example.com`, `intitle:trailmix`.

If they get rid of those operators, then that would be really bad. But I have a feeling that’s what a lot of search engine people are itching to do.

milesvp 5 days ago

There is a class of problems I no longer use search for. I find LLMs give really good results for things like command line usage, or even things like configuring an application. Basically anything that can be summarized from lots of disparate sources.

Conversely it’s a huge mistake to rely on LLMs for anything that requires authoritative content. It’s not good at appropriately discounting low quality sources in my experience. Google can have a similar problem, but I find it easier to find good sources there first for many topics.

Where LLMs really replace modern Google is for topics you only kind of think should exist. Google used to show some pretty tenuously related links by the time you got to page 5 of the results, and there you might find terms that bring you closer to what you're looking for. Google simply doesn't do that anymore. So for me, one of the joys is being able to explore topics in a way I haven't been able to for over a decade.

trumbitta2 5 days ago

Nah. I'm perfectly conscious of the fact that ChatGPT can't be trusted with searches. Google is still my daily driver.

  • bromuro 5 days ago

    My ChatGPT “searches the web” and provides URLs of the sources as well.

    • JohnFen 5 days ago

      True, but that doesn't mitigate the problem I have with using LLMs as a search engine replacement. The issue I have is that LLMs "predigest" things and present you with only the sources that are relevant to their response.

      However, it still blinds you to the larger picture. Providing supporting sources is all well and good, but doesn't help you with the larger view. I want the larger view.

      • keithnz 5 days ago

        I find the exact opposite. LLMs are much better at giving larger views; Google will just spit out whatever it matches, with no concept of a larger view and no way to ask Google to broaden its search around a concept.

        • JohnFen 4 days ago

          Well, if you're comparing LLMs to Google, you may be right. I find Google to be borderline worthless. But compared to other search engines, I find LLM responses to be very lacking and restrictive.

sonorous_sub 5 days ago

I use ChatGPT sometimes, but only after I've exhausted Google's results for my search and not found the answer I was looking for, or when the query is so obscure that the enhanced problem-solving ability of ChatGPT warrants going to it first. I like ChatGPT for solving mundane math problems because I can check its work, and getting the answers I need that way is quicker than doing it myself manually. I still don't trust ChatGPT for anything subjective, because I get spurious results from it anytime the answer to my question is not cut and dried. But what it can do, it does well.

I don't have a circle of friends, so I have no idea what other people are doing, outside of what I read online.

jemmyw 5 days ago

I use search engines all the time (Kagi specifically). AIs don't have up-to-date information. How would you find reviews for products via an AI? It'll just come up with one or two, whereas when you read reviews yourself you can pick out nuance and also tell whether it's a genuine review or made-up garbage. Or find a place to buy something. Or a place to go, and read other people's comments on it. Summaries aren't very useful over comments imo.

I use an LLM a lot for coding. However, I was never as much into doing web searches for programming problems anyway, I used docs more and rarely needed sites like SO. I haven't therefore moved away from search engines for that side of things.

Cthulhu_ 4 days ago

I use search engines, but that's because I just yeet in a few words and I get a result, either directly through the preview or after a click through to the results.

With chatbots I first need to formulate a question (or, I feel like I do), then wait for it to slowly churn out an overly wordy response. Or I need to prompt it first to keep it short.

I suppose this is different if you already used a search engine by asking it a fully formulated question like "What is a html fieldset and how do I use it?" instead of "html fieldset" and clicking through to MDN.

plsbenice34 5 days ago

I use a search engine 99% of the time. Occasionally I use an LLM, but even for checking the most simple information I am not able to have any confidence that the answer it gives is correct. It seems to lie to me every time I use it and contradicts itself when I tell it that it made an error. It provides no citation to where it got its information, and that seems completely essential. I very rarely see any use for it. Even if a search engine is much slower, I will not compromise on knowing where the information is sourced from so I can judge its accuracy, bias, etc. I feel disturbed by all the people that have lower information standards.

  • Slash65 5 days ago

    I think this is the real problem. If it's unsourced, how can I verify the LLM isn't hallucinating? That being said, I started running Open WebUI to host models locally and have heard that some will source their content (I don't know which; I haven't hosted them yet), so that is promising. I also like hosting DeepSeek locally and being able to review its logic process so I can assess how it arrived at its conclusions. All that to say, I still use a traditional search (a self-hosted version of SearXNG) for 95% of my searches. I like LLMs for bouncing ideas around, but not for finding accurate results quickly.

thefz 5 days ago

Yes, LLMs are no match for my decades of search skills.

  • Koshcheiushko 5 days ago

    > decades of search skills.

    By decades, I assume at least 2, so a minimum of 20 years. I'm very interested to know about your experience.

    Would you please elaborate on how you filter, or specifically what techniques you use to get your desired results?

    Thanks.

    • DigitallyFidget 5 days ago

      As someone with decades of search engine experience, it's mainly knowing key words and exact phrases to use as well as excludes to filter out garbage. It's a bit hard to teach/explain in a single post, but if you search basted turkey and get results of "turkey baster", then quote the key phrase "basted turkey" -shopping -sale -price, try and remove results from shopping websites with excludes. Understand most search engines will drop most 1-3 letter words from your search. Like searching for 'fire in the house' will only look for results most relevant to the words fire and house, because 'in' and 'the' are just common everywhere. So if you want that exact phrase, then quote it. Searching used to be something you had to learn how to do.

    • thefz 4 days ago

      Mainly knowing what to search for, and a neural engine for bullshit filtering. The engine is me.

notepad0x90 5 days ago

For those same questions that you're probably asking ChatGPT, a Google search would show me Google's LLM answer at the top, maybe Reddit threads that would illuminate the topic a bit more for me, maybe Stack Overflow threads where 2-3 people show different approaches to the solution, and maybe some random forum somewhere with example code I could repurpose. Sure, ChatGPT will answer the question, but it won't have all the other noise that I can glean from and maybe come up with a better solution.

I would use the analogy of consuming a perfectly tasty and nutritious meal crafted by chef ChatGPT vs. visiting a few restaurants around your neighborhood and tasting different cuisines. Neither approach is wrong, but you get different things and value out of each approach. Do what you feel like doing!

Last week, there was a specific coding problem I needed help with. I asked ChatGPT, which gave me a great answer. Except I spent a few hours trying to figure out why the function ChatGPT was using wasn't being included, despite the #include directives all being correct. Neither ChatGPT nor Google were helpful. The solution was to just take a different approach in my code; if I had only googled, I wouldn't have spent that time chasing the wrong solution.

Also consider this: when you ask a question, there are a bunch of rude (well-meaning) people who ask you things like "what are you really trying to do?" and who criticize a bunch of unrelated things about your code/approach/question. A lot of the time that's just annoying, but sometimes it gives you really good insights into the problem domain.

quadsteel 5 days ago

It depends on the type of query. For anything that has to do with locality or recency, LLMs just don't _really_ work all that well, or even at all.

Someone at work yesterday asked me if I knew which bus lines would be active today due to the ongoing strike. Googled, got a result, shared back in under 10 seconds.

Out of curiosity I just checked with various LLMs through t3.chat, with all kinds of features; none had anything more than a vague "check with local news" to say. The last one I tried was Gemini with Deep Research and, what do you know, it actually found the information and it was correct!

It also took nearly 5 minutes...

Like I feel if your search is about _reality_ (what X product should I buy, is this restaurant good, when is A event in B city, recipes, etc.) then LLMs are severely lacking.

Too slow, almost always incomplete answers if not straight-up incorrect; deep research tends to work if you have 20 minutes to spare, both to get an initial answer and to manually go and vet the sources/look for more information in them.

maxehmookau 5 days ago

I fundamentally cannot trust a searching system that includes a disclaimer that it can make stuff up (hallucinate) and there's nothing you can do about it.

rossdavidh 5 days ago

I use ChatGPT only occasionally, mostly for laughs, but primarily use Google. It's not as good as it used to be, but it is still the best available. I think there is an opening for a new search engine company now (unlike 10 years ago, when Google was unbeatable), and I suppose LLMs might be a part of it. ChatGPT is not it, though.

Same with my wife (non-technical) and teenage daughter.

mooreds 5 days ago

I 100% use search engines, especially to find docs that I know exist. Google/DDG are so fast.

If it is more of an open ended question that I am not sure there'll be a page with an answer for, I am more likely to use ChatGPT/Claude.

deevus 5 days ago

The trend of using LLMs for everything feels like a "when all you have is a hammer, everything starts to look like a nail" situation.

People should do what makes them feel good, but I think we're all going to get a bit dumber if we rely too much on LLMs for our information.

I personally still use search engines daily when I know what it is that I am searching for. I am actually finding that I am reaching less for LLMs even though it is getting easier and cheaper (I pay for T3 Chat at $8USD p/m).

Where I find LLMs useful is when I am trying to unpack a concept or I can't remember the name of something. The results of these chats often lead to their own Google searches. Even after all this development, the best LLMs still hallucinate constantly. The best way that I've found to reduce hallucinations is to use better prompts. I have used https://promptcowboy.ai/ with some success for this.

agentultra 5 days ago

Yes. Why would I use AI to find information?

  • jillesvangurp 5 days ago

    Because AIs can be faster and more exhaustive than you'll ever be. They're really good at those needle-in-a-haystack searches that would take ages to do manually.

    You don't want AIs reproducing information necessarily. But they are really great at interpreting your query, digging out the best links and references using a search engine and then coming up with an answer complete with links that back that up.

    I'd suggest just giving perplexity a spin for a few days. Just go nuts with it; don't hold back. It's one of the better AI driven search tools I've seen.

DavidaGinter 5 days ago

I'm using ChatGPT or Perplexity as my defaults for any research/questions I have (open research). I do go to Google when I have a specific company whose details I want to quickly check (close research).

Manfred 5 days ago

I don’t use LLMs for factual information at all because it is likely biased or wrong.

mbirth 2 days ago

I’m someone that grew up with AltaVista and thus I’m pretty good with my search terms and modifiers. And I often remember specific phrases from the websites I’m looking for. However, Google is more and more optimised for people NOT knowing what they’re looking for and is now even ignoring “quotes” for exact terms unless you switch it to verbatim mode. Which is a shame.

I’m mostly using my personal SearXNG instance and am still finding what I’m looking for.

On systems where I don’t have access to that, I’m currently trying Mojeek and experiment with Marginalia. Both rather traditional search engines.

I’m not a big fan of using LLMs for this. I rather punch in 3-5 keywords instead of explaining to some LLM what I’m looking for.

legohead 4 days ago

Mostly GPT, but for World of Warcraft, GPT is absolutely horrible. It's like it has been corrupted by the 20 years of bad/incorrect user data, or maybe just the sheer amount of it in general.

As an example, someone typo'd an abbreviation, so I asked GPT and it gladly made up something for me. So I gave it a random abbreviation, and it did the same (using its knowledge of the game).

Even when I tell it the specific version I'm playing it gets so much wrong it's basically useless. Item stats, where mobs are located, how to do a certain quest - anything. So I'm back to using websites like wowhead and google.

JohnFen 5 days ago

Yes, I still use search engines. So do all but one of my friends, both technical and not. I have not found LLMs to be anything close to a good replacement for them.

promiseofbeans 5 days ago

I use Kagi search when I want to find something, and chatgpt free when I want a question answered.

II2II 5 days ago

Most of my searches still use traditional search engines for two reasons:

- If I am seeking technical information, I would rather get it from the original source. It is often possible to do that with a search. The output from an LLM is not going to be the original source. Even when dealing with secondary sources, it is typically easier to spot red flags in a secondary source than it is in the output of an LLM.

- I often perform image searches. I have no desire for generated images, though I'm not going to object to one if someone else "curated" the outputs of an AI model.

That said, I will use an LLM for things that aren't strictly factual, i.e. where I can judge if the output is good enough for my needs by simply reading it over.

maximilianburke 5 days ago

All the time. I don't like LLMs, and don't trust them. I tried to use copilot but ended up shutting it off because I spent more time trying to decipher and ignore its (wrong) suggestions than I did solving the problem.

tiffanyh 5 days ago

Yes.

Until LLMs stop responding with over confident “MBA talk” that sounds impressive but doesn’t really say much, I’ll continue to use search engines.

lqstuart 5 days ago

People started using search engines to ask stupid questions. An LLM like Gemini etc is hands-down better for that. A search engine is still better for actually searching. I do not need a 5000 word screed about a guacamole recipe.

ChrisArchitect 5 days ago

GPT is completely useless for most of my daily searches. Searching for specific content on a site? I can just put in site:domain.com keywords and get useful results without having to read useless overview paragraphs about the site in question.

Image searches without having to describe every minute detail of what I'm looking for?

Bah, even some searches that are basically looking for wikipedia/historical lookups....so much easier UI in Google Search than chatgpt's endless paragraphs with unclear sources etc.

For some things Google's AI results are helpful too, if not to just narrow down the results to certain sources.

There's no chat interface helping with any of this.

oldjim69 5 days ago

Why would I ever search on ChatGPT - that's not what it's for. It's for helping summarize things, writing copy, designing Excel spreadsheets. Making silly images.

Search is for finding specific websites and products. Totally different things.

coderjames 5 days ago

I use DDG multiple times a day, every day. I don't find ChatGPT to be a suitable substitute for helping me locate resources on the web; hallucinated links waste my time trying to get to useful information.

Hikikomori 5 days ago

Try Kagi, it's pretty good with filters for down/upranking sites. I usually don't use AI for search purposes very much, mostly just to avoid multiple pages of docs by asking it how to do things.

miki123211 4 days ago

Here's what I do:

1. Questions where I expect SEO crap, like cooking recipes, go to LLMs. I use the best available LLM for those to avoid hallucinations as much as possible, 2.5 Pro these days. With so much blogspam, LLMs are actually less likely to hallucinate at this point than the real internet IMO.

2. Questions whose answer I can immediately verify, like "how do I do x in language y", also go to an LLM. If the suggestion doesn't work, then I google. My stackoverflow usage has fallen to almost 0.

3. General overviews / "what is this algorithm called" / "is there a library that does x" go to LLMs, usually followed by Googling the solutions discussed.

4. When there's no answer to my exact question anywhere, or when I need a more detailed overview of a new library / language, I still read tutorials and reference docs.

5. Local / company stuff, things like "when is this place open and how do I call them" or "what is the refund policy of this store" are exclusively Google. Same for shopping (not an American, so LLM shopping comparisons aren't very useful to me). Sadly, online reviews are still a cesspool.

mikrl 5 days ago

Yep. I ask LLMs the XY questions since they don’t get annoyed, and when my question is very concrete and reduced to its essence, I ask the search engine and usually get a better answer than the LLM would give me.

Basically, there’s a lot of good and specific information on the web, but not necessarily combined in the way I want. LLMs can help break apart my specific combination at a high level but struggle with the human ability to get to solutions quickly.

Or maybe I just suck at asking questions haha

bythckr 3 days ago

I use search engines for two purposes, and I'm not sure if it's common practice.

Specific searches expecting one answer. This type of search is enhanced by ChatGPT. Google is losing here.

Wild goose chase / brainstorming. For this, I need a broad set of answers; I am looking for a radically different solution. Here, today's Google is inferior to the OG Google, for two reasons.

1. SEOs have screwed up the results. A famous culprit is Pinterest, along with many other irrelevant sites that fill the first couple of pages.

2. Self-censoring & shadow banning. Banning of torrent sites, politically motivated manipulation. Even though the topic I am searching for is not political, there is some issue with the results. I can see the difference when I try the same search in Bing or DuckDuckGo.

udev4096 4 days ago

Yes, why would I not? I, unlike you, do not intend to have shallow knowledge of the things I wanna know about. In a few years it's going to get worse, and no one will have deep expertise in anything (especially junior engineers) if they keep using LLMs. DDG is still far better than Google, although I have started to see more ads on DDG searches, which is quite annoying.

entropyneur 5 days ago

Do you have that friend who knows the answer to anything and who you thought was a genius until smartphones appeared and you started googling his answers? LLMs are that guy.

For programming stuff that can be immediately verified LLMs are good. They also cover many cases where search engines can't go (e.g. "what was that song where X did Y?"). But looking up facts? Not yet. Burned many times and not trying it again until I hear something changed fundamentally.

superkuh 5 days ago

I still use google scholar, right dao for deep search (tens of thousands of results), searx instances, and kagi for now but it's not worth the $10/mo for only ~200 results per search.

The serendipity of doing search with your own eyes and brain on page 34 of the results cannot be overstated. Web surfing is good and does things that curated results (i.e. Google's <400, Bing's <900, Kagi's <200, an LLM's very limited single result) cannot.

akaike 5 days ago

It’s tough to find anything useful these days because of all the spam, especially AI-generated content. If I do use a search engine, I usually use it to find something on Reddit.

  • Freak_NL 5 days ago

    For anything where practical skills are concerned (woodworking, metalworking, leatherworking, anodising stuff, etc.) I have to resort to searching on Youtube. There is, fortunately, a lot of information there in the form of tutorials and guides. Search engines are useless there. Most of the pages returned are indeed typical AI slop just there for the ad impressions.

    It's extremely disheartening. I have no trust in Youtube staying accessible as a font of public knowledge. It just works out that way now.

    Reddit seems hit or miss depending on the topic. Plenty of threads there where [deleted] asked a question and [disgruntled user] replied with something which has been replaced with random text by a fancy deletion tool.

godshatter 5 days ago

I generally use search.brave.com which has an integrative AI Assistant summary. Sometimes the summary does a nice job and other times I just skip it and go find a link that is from somewhere I recognize. If I want to know how to do something, I skip the summary. If I just want to know if something exists or is possible then the summary is sometimes enough. I have no real desire to replace my search engine usage with an LLM.

internet_points 4 days ago

Yes.

ddg is often faster for when I want to get to an actual web site and find up-to-date info, for "search as navigation".

llm's are often faster for finding answers to slightly vague questions (where you know you're going to have to burn at least as much climate on wading through blogspam and ads and videos-that-didn't-need-to-be-videos if you do a search).

ergonaught 5 days ago

When Google's results are garbage I will sometimes try ChatGPT or others. This is increasing, but that has more to do with Google producing ever-worsening results than any desire to use LLMs to "search".

Google wants to show me products to buy, which I'm almost never searching for, or they're "being super helpful" by removing/modifying my search terms, or they demonstrate that the decision makers simply don't care (or understand) what search is intended to accomplish for the user (ex: ever-present notices that there "aren't many results" for my search).

Recently I tried to find a singer and song title based on lyrics. Google wouldn't present either of those, despite being given the exact lyrics. ChatGPT gave me nonsense until I complained that it was giving me worse results than Google, at which point it gave me the correct singer but the wrong song, and then the correct song after I pointed out that it was wrong about that.

Still can't get Google to do it unless my search is for the singer's name and song title, which is a bit late to the party.

globnomulous 5 days ago

I never use ChatGPT for anything. I don't trust it for anything (nor should anybody), don't support the company that made it (unethically and on false pretenses as a nonprofit), and have absolutely no desire to contribute to its development.

When I need to search, I use a search engine and try to find a trustworthy source, assuming one is available.

nfriedly 5 days ago

It's a mix of both for me.

I use gemini more on my phone, where I feel like going through search results and reading is more effort, but I'll fall back to searching on duck duck go fairly often.

On a desktop I generally start at duck duck go, and if it's not there, then I don't bother with AI. (I use copilot in my editor, and it's usually helpful, but not really "search").

ASalazarMX 5 days ago

Ironically, it's not that LLMs have become super useful, it's that the dominant search engines have become significantly worse, while at the same time they peddle AI results. It almost feels as if it was better for them if you used LLMs.

I won't deny LLMs can be useful, but they're like the news: double-check and form your own conclusions.

DaSexiestAlive 4 days ago

Yes, ChatGPT has flaws (strange "hallucinations"?), but I've found the same with myself. Questions I get nowhere with on Google Search and friends (DuckDuckGo/Qwant/Bing/etc.) I give a last try with ChatGPT, and ChatGPT seems to fare considerably better.

Given the time I dedicate to researching things, I feel like I am "more productive" b/c I waste less time.

But I do my due diligence to double-check what ChatGPT suggests. So if I ask ChatGPT to recommend a list of books, I double-check with Goodreads and Amazon reviews/ratings. Like that. I guess it's like having a pair-research session with an AI librarian friend? I am not sure.

But I know that I am appreciative. Does anyone remember how bad chatbots were before the arrival of low-hanging-AI-fruits like generative AI? Intel remembers.

austin-cheney 5 days ago

I used an AI tool for the first time this weekend to get a military CAC to authenticate to websites through Firefox on Arch. It took more than half a dozen uses of the AI tool to get what I was looking for though. Super edge case and even the AI struggled like a human.

Yes, I still use search engines and almost always find what I need in long form if I can’t figure it out on my own.

simonbw 5 days ago

I use Kagi to search, but I usually use it with a "?" at the end, which triggers an LLM response in addition to search results. It gives me the answer I want like 95% of the time, and I don't feel the need to dig into the search results. For me this tends to be way better than just searching or just using ChatGPT.

axelthegerman 4 days ago

I can only imagine how much slower using an LLM would be, especially when it only gives you a single answer which is not what you're looking for and you have to keep asking for "something else"

I echo what others say, Kagi is a joy to use and feels just like Google used to be - useful

okayokayokay123 5 days ago

Switched over to DuckDuckGo a month ago. Results aren’t always great but it works 90% of the time.

I use perplexity pro + Claude a lot as well. Maybe too much but mostly for coding and conversations about technical topics.

It really depends on intent.

I have noticed that I’ve started reading a lot more. Lots of technical books on the iPad based on what I’m interested in at the moment.

cosmic_cheese 5 days ago

LLMs have taken up a significant share of my technical/programming questions, because there’s a pretty good chance it’ll give me a correct or mostly correct answer and if it doesn’t, the results aren’t catastrophic. I don’t trust them for much else though, and so I still use a search engine (Kagi) for most other things. For odd exceptions, I ask the LLM to cite its answers and in the event that it can’t do that or provides false citations, I fall back on search engines.

These tools are useful, but in my view the level of trust that seems to be commonly placed in them far exceeds their capabilities. They’re not capable of distinguishing confidently worded but woefully incorrect reddit posts from well-verified authoritative pages, which, combined with their inclination for hallucinations and overeagerness to please the user, makes them dangerous in an insidious way.

wolrah 5 days ago

This thread is yet another thing that makes me fear for the future of humanity.

No, I don't use the hallucination machines to search, and I never will.

I use search engines to search. I use the "make shit up" machine when I want shit made up. Modern voice models are great for IVR menus and other similar tasks. Image generation models have entirely taken over from clipart when I want a meaningless image to represent an idea. LLMs are even fun to make up bogus news articles, boilerplate text to fill a template, etc. They're not search engines though and they can't replace search engines.

If I want to find real information I use a search engine to find primary sources containing the keywords I'm looking for, or well referenced secondary sources like Wikipedia which can lead me to primary sources.

  • AbraKdabra 5 days ago

    Wow u mad bro, chill, OP just asked a simple question.

gwbas1c 5 days ago

I have Gemini results in my Google searches. They're "good enough" that I rarely venture to LLMs.

When I do, it's because either I can't think of good terms to use, and the LLM helps me figure out what I'm looking for, or I want to keep asking follow-up questions.

Even then, I probably use an LLM every other week at most.

mitthrowaway2 5 days ago

Often I remember having read an article or seen a website in ~2014 or something, and now I want to find a link to it so I can cite it. I use a search engine for this: I type in the gist of what I can remember, set a date range (more clicks than it should take), and that's how I get to it.

This can be very difficult, if there's a lot of semantic overlap with a more commonly-searched mainstream topic, or if the date-range-filtering is unreliable.

Sometimes I'll look for a recipe for banana bread or something, and searching "banana bread recipe" will get me to something acceptable. Then I just have to scroll down through 10 paragraphs of SEO exposition about how much everyone loves homemade banana bread.

Searching for suppliers for products that I want to buy is, ironically, extremely difficult.

I don't trust LLMs for any kind of factual information retrieval yet.

add-sub-mul-div 5 days ago

The idea of taking an answer from any black box is profoundly unacceptable. Even if the black box didn't hallucinate. Why wouldn't I prefer to follow a link to a site so that I can evaluate its trustworthiness as a source?

Why would I want to have a conversation in a medium of ambiguity when I could quickly type in a few keywords instead? If we'd invented the former first, we'd build statues of whoever invented the latter.

Why would I want to use a search service that strips privacy by forcing me to be logged in and is following the Netflix model of giving away a service cheap now to get you to rely on it so much that you'll have no choice but to keep paying for it later when it's expensive and enshittified?

0xbadcafebee 4 days ago

I try to use Google. If I put my search question into the Android Firefox url bar and hit enter, Google will show up with some useful answers (if it's not in the AI answer, Google is useless, because there are 5 pages of bullshit before it begins to show me actual web page search results).

But if I then click the Google search text box at the top, and start typing, it takes 20 seconds for my text to start appearing (the screen is clearly lagged by whatever Google is doing in the background), and then somehow it starts getting jumbled. Google is the only web page this happens to.

I actually like their results, they just don't want me to see their results. Weird business model.

Shorel 4 days ago

I still use search engines, and not ChatGPT or any LLM as my primary behaviour.

Of course, I have used Phind and other LLMs, and the results sometimes are useful, but in general the information they give back feels like a summary written for the “Explain Like I'm Five” crowd; it just gives me more questions than answers, and frustrates me more than it helps me.

Where LLMs excel is when I don't know the exact search term to use for some particular concept. I ask the LLM about something, it answers with the right terms I can use in a search engine to find what I want, then I use these terms instead of my own words, and what I want is in the search results, in the first page.

alkonaut 4 days ago

ChatGPT takes 5-10 seconds to respond. Until it's as fast as Google, I'm not switching.

The question is: are you searching for an answer to something, or are you searching for a site/article/journal/whatever in order to consume the actual content? If you are searching for a page/article/journal in order to find an answer, then the journal/article itself was just a detour, provided the LLM could give you the answer and you could trust it. But if you were looking for the page/article itself, not some piece of information IN the article, then ChatGPT can (at best) give you the same URL Google did, but 100x slower.

nkrisc 5 days ago

Exclusively. I don't want to think of a question to ask, or think about phrasing some prompt so I get a useful result. I just want to throw a few related words and terms into a search box, see where that gets me, and then use the results to refine my search terms further.

  • nh23423fefe 5 days ago

    Those don't seem different to me. it seems like youve internalized the query syntax of search engines and youre fine with erroneous results.

    • nkrisc 5 days ago

      > it seems like youve internalized the query syntax of search engines

      If by syntax you mean "words related to what I want to find" then I suppose.

      > youre fine with erroneous results.

      Because AI is famously correct all the time. And there are no "erroneous" results in my method, only ones not closely enough related to what I wanted, so that's feedback my search terms should be refined.

      Any answer I get from AI I’m going to have to verify anyway so I just skip the AI step.

      • nh23423fefe 5 days ago

        no by query syntax i mean what people have called google fu.

        yeah seems the same to me. type some stuff, read the response, evaluate, loop or break. cool you hate ai and dont want to learn a new pattern. but you say i dont want to think which doesnt make sense. you are thinking just as hard.

        • nkrisc 4 days ago

          > what people have called google fu

          I don’t know what that means in any specific sense.

          I don’t want to think about how to interact with the search mechanism. It’s the same problem with using voice assistants like Siri - I have to think about how to construct my query so that it will be parsed correctly.

          With most search engines I can just type in disparate terms that might be related. The order doesn’t matter, the phrasing doesn’t matter, I don’t need to give it instructions on how to respond, etc.

sReinwald 5 days ago

It depends on what I'm after. I still use regular searches quite a bit.

But a lot of my classic ADHD "let's dive into this rabbit hole" google sessions have definitely been replaced by AI deep searches like Perplexity. Instead of me going down a rabbit hole personally for all the random stuff that comes across my mind, I'll just let perplexity handle it and I come back a few minutes later and read whatever it came up with.

And sometimes, I don't even read that, and that's also fine. Just being able to hand that "task" off to an AI to handle it for me is very liberating in a way. I still get derailed a bit of course, but instead of losing half an hour, it's just a few seconds of typing out my question, and then getting back to what I've been doing.

mooiedingen 5 days ago

Absolutely, as the great Fravia+ (RIP :() once said, it is to your advantage to know where and how to find possible solutions for your problems. And I am even willing to go so far as to say:

The more you trust the models, the less cognitive load you spend on checking and verifying, which will lead to what people call AI but which is actually nothing more than a for loop over data loaded in memory. Anyone who still thinks that "for message in messages..." can represent any sort of intelligence has already been brainwashed by a new iteration of the "one-armed bandit", where you click regenerate indefinitely with a random seed while being distracted from what is going on around you.
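
To make that concrete, the mechanics amount to something like the following minimal Python sketch (the generate function and message format here are made-up stand-ins, not any particular vendor's API): a chat "session" is just iteration over an in-memory list of messages that gets re-sent on every turn.

    # Minimal sketch of the "for message in messages" mechanics described above.
    # generate() is a made-up placeholder, not a real model or vendor API.

    def generate(prompt, seed):
        # A real implementation would call a model here; this just echoes context size.
        return f"(model output for {len(prompt)} chars of context, seed={seed})"

    def chat_turn(messages, user_input, seed):
        messages.append({"role": "user", "content": user_input})
        # The whole history is flattened and re-sent every turn: no memory,
        # just a loop over data held in memory.
        prompt = ""
        for message in messages:
            prompt += f"{message['role']}: {message['content']}\n"
        reply = generate(prompt, seed)
        messages.append({"role": "assistant", "content": reply})
        return reply

    history = []
    print(chat_turn(history, "Is this intelligence?", seed=42))
    print(chat_turn(history, "Regenerate.", seed=43))  # new seed, another pull of the bandit's arm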

kryptiskt 4 days ago

No, I judge ChatGPT by the same standards as I judge humans. It's an inveterate liar, and I much prefer to deal with trustworthy sources of information even if they are slightly less convenient than that smarmy bullshitter.

runarberg 5 days ago

How do you know the information it generates is correct?

Just now for example I wanted to know how Emma Goldman was deported despite being a US citizen. Or whether she was a citizen to begin with. If an LLM gave me an answer I for sure would not trust it to be factual.

My search was simple: Emma Goldman citizenship. I got a Wikipedia article claiming it was argued that her citizenship was considered void after her ex-husband’s citizenship was revoked. Now I needed to confirm that from a different source and also find out why her ex’s citizenship was revoked. So I searched his name + citizenship and got a New Yorker article claiming it was revoked because of some falsified papers. Done.

If an LLM told me that, I simply wouldn’t trust it and would need to search for it anyway.

n_ary 5 days ago

I still use a search engine. LLMs are great but are often outdated by 6-12 months. Usually I search for random coding topics, and asking an LLM will often reproduce the outdated top (most voted) answer verbatim. However, there are some nice folks at SO/SE who come back later and update the answers or submit a new one, but LLMs often don’t return these; they continue producing various spin-offs of the top answer by modifying variable names or adding/removing comments.

Hence, search still remains my hope until SO and the likes decay.

Additionally, many search engines now already generate quick summaries or result snippets without a lot of prompt-fu, hence my LLM-to-search ratio has actually become about 40:60 day to day.

wodenokoto 5 days ago

Constantly use search. Using chatgpt exclusively is like those kids that only use tiktok

ruszki 5 days ago

LLMs are unreliable transformers of information that is already quite unreliable. So yes, I use Kagi. On average, using a search engine takes less time to achieve the same reliability (of course, perfect reliability is impossible). At least for me, for sure.

wenbin 5 days ago

Yes, if I need (relatively) accurate answers (with the sources / urls / web pages), I'd use keyword search on Google.

Still have a trust issue with LLMs/ChatGPT for facts. Maybe in a couple of years my mindset will shift and I'll trust LLMs/ChatGPT more.

ttctciyf 5 days ago

If I was habitually asking some llm for nuggets of information I'd have to use web search to verify it in any case.

But in fact I overwhelmingly use search over llm because it's an order of magnitude quicker (I also have google search's ai bobbins turned off by auto-using "web" instead of "all".)

I've used llm "for real" about 3 times in the last two months, twice to get a grounding in an area where I lacked any knowledge, so I could make better informed web searches, and once in a (failed) attempt to locate a piece of music where web search was unsuccessful.

elseleigh 5 days ago

I switched from Google to StartPage twelve years ago and have seen no need to change. I have trialed Kagi, and would move there if Startpage became unreliable. I've not used any LLM as a search engine alternative, and I have no plans to do so.

hnlurker22 4 days ago

Search engines are up to date. I can search for something that happened today. LLMs are several years behind. Until that's fixed, I think we'll still be using the once-very-useful search engines.

Kostic 5 days ago

I stopped actively using search engines 18 months ago. My first stop is an LLM. Once I understand what I actually need, I do a web search to go to the product/tool website. I do this not because LLMs are that good but because web search result quality went way down in the same period.

One interesting trend that I like is that I have started using local LLMs way more in the last couple of months. They are good enough that I was able to cancel my personal ChatGPT subscription. I'm still using ChatGPT on the work machines since the company is paying for it.

bflesch 5 days ago

Kagi is a very good alternative to google. When you're actually doing some research and have an exhaustive look at search results, kagi provides much more detailed results than google.

I'd rank kagi > chatgpt > google any day.

jacobgkau 5 days ago

I still primarily use search engines like Brave Search, DuckDuckGo, Bing, and Google (in that order). I've started sometimes bothering to read the search engines' AI overviews instead of skipping them, although I almost always still click through to their sources for any particular statement.

I just tried ChatGPT and saw that you can ask it to search the web and also can see its sources now. I still remembered how it was last time I used it, where it specifically refused to link out to external sources (looks like they changed it around last November). That's a pretty good improvement for using it as search.

lovehashbrowns 5 days ago

Search has gotten so bad I have replaced like 80% of it with LLMs, typically Claude or Gemini. I've also switched my searches over to duckduckgo whenever I do end up searching for something but even that is on the bad side.

johnny_canuck 5 days ago

When searching something non-programming related, I do. For example, I'm building an addition on our home. Searching for building materials, ideas, and any building science questions I have, I often find LLMs lacking. Even then, maybe 40% of the time Gemini gives me a good enough response.

On the flip side, any time I'm searching for something programming-related (FE, JavaScript in my case), search is a last resort for when an LLM is not giving me the answer I'm looking for.

This is still shocking to me, I really never thought I would replace my reliance on Google with something new.

renegat0x0 5 days ago

I use a mixture of solutions for web browsing:

- I use RSS to see 'what's new' and to search it. My RSS client supports search

- I maintain a list of domains, so when I want to find a particular place I check my list of domains (I can search by domain title, description, etc.). I have 1 million domains [0]; a rough sketch of such a lookup is below

- If I want more precise information I try to google it

- I also may ask chatgpt

So in fact I am not using one tool to find information. I use many tools, often narrowing it down to the tools that are most likely to have the answer.

[0] https://github.com/rumca-js/Internet-Places-Database
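
For illustration, searching such a list might look roughly like the Python sketch below (the JSON schema here is an assumption; the actual format of the linked Internet-Places-Database may differ):

    # Made-up sketch of keyword search over a locally maintained domain list.
    # Assumes a JSON file of entries like {"domain": ..., "title": ..., "description": ...};
    # the database linked above may use a different format.
    import json
    import sys

    def search_domains(path, query):
        with open(path, encoding="utf-8") as f:
            entries = json.load(f)
        terms = query.lower().split()
        for entry in entries:
            haystack = " ".join(
                str(entry.get(field, "")) for field in ("domain", "title", "description")
            ).lower()
            if all(term in haystack for term in terms):
                yield entry

    if __name__ == "__main__":
        # Example: python search_domains.py domains.json "self hosted rss"
        for hit in search_domains(sys.argv[1], " ".join(sys.argv[2:])):
            print(hit.get("domain"), "-", hit.get("title"))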

laweijfmvo 5 days ago

I just very recently tried using ChatGPT for a situation where I’d typically search, draw some conclusions, search again, etc. Basically, planning an open-ended vacation.

The biggest issue is that when GPT returns something that doesn’t match your knowledge, experience, or intuition and you ask the “are you sure?” question, it seems to inevitably come back with “you’re right!”. But then why/how did it get it wrong the first time? Which answer is actually true? So I go back to search (Kagi).

So for me, LLMs are about helping to process and collate large bodies of information, but they're not final answers on their own.

sans_souse 5 days ago

I will use search as part of any research-based learning so long as it remains a functional option. So long as there's some chance of the AI giving incorrect data where precision is needed to learn the fact, I will remain leery and manually search those specific areas later on. At the very least.

Operator words still work in google, albeit less well than in the past - they still do the job.

I see the AI as being there to do the major leg work. But the devil's in the details and we can't simply take their word that something is fact without scrutinizing the data.

JamesAdir 5 days ago

I've recently tinkered with creating an automated install script for a home server. It was good practice for me, and I want to set up a small home server with pihole, sonarr and so on. I created it with Claude and ChatGPT and both performed poorly: huge chunks of code that might run, but create much more mess than they should. Only after going and reading the documentation the old-fashioned way, and with the help of search, was I able to reduce the size of the script and solve many problems with it.

AndrewDucker 5 days ago

Yup. I want to see a variety of sources, evaluate them, and understand the answer.

d4mi3n 5 days ago

I think this is mainly a symptom of poor alignment between search engines and their customers. ChatGPT works well now, but the plan there seems to be to monetize search-like queries. I fear it won't be long until chat AI agents using this model bring us back to the same frustrations we have today with Google et al.

There is some room for optimism, though. There's been a rise in smaller search engines with different funding models that are more aligned with user needs. Kagi is the only one that comes to mind (I use it), but I'm sure there are others.

j-bos 5 days ago

I find LLMs good for general knowledge, and clever rubber ducking, but if I have a very specific niche issue in some sort of language or framework, generally search is a better bet to find a stack overflow post of people going through the exact same thing I went through. And even if they don't have solutions, they'll usually link you to more valuable and specific references that you can use.

Though lately for more in-depth research I've been enjoying working with the LLM to have it do the searching for me and provide me links back to the sources.

yieldcrv 5 days ago

ChatGPT was 12 months ago for me

I use Claude pretty exclusively, and GPT as a backup, because GPT errors too much, tries to train on you too much, and has a lackluster search feature. The web UIs are not these companies’ priority, as they focus more on other offerings and API behavior. Which means any gripe will not be addressed and you have to just go for the differentiating UX.

For a second opinion from Claude, I use ChatGPT and Google pretty much the same amount. Raw google searches are just my glorified reddit search engine.

I also use offline LLMs a lot. But my reliance on multimodal behavior brings me back to cloud offerings.

Adachi91 5 days ago

Agreed, I also use ChatGPT now mostly for searches, because it will pick the best sources that are not content farms, so I don't have to look through garbage to get a result. I started doing this about a year ago and was like "Oh, wow. This could disrupt search engines," and I refer to it as "Super Google" now. I always have it link me to the source of what I'm looking for so I don't worry about it hallucinating; for common information I'm looking for reputable sources but don't know which `X` source has them for `Y` info.

holografix 5 days ago

Everyone still uses search for news, fact checking, and as a natural-language DNS (e.g. "website for golang"). The two worlds are en route to a merger and Google will most likely come out ahead.

That’s if they can swing the immense ads machine (and by that I mean the ads organisation not the tech) and point it at a new world and a different GTM strategy.

They still haven’t figured out how to properly incentivise content producers. A lazy way would be to display ads that the source websites would display alongside the summary or llm generated response and pass on any CPM to the source.

sfblah 5 days ago

I've been tracking this (literally on paper). I've moved around 75% of my queries to LLMs from search engines. The main reason I still use search engines is queries on mobile, where the devices (and the omnibar) still make it much easier to search with a traditional search engine.

Keep in mind that I'm not counting, in my 75%, queries where I get my answer from Google Gemini. I'm just guessing that if you added that in, it would rise to 85-90%.

My thought is if browsers and phones started pushing queries over to an LLM, search (and search revenue) would virtually disappear.

lo_fye 4 days ago

I was primarily using ChatGPT & Perplexity. Then I started calling B.S. on them, and more often than not their replies were "You're right! Sorry about that!" Simply saying "that's wrong" reveals a terrifying amount of "hallucination" by AI. Far, far more answers stated confidently as fact turned out to be completely made up.

If I want to play with ideas, I chat with AI. If I need facts, I use search.

d1an 5 days ago

My native language is Chinese. Most of my colleagues use www.baidu.com as their search engine, but I do not like Baidu because the search results are full of ads. I also use ChatGPT or DeepSeek, but in my work (Linux kernel driver porting) the AI is not good enough; I cannot trust it completely. So in some cases I still use Google to search for answers with keywords in English. If you want to know why I use English keywords: because of CSDN, a site that has been polluting the internet for far too long.

booleandilemma 5 days ago

Yes, because I still can't trust the output from LLMs, at all, really.

Saris 5 days ago

ChatGPT is frequently wrong with its answers, so yes search and forums and websites are still the best option.

For example I asked it about rear springs for a 3rd gen 4runner and it recommended springs for a 5th gen.

GuinansEyebrows 5 days ago

Yes, of course, because i find it way more informative to search in broad terms, digest varying sources and arrive at conclusions rather than by asking a question that invariably lacks enough context (that I’d find by reading docs or SO posts) to actually produce helpful results, let alone a deeper understanding of the topic than before i started.

Learning is fun! Reading is good for you! Being spoon fed likely-inaccurate/incomplete info or unmaintainable code is not why i got into computers.

locallost 5 days ago

It's become kind of tempting to use chatgpt for that because you don't have to search yourself for that one post somewhere that describes what you're looking for. But I found little use for anything critical because it's just wrong way too often. Recently it gave me the info on how to use an API which turned out to be deprecated since two years. But for setting up my parents iPad where I was looking for a setting I couldn't find, it's fine.

runjake 5 days ago

I'll use Claude about 75% of the time, and then a search engine about 25% of the time. That 25% of the time, I'm usually looking for:

- Specific documentation

- Datasets

- Shopping items

- Product reviews

But for the search engines I use, their branded LLM response takes up half of the first page. So that 25% figure may actually be a lot smaller.

It's important to note that these search engine LLM responses are often ludicrously incorrect -- at least, in my experience. So now I'm in this weird phase where I visit Google and debate whether I need to enter search terms or some prompt engineering in the search box.

jerejacobson 5 days ago

I recently had someone reach out and tell me they liked a Chrome extension that I built. They found it by asking ChatGPT if there was a way to do X on Y, and it recommended my extension to them.

I was very surprised to hear this, and it made me wonder how much of traditional SEO will be bypassed through LLM search results. How do you leverage trying to get ranked by an LLM? Do you just provide real value? Or do you get featured on a platform like Chrome Extensions Store to improve your chances? I don't know, but it is fun to think about.

satisfice 4 days ago

I don’t trust LLMs for search and neither should anyone. I speak as a professional tester. They are essentially untested. It should offend us all that OpenAI puts such unreliable software out there.

Unlike Google or DuckDuckGo, which serve up links that we can instantly judge as relevant to us, LLMs spin stories that sound pretty good but may be, and often are, insidiously wrong. It’s too much effort to fact-check them, so people don’t.

grishka 5 days ago

Of course I do and always will. Not once did I consider using an LLM for anything even remotely serious. Generative AI, in any form, is a toy, a novelty. It's fun to play with and make fun of sometimes, but that's about it. And honestly, I'm tired as hell of generative AI being this new hotness that everyone must stick somewhere somehow. In their products, in their workflows, in their lives. I'm so looking forward to this fad passing.

nsluss 5 days ago

The category of search I've stopped doing is the one where I'd append "reddit" to the end. The models are going to do a great job of distilling a wide range of opinion into something super digestible, better than the old flow of looking for a bunch of threads on the subject from expert-amateurs and having to read them all myself.

For the people who say they've reduced their search engine use by some large percentage, do you never need to find a particular document on the web or look for reference material?

mancerayder 5 days ago

I've become very, very bad at Google searches. Nothing seems accurate anymore (or, I should say, precise); I'm hitting vendor/official/party-line stuff, or wordy blogs that say nothing.

I use ChatGPT at home constantly: for history questions, symptoms of an illness, identifying a plant while hiking, remembering a complex term or idea I can't articulate, tips for games, and the list goes on.

At work it's Copilot.

I've come to loathe and mock Google search and I can't be the only one.

jonathanstrange 4 days ago

I don't use ChatGPT or any other AI. If I search for something, I search for authoritative documents on the topic. That is, official docs, articles, books. Asking ChatGPT would be like asking a random person who provides an opinion without any guarantees. It's potentially useful information but needs to be verified. So I need to search again to verify the information and get to an authoritative source.

nullbio 4 days ago

Only a matter of time before OpenAI starts selling advertising services and weighting specific websites and services higher than others in their response generation for money. We should really outlaw this before it becomes a problem, and before it becomes a thing, because once it happens there is no turning back. Unlikely that anyone with the power to do this will actually have the foresight to do it though.

noer 5 days ago

It depends on what I'm looking for. If I have a specific thing that I'm just looking for an answer on, then I typically will use ChatGPT. Most of my google searches are either navigational, things i know Google will return more quickly than ChatGPT ("how old is this actor", "when was xx player drafted") or when I'm interested in browsing results (looking for recipes for borscht, I want to see a few different recipes).

llm_nerd 5 days ago

I almost always use search-augmented LLMs now, with infinitely better results. Whether I'm wondering about a movie or looking for information on a programming language feature, or even specifics about niche things, an LLM gets me there much quicker.

Earlier today I was trying to remember the name of the lizard someone tweeted about seeing in a variety store. Google search yielded nothing. Gemini immediately gave me precise details of what I was talking about and linked to web resources about it.

Nevis1 4 days ago

Google has become useless as a search engine. Most of the results are from Reddit, Quora, or YouTube. I'm not going to watch a 5 minute video to get an answer. The information from Reddit/Quora is hardly written by experts. I now do a "verbatim" search with -reddit -quora -youtube. I do use ChatGPT to do a search for websites that have information about what I am looking for.

degrees57 5 days ago

I switched my default search engine in my browser to perplexity.ai a few months ago and am super happy with it. The only time I use Google anymore is to specifically visit www.google.com and put site:example.com in the search field, when I know the results I am looking for are only found within that site. I've only had to do that five or six times in the last few months.

And yes, just plain old Google search is completely lackluster in comparison to the perplexity.ai search I get to do today.

  • jillesvangurp 5 days ago

    Perplexity is nice indeed. I actually pointed my parents at this over the summer. They are Dutch and not really fluent in English. But you can talk to it in Dutch and it's really good. To my surprise, especially my father really seems to like it. He had a stroke a few years ago and he was never that tech savvy. But this works really well for him.

    Google is trying to be too clever and failing at that at the same time. I use it for some searches when I roughly know what I'm looking at. The longer the query the more likely it is I'll be using perplexity.

  • sflanker 5 days ago

    Same. I find perplexity to be much better for researching technical topics than Google or other classical search engines. I have only had occasional accuracy issues (like it suggesting something from a feature request that hasn’t been implemented rather than official documentation), but the reference links makes it easy to verify.

hemc4 4 days ago

I do search from the OpenAI app with the search option turned on. It has better filtering than Google, which shows all the irrelevant and low-quality links, like fake news, as well. The quality of the search matters. Google does not have a way for me to mark which websites I don't want in my search results, so we need to rely on a more authentic source to filter all the chaos on the internet.

shmerl 5 days ago

Yes. But I've noticed a trend of people asking stuff like "why doesn't this work" in various community forums, which ends up being them sourcing the method from the likes of ChatGPT/Gemini, etc. Lesson: don't do that, especially when you are going to waste others' time on explaining why things didn't work. Search things properly. Read documentation. Even if you use AI, never trust its results.

linacica 5 days ago

Depends on the content. Sometimes I use GPT to find stuff I'm too lazy to dig for and where I know Google would more likely waste my time, but generally I still use Google; there are a lot of miscellaneous searches where an LLM would do worse than a search engine (currency exchange rates, stock prices, quick facts, etc.). Though I wish Google had an option to block some sites from showing up; some searches are just filled with garbage, and I would like to block the whole domain from ever showing up.

  • JKCalhoun 5 days ago

    Actually "quick facts", as I define them, are much better with an LLM for me. I prefer not having to wade through pages of links to sites trying to sell me something.

DigitallyFidget 5 days ago

I mostly use a variety of search engines that get me an answer much faster. Google/Bing frequently point to sites/articles written by AI anyway. Using an LLM directly often gives too much garbage and doesn't often stick to just answering my question, so it becomes as useless as a modern Google search. I prefer old-style searching of just using keywords and refining my query, as opposed to having it (mis)interpreted.

CivBase 5 days ago

Yes. I even pay for Kagi. I very rarely feel the need to ask a chatbot for anything and every time I have I've been disappointed in the results. I'm surprised so many people find them useful.

These are the things I usually search for:

* lazy spell check

* links to resources/services

* human-made content (e.g. reviews, suggestions, communities)

Genuinely curious - those who use chatbots regularly in lieu of search, what kinds of things are you prompting it for?

successful23 5 days ago

Yeah, I still use search engines.

LLMs are amazing for technical research or getting a quick overview and a clear explanation without clicking through ten links. But for everyday searches — checking restaurant hours, finding recent news, digging into niche forums, or comparing products — search engines are still way better.

I don’t think it’s a matter of one replacing the other — they serve different purposes.

klauserc 5 days ago

Yes, all the time. For reference-level information, I don't trust AI summaries. If I need to know facts, I cannot have even the possibility of a lying auto-complete machine between me and the facts.

Exploratory/introductory/surface-level queries are the ones that get handed to auto-complete.

I like how Kagi lets me control whether AI should be involved by adding or omitting a question mark from my search query. Best of both worlds.

intellectronica 5 days ago

Only if I know exactly what I'm looking for but don't want to type the URL or don't remember it. I never actually search for information with a search engine anymore.

The only advantage Google and other traditional search engines have over AIs is that they're very fast. If I know for certain I can get what I want in under 1s I might as well use Google. For everything else, Perplexity or ChatGPT is going to be faster.

asaddhamani 5 days ago

Yes, but increasingly rarely.

I mostly use Perplexity for search, sometimes ChatGPT. Only when I am looking for something _very_ specific do I use a traditional search engine.

Dropping usage of search engines, compounded by lack of support, led to me cancelling my Kagi subscription, and now I just stick with Google on the very rare occasions that I use a search engine at all. For a dozen or so searches a month, it wasn't worth it to keep paying for Kagi.

firecall 5 days ago

Yes, I use Google Search.

But I appreciate and read the Google Gemini AI generated response at the top of the page.

Also, I'm an iPhone user. But I have a Google Pixel phone for dev work.

I find myself now using 'Hey Google' a lot more because of the Gemini responses.

It's particularly fun playing with it with the kids on road trips as we ask it weird questions, and get it to reply in olde english, or speak backwards in French and so on!

InfiniteLoup 5 days ago

Almost all of my “searches” are now done by either ChatGPT or Claude.

I'm still using Google for searches on Reddit these days because Reddit's own search engine is terrible.

t1234s 5 days ago

There was an interview with Eric Schmidt of Google on PBS around 2006 where he describes having multiple results for a search query as a "bug", arguing there should be only one answer. It's interesting how OpenAI was first to market with ChatGPT, beating out Google in this space. It's also interesting how the current Google CEO was not asked to step down after the flawed initial launch of Gemini.

forgetfreeman 5 days ago

I use search exclusively. I've tried ChatGPT for a number of tasks on various subjects and found its responses to be shallow to the point of uselessness and frequently riddled with errors. It's fun to get it to do dumb shit like rewrite my resume in the style of Warhammer 40k, but for anything serious it's proven to be largely useless.

ChrisArchitect 5 days ago

Lacklustre doesn't mean ineffective. There was/is hardly any reason to switch other than 'new fangled thing' and in some specific circumstances (with a myriad of google switches and tricks that power users here know and use to squeeze out more useful results). And like a billion ppl are happily using google and getting decent results regularly.

We're in a bubble here.

Zak 5 days ago

Yes. It's still pretty common that I want the official website for a thing, or product reviews written by humans who actually know what they're talking about rather than a single answer to a question. DuckDuckGo is better at delivering it than LLMs are. I also don't want a hallucination, so I appreciate Perplexity's easily-checked citations when I do use an LLM.

PStamatiou 5 days ago

I've been feeling the same - I use Perplexity mostly and then ChatGPT most of the time (sometimes Grok if I know it's more likely to be based on X info). I wrote about some of the side effects of this new behavior https://paulstamatiou.com/browse-no-more

worik 5 days ago

My use of Stackoverflow is the main casualty of LLMs

I used to use DDG for syntax problems (so many programming languages....) and it usually sent me to SO.

Now I use DeepSeek. Much friendlier, I can ask it stupid questions without getting shut down by the wankers on SO. Very good

I still use DDG to interface with current events and/or history. For history, DDG is primarily, though not only, an interface to Wikipedia.

CraigRood 5 days ago

ChatGPT et al are quite neat, but the interfaces are not great at all. For example, I'm going on a trip to Paris in a couple of months. I can ask an LLM what to do, but it will just give me a list. There are no visuals, click-throughs, maps, tips, experiences. Blending 'AI' with search and a 'reader' could create a much better experience.

ggm 5 days ago

I continue to be given AI responses which contradict their primary sources. Ask for UK-specific web results and get given an American entity, which says so in the body text. It is clearly not British; it does not even misrepresent itself as non-American.

Until the false results rate drops, it can't be trusted.

dbmnt 4 days ago

I'm using Kagi as my default, falling back to Google (typically via '!g' in Kagi) for some technical queries. I use ChatGPT 4o several times a day, but typically not as a 1:1 replacement for web search.

mohi-kalantari 5 days ago

I'm also using chatgpt with its search enabled or perplexity for searching almost anything. Way more accurate and to the point.

I feel like Google search will become obsolete in a short time, and they'll have to make big changes to their UX and search engine.

Although I guess most of its user base is still relying on the old ways, so changing it right now would have huge impacts on older users.

  • justonceokay 5 days ago

    Interested in what your benchmark for accuracy is. I feel like for my searches that I am normally looking at a few different sources and cross referencing them to come to a conclusion about what is best for me. Do you find that AI is good at automatically figuring out what is best for you?

    For instance, I wanted help cooking Coq au vin yesterday. I’ve cooked it before but I couldn’t remember what temperature to set the oven to. I read about five recipes (which were all wildly different) and chose the one that best suited the ingredients and quantities I was already using.

    I asked chat gpt for a coq au vin recipe, and I’ll just say I won’t be opening a restaurant using ChatGPT as my sous chef anytime soon.

    • mohi-kalantari 4 days ago

      Honestly, I haven't really thought about how much better my searches have gotten but one thing for sure is that now it's way faster.

      I can only really validate the generated response when it's code. Usually on other stuff, I trust and read the response which is not good I guess.

      Hope you were satisfied with the food at the end :)

AdieuToLogic 5 days ago

> ... I exclusively use ChatGPT for any kind of search or question.

This constrains the search space to whatever training data set was used for the LLM. A commercial search engine includes resources outside this data set.

Using a search engine for responses to natural language questions is of dubious value as that is not their intended purpose.

xbmcuser 5 days ago

Yes, I still use a search engine, specifically Google, because of 1) habit and 2) its AI search results with links to the actual content. Without the AI results answering my questions I probably would not have stayed with Google. Google also seems to mitigate hallucinations by only showing content with links.

HellDunkel 5 days ago

I still use Google. However, the most annoying thing I find is the AI-generated response. I use Google as a translator by adding „dict“ to the search. I also use the search function on old-fashioned forums. It works. Yesterday I tried ChatGPT on the math homework of one of my kids. The result was just crazy wrong. Complete garbage.

  • kiddico 5 days ago

    I've always wondered if others do that... I add "def" to the end of mine for definitions.

    • HellDunkel 5 days ago

      It almost feels like an old people kind of thing to do but i actually enjoy the stubbornness of age.

ActVen 5 days ago

Way less than I used to. I have been a pretty advanced user since before Google. The combination of AI and quick auto-links to Wikipedia articles on iOS has replaced much of it. The one place I still use it extensively is local searches for businesses, and when trying to find a brand or business that I know of, if they don't have an app.

nitwit005 5 days ago

If Google provided an option to disable AI search results, I'd happily turn it off.

I'd also happily turn off several other search features, more directly tied to revenue, which is probably why they don't like adding options. I'm sure their AI will be selling products soon enough. Got to make those billions spent back somehow.

  • rav 5 days ago

    At the moment it seems like you can avoid AI search results by either including swear words, or by using 'minus' to exclude terms (e.g. append -elephant to your search query).

    • nitwit005 5 days ago

      I have done that on occasion, but it's easier to just scroll a bit.

charlie-83 5 days ago

Whether I reach for AI or Search depends on two questions. Am I looking for a site or information? If I'm looking for information, how easily can I verify it?

Websites have all kinds of extra context and links to other stuff on them. If I want to learn/discover stuff then they are still the best place to go.

For simple informational questions, all of that extra context is noise; asking gpt "what's the name of that cpp function that does xyz" is much faster than having to skim over several search results, click one, wait for 100 JavaScript libraries to load, click no on a cookies popup and then actually read the page to find the information only to realise the post is 15 years old and no longer relevant.

There are times where I know exactly what website to go to and where information is on that site and so I prefer that over AI. DDGs bangs are excellent for this: "!cpp std::string" and you are there.

Then there's the verifiability thing. Most information I am searching for is code which is trivial to verify: sometimes AI hallucinates a function but the compiler immediately tells me this and the end result is I've wasted 30 seconds which is more than offset by the time saved not scrolling through search.

Examples of things that aren't easy to verify: when's this deprecated function going to be removed, how mature is tool xyz.

Of course, there's also questions about things that happened after the AI's knowledge cutoff date. I know there are some that can access the internet now but I don't think any are free

dismalaf 5 days ago

Of course. Most of the time I'm searching for a physical place, a company's website, a product, or news. ChatGPT is terrible at giving any of those answers. It's rare that I want to know some sort of random fact. ChatGPT also doesn't give sources like, say, Wikipedia.

bossyTeacher 5 days ago

I use search for things where factual accuracy is critical (i.e. the address of a specific store, a speech by a specific person, the lyrics of a song).

I use LLMs for things where accuracy ranging anywhere between 0% and 100% is not a problem: when I need to get a feel for something, or a pointer to some resource.

guillaume8375 5 days ago

I’m a Kagi subscriber; I like it but I use it less and less.

The more time goes by, the more I use both ChatGPT and Claude to search (at the same time, to cross-check the results), with Kagi used either to check the results when I know strictly nothing about the subject, or for specific searches (restaurants, movie showings…).

I’ve almost completely stopped using Google.

  • sitkack 5 days ago

    A question mark after a search on Kagi gives you an AI summary, and the latency is good.

    If you go for the highest tier subscription on kagi, you get https://kagi.com/assistant which gives you a huge swath of AI models to handle your searching.

    • guillaume8375 5 days ago

      True, but I still pay for both Claude and ChatGPT so that I can use the latest models.

      • sitkack 5 days ago

        Kagi includes access to Claude 3.7 Sonnet with Extended Thinking and ChatGPT 4o, both with search.

IAmGraydon 5 days ago

I've done the exact same thing in the last few months. If it's a search that ends in a question mark, I go to ChatGPT or another AI. Sometimes I even find myself going to Google out of pure autopilot habit, and then catch myself and go to ChatGPT. Old habits...

babyent 5 days ago

Yes, for finding local information or to search for specific things. For example, to find events or shops near me, or finding reviews.

I use ChatGPT for learning about topics I don't know much about. For example, I could spend 15 minutes reading Wikipedia, or I could ask it to use Wikipedia and summarize for me.

owenpalmer 5 days ago

I wish search engine algorithms/SEO were versioned, which would allow you to get a more consistent experience. The same applies to system prompts of closed LLMs.

The most important part for me is understanding how to communicate with each system, whether it's google-fu or prompting.

t0bia_s 5 days ago

A mix of Startpage, Perplexity, Qwant, 4get.ch, Teclis and Crowdview. ChatGPT occasionally for coding, never for details of specific topics - it fabulates a lot and shapes answers to be politically correct. Google focuses on ads, so I abandoned them a long time ago.

SirMaster 5 days ago

Yes, all my searching is using Google and I haven't had any issues with the results or finding what I want.

jaccola 5 days ago

I have reduced my traditional search engine use by, I'd guess, 90%.

Having said that, I use ChatGPT exactly like a search engine. If I want to find info I will explicitly enable the web search mode and usually just read the sources, not the actual summary provided by the LLM.

Why do this? I find if I don't quite know the exact term I am looking for I can describe my problem/situation and let ChatGPT make the relevant searches on my behalf (and presumably also do some kind of embedding lookup).

This is particularly useful in new domains, e.g. I've been helping my wife do some legal research and I can explain my layman's understanding of a situation and ask for legal references, and sure enough it will produce cases and/or gov.uk sources that I can check. She has been impressed enough to buy a subscription.

I have also noticed that my years (decades!) of search engine skills have atrophied quicker than expected. I find myself typing into Google as I would to ChatGPT, in a much more human way, then catch myself and realise I can actually write much more tersely (and use, e.g. site:).

frou_dh 5 days ago

Accuracy issues aside, a draw that I feel towards using e.g. ChatGPT is that the information is displayed in a more consistent way. When using a search engine and opening a bunch of the results in tabs, I have to reorient myself to each site because they all have different visual designs.

ncdlek 4 days ago

It depends. If I am searching for an answer with one variable, I use search engines, but if there are multiple variables, then GPTs are better.

* adult cat sleep time -> search engines
* my cat drops his toy into his water and brings it to me -> GPT

seafoamteal 5 days ago

Every day. There are some questions I have that are too vague and descriptive to ask a traditional SE, so I direct those to an LLM, but on the whole I don't want to have to specify to my search engine to not give me a 500 word essay every time I use it.

sakopov 5 days ago

Yes, all the time. I use ChatGPT to help me get a general sense of direction when I'm learning something new. Once I have a few potential paths to explore, I combine regular Google searches with ChatGPT prompts to make sure my understanding aligns with reality.

throwaway290 4 days ago

I would never use LLMs to find things. It's like asking a human to find things: they mostly suck at it, and unlike a human, an LLM doesn't have the awareness to know it sucks.

RaSoJo 5 days ago

My searches have become site specific.

- What other people think of product XYZ: reddit
- Subject specific/Historical: Wikipedia
- News specific: My favored news sources
- Coding related: I start with ChatGPT. To validate those answers I use Google

cvhc 5 days ago

I still use Google a lot. I don't bother chatting when a few keywords will bring me to the resource I want (90%+ of my searches).

Besides, Google has some convenient features that I frequently use, e.g., currency/unit/timezone conversion, stock chart.

hideload 5 days ago

Overall, GOOD! But LLMs don't work as expected in some cases. For technical solutions, they usually don't take the software version as a parameter, which may cause issues. So I always have to cross-check the solution in forums and documentation.

tushar-r 5 days ago

Yes - I haven't used ChatGPT so far. I initially tried to not read what the Google AI summarized, since I did not trust it, but these days, if I know enough about the subject to identify errors, I do read the summaries.

actinium226 5 days ago

I have had a number of occasions where I had a vague query that I was sure an LLM would handle better than a search engine, and yet I ended up failing to get a result with the LLM and Google came to my rescue (their search engine, not their AI!)

eclipxe 5 days ago

I use search the same as I did before LLMs. I don't find ChatGPT to be useful for finding information. I do use Claude extensively for writing code and explaining errors via Github Copilot in VSCode, but I still primarily use Google.

nashashmi 5 days ago

This is pretty much the reason why I think Google should divest of its search engine to private equity, and go all in on AI query while it is still high in valuation.

It will also help get rid of the antitrust issues that the Chrome browser has created.

mistahchris 5 days ago

I use search engines every day (primarily kagi). But I will use a fast llm with a search tool for some things, like providing context about a news story etc, (primarily gemini 2.0 flash with "grounding" on).

seb1204 4 days ago

For simple searches, e.g. the webpage of company X in Y, I use DuckDuckGo, because the results are good and the CO2 footprint is low. For more questioning or interactive queries I use Copilot.

magicalhippo 5 days ago

I use DDG for normal stuff, many times a day. I use LLMs for difficult to find stuff or to discover keywords.

They can be very useful, especially when looking for something closely adjacent to a popular topic, but you got to check carefully what they say.

marcuschong 3 days ago

Almost exclusively with "reddit" + something. Although I am using Deep Research more and more even for some of that.

johnea 5 days ago

I really think this is a "drink the koolaid" phenomenon.

Personally, I don't want an LLM synthesized result to a query. I want to read original source material on websites, preferably written by experts, in the field in which my search is targeted.

What I find to be a serious regression in search is the interpretation of the search query. If I search for something like "Systems containing A but not B" I just get results that contain the words A and B. The logical semantics of asking for "not B" is completely ignored. Using "-B" doesn't work, since many discussions of something that doesn't have B will mention the word B. These errors didn't seem to be so egregious historically. There seemed to be more correct semantic interpretation of the query.

I don't know if this has to do with applying LLMs in the backend of search, but if LLMs could more accurately interpret what I'm asking for, then I would be happy to have them parse my queries and return links that meet my query specifications more accurately.

But again, I don't want a synthesized result, I want to read original source material. I see the push to make everything LLM-synthesized prose as just another attempt to put "Big Tech" between me and the info I'm trying to access.

Just link me to the original info please...

p.s. Something like the "semantic web" which would eliminate any 3rd party search agent completely would be the ideal solution.

thrownaway561 4 days ago

I now use the Copilot that is built into Windows 11 almost exclusively. It's right there and gets the job done 99.9% of the time for me. That other 0.1% is when I use Google.

mrankin 2 days ago

I never use ai for search. It's too unreliable and biased.

foobahify 5 days ago

Yes, for holiday planning. At work we are a massive Confluence/Rovo user, so that is my go-to, with very occasional search engine use.

Our projects heavily use platform tools so I am looking there rather than Googling.

joshdavham 5 days ago

Yes, but far less. I find I primarily use them when I want to go straight “to the source”.

Like I could interrogate an LLM about something technical “X” or I could just search “X documentation” and get to the ground truth.

asciimov 5 days ago

I've been told enough falsehoods by AI that it hasn't earned my trust yet.

On the other hand, Google search is starting to be useless without curating my queries. And their AI suggestions are full of lies.

glial 5 days ago

I use Kagi (paid) to search for websites/articles/PDFs and ChatGPT (4o) otherwise.

I started using Kagi in an attempt to de-googlify, but it turns out that it's just downright good and now I prefer it.

keithnz 5 days ago

ChatGPT search has completely taken over my search, it's just better. It's great that you then get to ask follow-up questions to narrow in or ask more. A common thing: looking for a product, it will find it, you can ask for other options, or ask it about the reviews for a specific product (it will give a summary and links, and may even embed a YouTube review), best pricing, nearest retailer, .... oh, none in your area, best place to order it from.... etc etc. I just have zero need for the classic search engine any more.

thunder-blue-3 5 days ago

The only time I use search engines now is when I’m screen sharing and feel obligated not to show my five different ChatGPT tabs. I glance over the links and feign interest, "Oh, that’s great..."

pton_xd 5 days ago

I use ChatGPT for queries like, "translate XYZ to English," particularly for short phrases where I don't care if it's exactly right; good enough works.

For everything else, I still use search.

pipeline_peak 5 days ago

Yes, because Google also has AI and it's integrated into my browser bar; ChatGPT is just a secondary tool to me.

If I need something more complex like programming, talk therapy, or finding new music then I’ll hop on over to Chat.

  • keithnz 5 days ago

    ChatGPT search plugs in as a search engine and you can use it from your browser bar.... It also supports bangs like DDG, so you can do !g <search> and it will go to Google, but I never find myself wanting Google these days.

    • pipeline_peak 4 days ago

      How does it handle typical Google searches like "Chinese food near me"?

pryelluw 5 days ago

Yep, I continue to use multiple search engines and strategies to find stuff. LLMs have been added as a tool, but in general they mostly allow me to expand my context rather than provide outright answers.

setopt 5 days ago

I definitely still use search engines too. Googling topics with site:Reddit.com is still a better way to get genuine opinions on e.g. product comparisons and recommendations.

65 5 days ago

I don't use any AI at all. I would prefer to keep my critical thinking abilities intact.

I use Kagi as my search engine and GitHub code search for searching for code examples.

I haven't found a reason to use AI yet.

delusional 5 days ago

I'm very happy with my Kagi subscription. The results get me exactly what I want, and I can easily check the sources and downrank stuff I don't want.

I average around 1400-1600 searches per month.

mywittyname 5 days ago

I still use DDG several times a day (google is awful now). But I have transitioned over to Perplexity for certain searches, since it provides links to the source material it used.

rglover 5 days ago

Depends on the topic/goal. If I need concrete info, I'll use Kagi, and for more general questions ("should I be concerned about my cough"), I'll use an LLM.

madrox 5 days ago

I'm reaching for ChatGPT almost exclusively now, but if I talk to others I say I "googled it" because if I say I got it from ChatGPT the uninitiated don't trust it

vitorgrs 5 days ago

Yes, a lot. Search engine for me is not just for simple questions. I still want to search for specific articles, websites, etc. Want to filter by date...

SAI_Peregrinus 5 days ago

Yes. Kagi. Or sometimes DuckDuckGo if I'm on a computer that isn't my own so I don't want to log into Kagi. Never Google though, that's gone to shit.

6510 5 days ago

Google and Bing are much worse now. I can't find anything. I can't remember the last time a discussion [platform] made it into the results. Kinda comical how even the ads are terrible. I think they discovered it is better for me to keep searching rather than buy anything?

Twitter and reddit are garbage.

I sometimes use youtube search then fast forward with the subs on and the sound off.

The internet has ended. It's been a fun ride, thanks everyone.

6510 slaps hn with a large trout

danirogerc 5 days ago

I tend to use Google to research and go through multiple websites, but I do exclusively use AI to solve tech issues. So it really depends on the activity for me.

Crosseye_Jack 4 days ago

Yes, I still use search engines, although my use of them has decreased.

What I tend to use LLMs for is rubber ducking or the opening of research on a topic.

kotaKat 4 days ago

I always have. Why would I want to use AI garbage?

nailer 5 days ago

I mainly use search engines indirectly via Copilot (the app). It uses Bing in the background to give current results, so I can ask it about what happened yesterday.

shebnik 4 days ago

I just started trying to ask DeepSeek questions. Search engines return too much marketing material, copy-and-paste nonsense from bloggers and, recently, completely stupid AI-autogenerated sites.

It is easy to filter them out when you're working in a familiar domain, but when trying to learn something completely new it is better to ask DeepSeek for a summary, and then decide what to explore.

Szpadel 5 days ago

I got so disappointed with Google search results that I started looking into alternatives. I tried DuckDuckGo, Phind and ChatGPT, and while those have their strengths, they weren't the best fit for me.

I often search for solutions to specific (often exotic) problems, and LLMs are not the best at handling them.

DDG does not have the best results; I'm not sure if they are better than Google's. It definitely has a different set of issues.

Finally, after seeing another positive comment on HN about Kagi, I decided to pull out the wallet and try it. And it's great. It feels like Google from the 2000s.

I decided to replace my subscriptions to Anthropic and ChatGPT with Kagi, where I have access to both providers and also Gemini, Meta and others. So the bottom line is it's actually saving me money.

Their Ki assistant (an LLM that iterates with multiple search queries when looking for answers) is actually neat. In general it's the best of both worlds: depending on what you need, you can use the LLM interface or classic search, and they have both working great.

wruza 5 days ago

Search for what is known, llm for exploration.

Boils down to the fact that the internet is full of shitty blogspam that search happily returns if your question is vague.

victorbjorklund 5 days ago

Yes. But more for finding something I already know exists but don't remember the exact URL for (or am too lazy to type it in). Like going to some docs.

BirAdam 5 days ago

I have not used any public AI, and I primarily use search. I use a local LLM running on two old Tesla GPUs for help with coding, but that’s about it.

j45 5 days ago

Search is still dominant, using AI as search (perplexity, etc) is still growing.

Sparktoro (no affiliation) had a post or video about this somewhere very recently.

xfp 5 days ago

Daily. On top of general search engines, I have a bunch of custom ones added to Firefox so I can skip the Google/Bing/Yandex step.

mediumsmart 5 days ago

I started with AltaVista, then DuckDuckGo, and now Kagi. Recipes only via ChatGPT, and reverse image search with Yandex of course.

boplicity 5 days ago

As a side note, we're seeing a small but meaningful amount of traffic referred from ChatGPT. Many people essentially use it as a search tool.

mxxx 5 days ago

I gave LLMs a bit of a go, but on a couple of occasions they wasted a bunch of my time by giving incorrect answers, so my trust levels are pretty low.

aurizon 5 days ago

When they became infested with Ad engines = use declined. Now Chatxyz seems good, but Ad engines loom = evolutionary shittification

ivanjermakov 5 days ago

LLMs and search engines have different use cases, albeit having overlap. Kind of similar to traditional computing vs machine learning.

dethos 5 days ago

The answer is yes, I still use one, but not Google or Bing. Relying exclusively on LLMs sounds a bit dangerous and naive.

Until when, I don't know.

gloosx 5 days ago

Of course we do! Imagine a search engine taking 10¢ for every search query you enter; what a ridiculous replacement.

lelanthran 5 days ago

Sure. It's sometimes faster to do "allowable attributes for CSS visibility" and visually scan the results for the keywords.

xyst 5 days ago

2025 Google is trash. Kagi is in.

LLM is okay for some use cases, but the amount of times it hallucinates bullshit makes it not trustworthy.

NoSalt 4 days ago

My go-to is always Google. After I have exhausted the resources, and my patience, I turn to ChatGPT.

amunozo 5 days ago

I use mostly Perplexity, sometimes a "normal" LLM and, very rarely, Google. It depends on what I'm looking for.

tcper 5 days ago

I'm still using it.

If I'm not sure about some AI answer, or suspect that AI crafted it, I'll search it for cross-validation.

rconti 4 days ago

I still use Google for everything; the difference is now I'm just unhappy with the results.

plextoria 4 days ago

I use Kagi search + Instant Answer. Instant Answer most often gives me the result I am looking for.

ge96 5 days ago

Yes, I never got into using ChatGPT a lot, for example.

But I will say I have started to just use the AI summary at the top of Google, though it can be wrong. For example, I searched "why is the nose of a supra so long" and it started talking about people's faces rather than the car, which, granted, isn't really a nose, but yeah.

tonymet 5 days ago

Grok (legal, historical, current events), Perplexity (Claude, coding questions), Gemini (via fabric / terminal -- coding questions, YouTube, URL & book summaries). I only google by accident, out of habit, and immediately regret the outcome (spam ads, spammy content marketing, salacious content).

karmasimida 5 days ago

Somewhat. Much less frequent than before

With LLMs being good enough, I go to an LLM for what I used to go to Wikipedia and Stack Overflow for.

koehr 5 days ago

Have you tried matterrank.ai? It might give you what you want, but as a search engine instead of a chat interface.

n8cpdx 5 days ago

Kagi all day every day. Except for maps; I use a dedicated maps app for places/businesses/etc.

Lammy 5 days ago

According to my Kagi usage stats I have made 8907 searches and 9 AI interactions since June last year.

knicholes 5 days ago

I tried using ChatGPT to find my porn, and it refused to answer, so yeah, right back to a search engine.

omarozr 5 days ago

Most engines now include LLM outputs on top of search results. Solely using an LLM is just not practical.

ripped_britches 5 days ago

Almost never unless I am using it to navigate to a specific known domain that isn’t in my history

rambambram 5 days ago

I rely purely on RSS-based search and email for asking questions to friends.

No, just joking. I use libraries to read books.

postsantum 5 days ago

Google for quick searches (as a doorway to reddit and SO)

Perplexity for anything complex

Yandex for pics (Google pics got ridiculously bad)

nickpsecurity 4 days ago

I trust humans more than LLM's. They're like whatever was most popular mixed with hallucinations. I can gauge human credibility better.

I usually search for specific terms, often in quotes. My extra terms are variations on how people might word the question or answer.

Over time, I notice many sites are reliable for specific topics. I'll use site: operator to limit the search to them initially. If it's a research paper, adding "paper" and PDF usually links to it immediately. If government, it's often on a .gov page. And so on.

Search works well for me with these techniques for most of my needs. There has certainly been a drop in quality, with an increase in work, due to them optimizing for what generates ad clicks. That gives me a lot of sites that appear to be helpful but actually aren't. I can usually spot and weed them out in one session for a given topic, though, since click-farm sites are recognizable (intuitable) once you're used to them.

Finally, I try to follow the law since my Savior, Jesus Christ, requires it where possible. A.I.'s are usually trained with massive copyright infringement, with outputs that may themselves be copyright infringement. Search engines link me to the content creator to use directly. The creator also often says whether they want it shared, or how, which I try to respect when I see it mentioned.

INTPenis 4 days ago

People keep saying this but I keep using Google and I keep being happy with my results.

jryan49 5 days ago

I find it to be wrong so often, I'd be really careful assuming it's actually correct...

hnpolicestate 5 days ago

About a year ago it was 100% search engine. Today it's closer to 50/50 search/ChatGPT.

bigomega 5 days ago

Not really, no. My peers and I were constantly opening ChatGPT when probing new topics. But now, with Gemini integrated within Google and the hallucinations of LLMs, seeing the SEO results along with the AI summary has become my go-to choice. The one thing I find extremely frustrating (which I hope Google fixes) is not being able to continue the conversation with Gemini if I have follow-up questions.

I think this also stems from a new design paradigm emerging in the search domain of tech. The content results and conversational answers are merging – be it Google or your Algolia search within your documentation, a hybrid model is on the rise.

  • owlninja 5 days ago

    You may have to enable it, but their experimental 'AI Mode' does basically what you are saying (I think). I can hit 'dive deeper with AI mode' at the bottom of the 'AI Overview', and it just becomes an interactive search session...sort of.

jszymborski 5 days ago

I primarily use Brave Search. I'll yap with Claude for ideation every so often though.

photochemsyn 5 days ago

I don't use Google anymore, but I do use Google Scholar, sci-hub, Yandex and sometimes DDG.

vbezhenar 4 days ago

For me Google serves several roles.

1. Bookmark manager. I can write "maven download sources", click on Baeldung and copy & paste the command from there. I've done that 100 times and I'll do it a 101st time. I have plenty of webpages that I know exist and I know how to find them. I'm too lazy to actually bookmark and organize them, and Google works just fine for me.

2. Search for new knowledge in general domains. For this category of queries I sometimes use ChatGPT, but not always. It's hard to formulate the rules, but I have a feeling for which tool is more useful for a given question. Sometimes it's just quicker to type a few keywords into Google rather than ask a full-fledged question of ChatGPT and wait for it to return a full-fledged answer.

3. Search for new knowledge in arcane domains. For example I might need new info about my city, where to buy a laptop, etc. ChatGPT might know about the US, but its knowledge about Kazakhstan is definitely limited, shallow and outdated, so the real web comes to the rescue here.

4. Using Google-specific functions. For example I very often use queries like "site:bla.com something", because website-hosted searches are braindead in 100% of cases and I wouldn't even bother with them.

For me, ChatGPT's main function is to create new content. For example I might want to start a new Go project, so I'll ask ChatGPT to write a hello-world Golang tutorial for me, then ask follow-up questions like what identifier naming I should follow, etc. There are, of course, hundreds of similar articles on the Internet, but ChatGPT delivers tailored data much faster.

TomMasz 4 days ago

I use DuckDuckGo mostly, but have a Kagi subscription for other searches.

andrepd 5 days ago

ChatGPT manages to be even worse than Google, and let me tell you: that's a low bar

dwayne_dibley 4 days ago

I think the kids today use TikTok and such for searching, which is mental.

b8 5 days ago

Search Indexers are still way faster than LLMs currently, so I use Kagi or searxnc.

ddawson 5 days ago

For years I was screaming at Google because I just wanted answers rather than endless blue links for me to research my answer. LLMs were such an amazing innovation in that they provide answers instead, but now I'm finding my way back to keyword search. I think I'm still searching about 15% of the time.

TimK65 4 days ago

I use a search engine if I can more or less precisely describe what I want in a few words.

If I find that I really need to be more extensively descriptive, or to actually ask a question, I'll use an LLM, but that's pretty rare.

Here's an example of the latter that I did yesterday (this is Google Gemini):

"Hello! There is a poetic device where one line of a poem can be read both as the continuation of the previous line and as the first part of the following line. What is that called?"

"Hello! That poetic device is called an anastrophe or sometimes a hyperbaton, but these terms are more generally used for the inversion of the usual order of words or clauses in a sentence.

"While anastrophe or hyperbaton can create the effect you're describing, there isn't one single, widely recognized term specifically for a line that functions as both the end of one thought and the beginning of another in the way you've described.

"However, the effect you're describing is closely related to the idea of enjambment (also known as "run-on lines"). Enjambment is when a line of poetry ends without punctuation or a natural pause, and the thought continues onto the next line.

"..."

I proceeded to provide an example, saying "Points if you know what song this is ;-)", and Gemini confidently misidentified the song. (The correct song is "Black Cow" by Steely Dan; Gemini told me that the lyrics I'd quoted were from Bob Dylan's "Positively 4th Street," which doesn't contain any lyrics that even resemble what I'd quoted.)

ZoomZoomZoom 4 days ago

Yes, and I still, years later, get annoyed that I need to force them to search, instead of them making self-humiliating attempts to answer some question that isn't even there while feeding me a bunch of spam blogs and irrelevant but somehow profitable results.

bananapub 5 days ago

if you're using chatgpt, how are you ever actually sure if what you're being told is something a human actually said? are you really cross-checking everything, even things that aren't immediately falsifiable?

  • keithnz 5 days ago

    ChatGPT search puts links throughout its results and has a sources button that shows all the sources it used. Just a heads-up, as I think a few people are getting this wrong: this isn't ChatGPT but ChatGPT search. ChatGPT search tries to find source information and then summarizes it; you can then go to those sources. It's up to date with all current events etc.

jdsleppy 4 days ago

I have not started to use LLMs, so yes I still use search engines.

charcircuit 5 days ago

Youtube's search engine is still good for finding songs I want to listen to.

rainm4n 5 days ago

For local searches and product searches yes. For mostly everything else, no.

brailsafe 5 days ago

Stopped using chatgpt a while ago, use search engines almost exclusively

nixpulvis 5 days ago

Searching google still gives more current info on changing topics, no?

reidrac 4 days ago

I use DDG, but mostly with the bangs, so I search in the place I'm likely to get the answer I want. Which can be limiting, because I'm always using the same sites (e.g. imdb for media information) and won't discover new ones, and sometimes I don't know where I can find what I'm looking for.

For more general searches, depending on the topic, DDG is close to useless because of link farms, AI slop, and results that aren't really what I'm looking for (some of the keywords weigh too much). But I suspect this is a common problem in all search engines, so I'm not looking for a replacement. It is frustrating though. I can't believe the information doesn't exist; it's just that it is unreachable.

I don't search using AI. Generally I'm not looking for information that can be distilled into an "answer"; and there's also the fact that DDG is not feeding me AI answers (I think? Maybe I'm not paying attention).

habosa 5 days ago

I don’t use LLMs for anything if I can avoid it. So far so good.

kwant_kiddo 5 days ago

No, and there are two major advantages of using an LLM instead of a search engine:

1. No prompt about decline/accepting cookies every time I want to look something up.

2. No ads.

The results are mediocre in the same way Google's are.

  • Ken_At_EM 5 days ago

    Unfortunately I feel like the "no ads" part is just a matter of time.

Marsymars 5 days ago

I use Kagi for searching the web.

I use an LLM to generate regular expressions.
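
For example, a minimal sketch of sanity-checking a generated regex before trusting it (the pattern and test strings below are hypothetical, just to illustrate the workflow):

    // Verify a (hypothetical) LLM-generated regex against known good and bad inputs.
    const isoDate = /^\d{4}-\d{2}-\d{2}$/; // supposed to match dates like 2025-03-31

    const shouldMatch = ["2025-03-31", "1999-12-01"];
    const shouldNotMatch = ["31-03-2025", "2025-3-31", "hello"];

    console.assert(shouldMatch.every((s) => isoDate.test(s)), "a good input failed to match");
    console.assert(shouldNotMatch.every((s) => !isoDate.test(s)), "a bad input matched");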

muzani 5 days ago

I usually do both at the same time. Ironically because Google.com is the shortest path to Gemini.

The other day I was also searching for something dumb: how to hammer a nail into concrete.

Google will find me instructions for a hammer-drill... no I just have a regular hammer. There's a link from wikiHow, which is okay, but I feel like it hallucinates as much as AI. Actually I just opened the link and the first instruction involves a hammer drill too. The second one is what I wanted, more wordy than ChatGPT.

Google then shows YouTube which has a 6 minute video. Then reddit which has bad advice half the time. I'm an idiot searching for how to hammer nails into a wall. I do not have the skill level to know when it's BS. Reddit makes me think I need a hammer drill and a fastener. Quora is next and it's even worse. It says concrete nails bend when hit, which even I know is false. It also convinces me that I need safety equipment to hit a nail with a hammer.

I just want a checklist to know that I'm not forgetting anything. ChatGPT gives me an accurate 5-step plan and it went perfectly.

agiacalone 5 days ago

Search engines? I'm still using web rings. ;-)

nicman23 4 days ago

It really depends on whether I know what I am trying to search for or am just researching the general notion. For the latter I will use LLMs.

insane_dreamer 5 days ago

Yes, Kagi. Don’t use ChatGPT at all. Sometimes use Claude

bicepjai 5 days ago

I have replaced all my search needs with Perplexity.

nickhodge 5 days ago

Why disengage your brain and trust AI when searching?

devmor 4 days ago

I have repeatedly tried to use LLMs as search engines, both general (like ChatGPT) or more focused on specific domains.

I have not been impressed by the results. In my experience, LLMs used this way generally output confident-sounding information but have one of two problems the majority of the time:

- The information is blatantly wrong, from a source that doesn't exist.

- The information is subtly wrong: the model generated, from part of a source, a predictive chain that doesn't actually exist.

I have found them about on par with the reliability of a straightforward Google search with no constraints, but that is more a condemnation of how poor Google's modern performance as a search engine is than an accolade for using LLMs for search.

sanjeevverma1 5 days ago

Perplexity is by far the best web search replacement

fortran77 5 days ago

I use perplexity more than anything else these days.

softwreoutthere 4 days ago

Dunno, Google searches don't seem that different to me compared to 10 years ago. I don't use paid services.

fedeb95 5 days ago

Yes. I find DuckDuckGo more on point and faster for what I need. Also, I don't have a limit for free usage.

crossroadsguy 4 days ago

All the time, and no, it's not Kagi (not saying it's bad; I just don't need it). I use Google and, on good days, DDG. DDG is really bad for anything local I want to search, and so is Kagi (in my limited trial), and by local I don't mean my city but my entire gigantic country.

As for those AI chatbots: they are anything but useful for general search purposes beyond a bit of surface-level answers, which you can't fully trust because they (still) hallucinate a lot. I tell ChatGPT "Give me a list of good X. (And don't you make anything up!!!)" - yeah, with those bangs; and it still makes shit up.

bitwize 5 days ago

If I don't know something, I'm not gonna trust ChatGPT to get me the right answer. It may do so 90% of the time, and make shit up the other 10%. Google sucks compared to what it was (and DDG still sucks worse, which is why I still use Goog as a fallback), but I still know how to sift through the results to find something truly informative (if it's out there; some searches, like whether some product is a scam, have been SEO'd to oblivion by the scammers).

Oh, and a major reason why Google sucks now? AI enshittification. They basically jettisoned their finely tuned algorithm in favor of "just run it through the LLM sausage grinder".

casenmgreen 5 days ago

I've moved over to Kagi, last month.

Liking it a lot.

bttrpll 5 days ago

Brave Search + Google Scholar for me.

k__ 5 days ago

I mostly use Phind and Brave Search.

epolanski 5 days ago

Open ended stuff -> LLMs

Rest? Still search engines

jeffwask 5 days ago

Considering I have been using different search engines since Excite and AltaVista, the state of modern search is worse than when crawlers were in their infancy. It is so front-loaded with SEO that a search for a simple doc reference gives you ten pages of links back to the sales and marketing pages for 12 applications that do something similar.

AI is a better search for now because SEO and paid prioritization haven't infested that ecosystem yet, but it's only a matter of time.

I dropped Google search years ago, but every engine is experiencing enshittification.

krembo 5 days ago

Yes, but not for knowledge.

thiht 5 days ago

Nope. I've despised using Google Search for years, and thought it would eventually be replaced with another, better search engine. At one point I even switched to a paid Kagi subscription for a few months and it was sooo much better than Google. I only stopped using Kagi because I've completely switched to ChatGPT now. Kagi is a really great search engine, but for my daily use ChatGPT is more convenient and faster.

rikthevik 5 days ago

I got a few months of free Kagi with a Pragmatic Engineer newsletter subscription and I'm enjoying it. It reminds me of old Google before it got polluted. Kagi can't do anything about the web itself being polluted, but the search experience is good. I use Gemini a lot as well.

I'm very disappointed in Apple that changing the default search engine in Safari requires you to install a Safari extension. Super lame stuff.

pcthrowaway 5 days ago

I mainly use Google or Duckduckgo (depending on what I'm searching for), but I think search quality has been declining because of AI slop.

Which is kind of a problem, especially for Google, because their incentive to limit AI slop in search results is reduced when AI is one of their products, and they stand to benefit from search quality declining across the board in relation to AI answers.

v9v 5 days ago

I am frequently disappointed with the results I obtain from search engines, but in some of these cases I can find the things I'm looking for by tweaking the advanced search settings.

On the other hand every time I've used language models to find information I've gotten back generic or incorrect text + "sources" that have nothing to do with my query.

angra_mainyu 5 days ago

Half and half. For non-political stuff, I still lean on search engines and topic-specific sites.

For political stuff, I avoid Wikipedia and search engines in general and ask Grok/ChatGPT, specifying the specific biases I want it to filter out and known pieces of misinformation for it to ignore.

baq 5 days ago

I do, but LLMs generate so much slop that ordinary search results are less useful by the week. At this rate, low-background steel will be plentiful in comparison to human-written true information. Stack Overflow may be dead, but its legacy will live forever. We never knew how good we had it until we lost it.

nikolasdimi 5 days ago

For me, ChatGPT has completely replaced Google.

adelrosarioh 5 days ago

Yes, most of the time I want a link to a page.

danielodievich 5 days ago

I go out of my way to avoid any LLM generated thing, ESPECIALLY in search results, whether for coding, product research or news. The world is drowning in misinformation and misdirection, I don't need any additional hallucinations.

postepowanieadm 5 days ago

Not really. I use Grok; if it fails, Google will fail as well, so I switch to searching projects' documentation. It's not that Grok/LLMs are awesome, it's just that Google is useless.

andrei_says_ 5 days ago

I find the regurgitated slop of ChatGPT to have an unsatisfactory signal-to-noise ratio, plus too many confident lies, so I prefer direct searches.

Gemini is similar.

I sometimes use phind and find myself jumping directly to the sources.

Consider paying for Kagi.

foxylad 5 days ago

I use Kagi, and it's worth every penny. Google has enshittified itself into irrelevance, and ChatGPT is too ponderous.

Kagi is like Google in its prime - fast, relevant and giving a range of results.

  • pclark 5 days ago

    using Kagi makes me appreciate how many of my searches are region specific. Kagi seems so incredibly dumb in this regard compared to Google

cassepipe 4 days ago

I think it depends on your use cases:

1. *Browsing*

This can be completely avoided. Here is what you can do on Firefox, with some tweaks, to achieve no-search browsing (a minimal user.js sketch collecting these prefs follows below):

- Remove search suggestions in (about:preferences#search)

- Use the [History AutoDelete](https://addons.mozilla.org/en-US/firefox/addon/history-autod...) addon to remove searches from your history. This prevents searches from your history from polluting the results

- Go to (about:config) and set `browser.urlbar.resultMenu.keyboardAccessible` to `false`

Now when you Ctrl + L into the address bar, you will get results from your history, bookmarks and even open tabs. And the results are only a few Tab presses away, no need to move your hands off the keyboard.

If you don't like the results and want to launch a search anyway, just press Enter instead and it will launch a search with the default search engine. A cool trick is to type % + space in the awesome bar to move around opened tabs. You can also specifically look into bookmarks with * and history with ^.

P.S.: Ctrl + L, Ctrl + T, Ctrl + W and Ctrl + Alt + T are your best friends.

P.P.S.: You can also learn more about custom search engines: https://askubuntu.com/a/1534489
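
To set these once per profile instead of clicking through about:config, a minimal user.js sketch; only browser.urlbar.resultMenu.keyboardAccessible is taken from the steps above, the other pref names are assumptions and may vary between Firefox versions:

    // user.js in the Firefox profile directory, read at every startup.
    user_pref("browser.search.suggest.enabled", false);               // assumed pref name: drop search suggestions
    user_pref("browser.urlbar.suggest.searches", false);              // assumed pref name: keep suggestions out of the URL bar
    user_pref("browser.urlbar.resultMenu.keyboardAccessible", false); // the about:config tweak from above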

2. *Quick answer* on a topic. This is the second most common use case and what Google has been trying to optimize for for a long time. Say you want to know how many people there are in Nepal, or what the actual percentage of blue-eyed people in Germany is. This is where LLMs shine, I think, but to be fair Google is just as good for this job.

3. *Finding resources* to work with. This one is a bit on the way out because it's what people who want to understand want, but we are probably few. This is valuable because those resources do not just give an answer but also provide the rationale/context/sources for the answer. But.

On the one hand, most people just want the answer, and most people can be you if, even though you deem yourself a curious person, you don't have the time right now to actually make the effort to understand. On the other hand, LLMs can craft tutorials and break down subjects for you, which makes those resources much less valuable. I kind of feel like the writing is on the wall, and the future for this use case is "curating" search engines that will give you the best resources and won't be afraid to tell you "Nothing of value turned up" instead of giving you trash. Curious to hear your thoughts about that.

_0ffh 5 days ago

Now and then, but I also quite often use perplexity.ai for search. Sometimes it's just too convenient to let a robot sift through the search results for the information I want.

fcantournet 5 days ago

I use search engines because I want a source for the info I get that I can assess the trustworthiness of.

Sadly search is massively enshitified by AI generated SEO'd crap...

antegamisou 5 days ago

Yes for the sole reason LLMs are very arrogant and I don't want to end up similarly delusional about whatever happens to be that I'm researching.

GiorgioG 5 days ago

Yes, as borked as Google search results have become, it doesn't make shit up like LLMs do.

supportengineer 5 days ago

No, I go straight to GPT. Because I’m not usually searching for a webpage. What I’m really looking for is to learn through the course of an interactive discussion. Where I can ask any question no matter how stupid it is. Imagine a patient elderly colleague who will never lose their temper or mock you. Sometimes they get things wrong, but that’s where critical thinking comes in.

  • supportengineer 5 days ago

    I thought of one more thing I want to add. GPT listens to you. It makes you feel heard. Let’s say I ask a question but I have a strong bias. For example, what if I said “JavaScript is stupid, why can’t we go back to using Java Server Pages?”

    Instead of clowning me or making me feel invalidated it would present an argument that covers both sides and would probably start with “JSPs have certain advantages, and I understand why you would feel that way. Here is a list of pros and cons…”

SteelByte 5 days ago

The potential of AI to augment human creativity is immense, but we must thoughtfully consider the implications. While AI-generated content is impressive, it's crucial to establish clear boundaries, maintain human oversight, and ensure AI is used ethically to enhance, rather than replace, human artistry and originality.