How does Claude pick sources for its answers?
By default, Claude answers from its training data, which has a knowledge cutoff date. There are no live citations in that mode. In newer Claude products with web search enabled, Claude can browse the open web and cite sources in a Perplexity-style format, but it tends to retrieve fewer pages and quote them more conservatively than Perplexity does.
Two modes, very different behaviour
Anthropic ships Claude in several surfaces (claude.ai, the Claude API, Claude in Slack, Claude in Cursor, etc.), and the source-handling behaviour depends on which surface you are using and which tools are enabled.
- Default mode (no browsing). Claude generates answers from its training data. It knows about brands, products, and concepts that existed and were well-documented before its knowledge cutoff. It cannot cite live URLs because it is not retrieving anything in real time.
- Web search enabled. In the Claude app and via the web search tool on the API, Claude can issue search queries, read pages, and cite the sources it used. This is the mode that competes with Perplexity and ChatGPT browsing.
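For the API case, enabling web search comes down to declaring the tool in the request. The sketch below builds a Messages API payload with the web search tool attached; the model ID and the tool type string are assumptions based on Anthropic's published tool-use format, so check the current API reference before relying on them.

```python
# Sketch: a Messages API request payload with web search enabled.
# The model ID and tool type string below are assumptions -- verify
# them against Anthropic's current API documentation.

import json


def build_search_request(question: str, max_searches: int = 3) -> dict:
    """Build a Messages API payload that lets Claude issue web searches."""
    return {
        "model": "claude-sonnet-4-20250514",  # hypothetical model ID
        "max_tokens": 1024,
        "tools": [
            {
                "type": "web_search_20250305",  # assumed tool version string
                "name": "web_search",
                "max_uses": max_searches,  # cap the number of search calls
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }


payload = build_search_request("what is HubSpot's pricing?")
print(json.dumps(payload, indent=2))
```

With a payload like this, search results come back inside the response content blocks along with citation metadata, rather than as a separate source list.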
For brand visibility, the no-browsing mode is the harder one. There is no link click, no source badge, just the model deciding whether to mention your brand from memory.
How Claude behaves without browsing
Ask Claude (with browsing off) "what are the best project management tools for a remote team of 10?" and you get a structured answer mentioning Asana, Linear, Notion, Monday, and ClickUp. Those names come from how often and how authoritatively the brands appeared in Claude's training corpus before the cutoff.
Three things drive whether your brand surfaces in this mode:
- Mention volume in the training data. Were you written about a lot, in a lot of places, before the cutoff? Wikipedia, major publications, well-known forums, GitHub readmes, and Stack Overflow answers all feed in.
- Co-occurrence with the right concepts. If your brand is mentioned alongside the relevant category terms ("project management tool", "async collaboration"), Claude is more likely to retrieve you for that category.
- Recency relative to the cutoff. Brands that launched after the cutoff cannot surface in this mode at all. A 2026 product is invisible to Claude until the next training run.
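The co-occurrence point is the easiest of the three to make concrete. Below is a toy counter that tallies how often a brand name appears in the same passage as a category term. It is a simplified stand-in for what happens at training scale, not a description of Anthropic's actual pipeline; the category terms and passages are made up for illustration.

```python
# Toy illustration of brand/category co-occurrence. This is NOT how
# Anthropic processes training data -- just a small model of the idea
# that brands mentioned alongside category terms get associated with
# the category.

from collections import Counter

CATEGORY_TERMS = {"project management", "async collaboration"}


def cooccurrence_counts(passages: list[str], brands: list[str]) -> Counter:
    """Count passages where a brand appears alongside any category term."""
    counts = Counter()
    for text in passages:
        lowered = text.lower()
        # Only passages that mention the category at all are relevant.
        if any(term in lowered for term in CATEGORY_TERMS):
            for brand in brands:
                if brand.lower() in lowered:
                    counts[brand] += 1
    return counts


passages = [
    "Asana is a popular project management tool for remote teams.",
    "Linear focuses on fast issue tracking and project management.",
    "This blog post is about sourdough baking.",
]
print(cooccurrence_counts(passages, ["Asana", "Linear", "Notion"]))
# Counter({'Asana': 1, 'Linear': 1})
```

The intuition: a brand that racks up high counts in passages like the first two is the one the model reaches for when asked about the category; the sourdough passage contributes nothing, no matter how often the brand appears in it.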
How Claude behaves with browsing
With web search on, Claude usually issues 1-3 search queries, reads a handful of pages, and cites maybe 3-6 sources at the end of an answer. Compared to Perplexity, the source list tends to be:
- Smaller. Perplexity often shows 8-15 sources. Claude tends to show fewer.
- More conservative in quoting. Claude is trained to be careful about misrepresenting source content, so it paraphrases more and quotes less.
- Biased toward authoritative domains. Government sites, established publications, official documentation surface more often than long-tail blogs.
One concrete pattern: ask Claude with browsing on "what is HubSpot's pricing?" and it will typically cite hubspot.com directly plus one or two third-party review sites. It rarely synthesises from a dozen low-authority blogs the way some other engines do.
How Claude compares to the other engines
The simplest summary:
- vs ChatGPT. ChatGPT's default mode is similar (training-data recall), but ChatGPT browses more aggressively when the search tool is on. ChatGPT also tends to cite a wider mix of sources per answer.
- vs Perplexity. Perplexity is browsing-first by design, with longer source lists and shorter generative passages. Claude is the opposite: longer prose, shorter citation list.
- vs Gemini. Gemini in the chat app has tight integration with Google Search, so its browsing mode often pulls authoritative SERP results directly. Claude's web search is more general-purpose and less tied to a single search index.
- vs DeepSeek. DeepSeek's default models do not browse natively. In default mode, both Claude and DeepSeek answer from training data, and Claude usually has a more recent cutoff and broader English-language coverage.
What this means for getting cited
If you want Claude (no browsing) to mention your brand, the work is slow and unglamorous: get covered in places that will end up in the next training run. Wikipedia where appropriate. Major publication features. Well-cited Reddit threads. GitHub presence if you are technical. None of it is fast, and there is no real-time feedback loop.
If you want Claude (with browsing) to cite you, the work overlaps heavily with classical SEO and AI search optimisation. Have authoritative pages, clean structure, recent updates, and rank reasonably well in Google. Claude tends to land on the same domains traditional search would surface.