How long does it take to rank in AI search?
It depends on which engine. Live-browsing engines like Perplexity, browsing-enabled ChatGPT, and Google AI Overviews can pick up strong new content in days to weeks. Training-only modes - default ChatGPT without browsing, default Claude, DeepSeek - operate on training cutoffs that are months to over a year old, so meaningful citation there waits for the next training cycle. GEO is not an overnight ranking play.
The split between live and trained
The major engines fall into two buckets based on how quickly they can see your content:
- Live-browsing: Perplexity (almost always), ChatGPT with browsing, Google AI Overviews (which sit on top of Google search results), Gemini in grounded mode. These query the web at answer time. If your page is in Google's or Bing's index and the engine decides it is relevant, it can be cited the same week.
- Training-corpus: default ChatGPT (no browsing), default Claude, DeepSeek. These rely on whatever was in the training data, frozen at the cutoff. New content is invisible until the next model version is trained and shipped.
The time-to-rank question splits along that line. Different engines, different clocks.
Days to weeks: the live-browsing track
If you publish a strong, well-structured page on a topic with real demand and you have a site Google and Bing already trust, expect this:
- Day 0-3: Google and Bing crawl and index the page.
- Day 3-14: Perplexity can surface it on direct queries that match the page.
- Day 7-30: Google AI Overviews may stitch it into multi-source answers if your page is one of the top organic results.
- Day 14-60: ChatGPT with browsing tends to cite established sources, so a brand-new page on a brand-new domain often takes longer to appear.
Concrete example. A B2B SaaS with an existing 200-page site publishes a comparison page on April 1. By April 8 it ranks position 6 on Google for the head term. By April 10 Perplexity cites it on the matching prompt. By April 25 it appears in a Google AI Overview as one of three sources. That is realistic. What is not realistic: a brand-new domain with three pages cracking AI Overviews in a week. Domain trust still matters here.
Quarters to years: the training-corpus track
For default Claude, default ChatGPT without browsing, and DeepSeek, the picture is different. These engines only know what was in their last training run. Major labs ship new model versions roughly every three to six months, but each new training run also has a cutoff that is months behind the release date.
So even if you publish a great page today, the optimistic timeline for it to enter training data and influence default-mode answers is:
- Three to six months: page gets indexed and cited across the broader web (other articles linking to it, Reddit threads, comparison sites).
- Six to twelve months: the next training run includes content from that window.
- Twelve to eighteen months: that training run ships as a new model version and starts answering user queries.
And that is the optimistic case. If the lab filters your source out, or if the page is not cross-referenced widely enough to register as a notable entity, it may not enter training at all. Wikipedia mentions, Reddit discussion, and citations from large sites are the strongest signals that something will get absorbed.
What "strong content" actually means here
The same content does not perform the same on both tracks. Live-browsing engines reward:
- Direct answers to a specific query (clear H2 questions, scannable paragraphs).
- Citable structure (lists, tables, named entities, statistics with sources).
- Domain authority, because engines bias toward established sources.
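The first two signals on that list can be audited mechanically before you publish. A minimal sketch using Python's standard-library HTML parser; counting question-style H2s, lists, and tables is an illustrative heuristic, not any engine's actual ranking criteria.

```python
from html.parser import HTMLParser

class CitabilityAudit(HTMLParser):
    """Counts structural signals live-browsing engines tend to reward:
    question-style H2 headings, lists, and tables."""
    def __init__(self):
        super().__init__()
        self._in_h2 = False
        self.h2_questions = 0
        self.lists = 0
        self.tables = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True
        elif tag in ("ul", "ol"):
            self.lists += 1
        elif tag == "table":
            self.tables += 1

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        # Count an H2 as a "direct question" if its text ends with "?"
        if self._in_h2 and data.strip().endswith("?"):
            self.h2_questions += 1

def audit(html: str) -> dict:
    parser = CitabilityAudit()
    parser.feed(html)
    return {"h2_questions": parser.h2_questions,
            "lists": parser.lists,
            "tables": parser.tables}

# Toy page fragment for illustration.
page = """
<h2>How long does indexing take?</h2>
<ul><li>Day 0-3: crawl</li><li>Day 3-14: citation</li></ul>
<h2>Methodology</h2>
<table><tr><td>Engine</td><td>Lag</td></tr></table>
"""
print(audit(page))  # {'h2_questions': 1, 'lists': 1, 'tables': 1}
```

A page scoring zero on all three counts is answering no question directly, which is exactly what live-browsing engines pass over.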
The training-corpus track rewards different things:
- Wide cross-referencing. The more independent sources mention you, the more likely the lab's training pipeline registers you as a real entity.
- Wikipedia presence. Wikipedia is heavily weighted in most training datasets.
- Time. There is no shortcut to being the canonical reference for a topic.
How to set realistic expectations
If a vendor or consultant tells you they will get you ranking in ChatGPT in two weeks, ask which mode of ChatGPT. Browsing-on, on a query that matches a single fresh page? Possible. Default ChatGPT for a head-term question that gets answered from training? Not in two weeks, not in two months.
The honest GEO playbook runs both tracks:
- Publish content that lives well on the live-browsing track now. You can measure citations within weeks.
- Build the durable footprint - links, mentions, comparisons, structured data - that the next training cycle will absorb. You see the payoff in quarters, not weeks.
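Measuring the live-browsing track is mostly disciplined logging: run the same prompts weekly, record which domains each engine cites, and watch your citation rate move. A minimal sketch; the engine names, prompts, and log entries below are made-up illustrations, and collecting the answers (manually or via each engine's API) is left out.

```python
from collections import defaultdict
from datetime import date

# Each record: (check date, engine, prompt, domains cited in the answer).
# Populating these is the manual/API part; this data is illustrative.
log = [
    (date(2025, 4, 10), "perplexity", "best b2b crm", ["example.com", "g2.com"]),
    (date(2025, 4, 10), "chatgpt-browsing", "best b2b crm", ["g2.com"]),
    (date(2025, 4, 17), "perplexity", "best b2b crm", ["example.com"]),
    (date(2025, 4, 17), "chatgpt-browsing", "best b2b crm", ["example.com", "g2.com"]),
]

def citation_rate(log, domain):
    """Fraction of logged checks, per engine, in which `domain` was cited."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _, engine, _, cited in log:
        totals[engine] += 1
        hits[engine] += domain in cited
    return {engine: hits[engine] / totals[engine] for engine in totals}

print(citation_rate(log, "example.com"))
# {'perplexity': 1.0, 'chatgpt-browsing': 0.5}
```

Even a spreadsheet works; the point is a fixed prompt set checked on a fixed cadence, so week-over-week movement is real signal rather than prompt drift.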
Think of it the way SEO worked in 2010: ranking for a long-tail term could happen fast, but ranking for the head term took a year of consistent work. AI search has the same shape, with two different clocks running in parallel.