Tech for Retail 2025 Workshop: From SEO to GEO – Gaining Visibility in the Era of Generative Engines


Understanding AI-Powered Search Engines

Last updated on

1/4/2026


AI-powered search engines in 2026: landscape, challenges, and the link to generative engine optimization

 

AI-powered search engines have changed how your content is discovered, understood, and, crucially, reused within answers. If you are already working on generative engine optimization, this article takes a more specialised angle: mapping the main AI-driven search experiences in 2026 and translating their technical logic into practical GEO decisions. The goal is simple: help you stay visible even when users do not necessarily click through.

 

Why this article complements the generative engine optimization guide

 

The GEO guide sets the strategic framework; here, we go one level deeper into how the engines actually work. You will clarify what distinguishes each environment (Google AI Overviews, ChatGPT Search, Perplexity, Gemini, Claude) and what they truly pull in as sources. You will also take away a method to monitor impact using measurable signals, without jumping to conclusions.

 

What changes in user journeys: from SERP to answers (and the impact on traffic)

 

In traditional search, a search engine crawls, indexes, and ranks pages to present relevant resources as links. That is the model described for conventional engines: web crawling, indexing, relevance algorithms, and results pages (source: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/). With generative search, the interface shifts towards a synthesised answer, often conversational, which reduces the need to browse links.

This shift amplifies a trend that is already huge: zero-click searches. The GEO guide highlights that 60% of searches end without a click and that, when an AI Overview is present, the CTR for position 1 can fall to 2.6%. This is not a minor UX tweak: it is a change in the acquisition model, where being cited and selected as a source becomes a visibility KPI.

 

2026 mapping: AI search engines and AI assistants to watch

 

In 2026, talking about a "new search engine" does not necessarily mean a brand-new web index. Many experiences combine a language model with an information retrieval layer (open web, partners, knowledge bases) to produce answers. The strategic implication is clear: it is no longer only about where you rank, but where you are retrieved, cited, and kept as a reliable source.

 

Google SGE and Google AI Overviews: what SEO for Google SGE and AI Overviews means in practice

 

With Google, the critical factor is not the feature name, but how sources are selected inside overviews. The GEO guide states a structuring fact: 99% of AI Overviews cite pages that are already in the organic top 10. In practical terms, "classic" SEO remains the entry ticket to appear in generated answers.

To go deeper on implications and guardrails (reused formats, click-loss risk, technical priorities), you can rely on the dedicated resources Google SGE SEO and Google AI Overviews SEO. In B2B, the challenge is to secure organic rankings on core funnel queries, then optimise the reusability of segments (definitions, steps, statistics, comparisons) that are most likely to be cited.

 

ChatGPT Search: conversational search and how sourcing works

 

ChatGPT, on the search side, stands out through its experience: a conversation where users chain follow-up questions and the system must keep the thread consistent. Technically, it uses the building blocks of modern AI search (NLP, ML, LLMs) and, depending on the context, retrieves external documents before generating an answer (see the RAG principle explained below; source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). The GEO implication is straightforward: the answer needs to be anchored in passages that are verifiable, not in vague content.

Another often underestimated point is the multiplication of sub-queries. The GEO guide notes that assistants generate many variants and reformulations, so you need to cover angles (definition, criteria, limits, procedures, objections) rather than focusing on a single wording.

 

Perplexity: Perplexity SEO, citations, and discovery

 

Perplexity is often described as a hybrid between a search engine and a conversational agent (source: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/). Its public positioning emphasises sourced, contextualised answers, beyond a simple list of links (source: https://blog.simplebo.fr/moteurs-recherche-ia). For a GEO strategy, this increases the importance of "citable" content: factual claims, clean definitions, sourced figures, and a structure that is easy to extract.

For specific recommendations (citation formats, page types that tend to surface, mistakes to avoid), see Perplexity SEO. Keep in mind a key idea from the GEO guide: generative systems frequently cite sources outside your brand site, including community platforms, which forces you to think about credibility beyond your own domain.

 

Gemini: the research assistant and its role in Google's ecosystem

 

Gemini illustrates Google's push to embed more AI into query understanding and result delivery, continuing a longer history of machine learning in search (for example, RankBrain; source: https://blog.simplebo.fr/moteurs-recherche-ia). For you, the question is not "assistant vs engine" but where Gemini influences decision-making: intent interpretation, answer synthesis, and source selection.

The best defence remains fundamentals done well: topical authority, comprehensive coverage, and the ability to provide self-contained passages. If your content only makes sense as one large block, it is inherently less reusable by systems that extract fragments.

 

Claude: research use cases and structural limits to understand

 

Claude is cited among examples of AI search engines or AI search experiences (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). In practice, the key is understanding structural limits: an answer can be excellent at synthesis yet heavily dependent on access to retrieved sources that are current and well selected. That is exactly the context where your content needs to be verifiable, dated when necessary, and easy to cross-check.

 

How generative search works: understanding RAG, citations, and sources (no myths)

 

An AI-assisted search engine is not a "brain" that knows everything. It combines AI components (NLP, machine learning, LLMs) with search mechanisms over corpora to produce a relevant answer (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Understanding these building blocks helps you prioritise what genuinely increases your odds of being reused.

 

RAG: retrieve, select, then generate

 

RAG (Retrieval-Augmented Generation) follows a simple sequence: retrieve relevant documents first, then generate an answer based on those documents (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). For GEO, this is crucial: you are not only optimising to be found, but to be retrievable as a piece of evidence among a set of sources.

  1. Retrieval: finding relevant passages via classic indexes and/or semantic search (embeddings, vectors) (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine).
  2. Selection: arbitrating between sources (freshness, perceived quality, coherence, redundancy).
  3. Generation: synthesising in natural language, sometimes with citations or links depending on the product.
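The three steps above can be sketched end to end. This is a deliberately toy illustration: bag-of-words term counts stand in for real embeddings, "selection" is just top-k ranking, and the generation step concatenates the retrieved passages where a production system would call an LLM. The corpus, document IDs, and function names are all invented for the example.

```python
import math
import re
from collections import Counter

# Toy corpus standing in for retrieved web documents (illustrative only).
DOCS = {
    "doc-a": "GEO extends SEO by optimising passages for reuse in generated answers.",
    "doc-b": "A comparison table with objective criteria is easy to extract.",
    "doc-c": "Engines favour short, self-contained definitions placed early.",
}

def vectorise(text):
    """Bag-of-words term counts: a crude stand-in for real embeddings."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    na, nb = norm(a), norm(b)
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Steps 1-2, retrieval and selection: rank documents by similarity
    to the query and keep the top k."""
    qv = vectorise(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorise(DOCS[d])), reverse=True)
    return ranked[:k]

def generate(query, doc_ids):
    """Step 3, generation: a real system calls an LLM here; this sketch
    just concatenates the selected passages, keeping citations visible."""
    return " ".join(f"{DOCS[d]} [{d}]" for d in doc_ids)

print(generate("How does GEO extend SEO?",
               retrieve("How does GEO extend SEO?", k=1)))
```

The point for GEO: the passage, not the page, is what flows through retrieval, so a self-contained passage survives this pipeline better than one that only makes sense in context.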

 

Indexes, the open web, and publisher content: what actually feeds answers

 

A traditional search engine indexes the web; an AI-powered engine can build on that legacy whilst adding stronger semantic understanding (sources: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/ and https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Some players claim an independent index: Brave, for example, explains what an index is and says it bases results on its own index, whilst offering an AI answer feature on top of results (source: https://brave.com/fr/search/). This matters: depending on the index and sourcing agreements, your visibility can vary even with identical content.

Do not forget: the GEO guide underlines that citations do not come only from brand-owned sites. A significant share comes from community platforms and media, which means your presence needs to be coherent across public sources that retrieval systems can consume.

 

Citations, sources, and links: formats, common biases, and when sourcing disappears

 

Some products promise "always sourced" answers (Perplexity is presented this way; source: https://blog.simplebo.fr/moteurs-recherche-ia), and Brave states its AI answers always cite the sources used (source: https://brave.com/fr/search/). In reality, generative search remains inconsistent: depending on the query, context, or product policy, you may see links, partial citations, or no explicit sources at all.

  • Selection bias: over-representation of repeatedly cited sources (popularity effects) versus specialised sources.
  • Format bias: a preference for structured content (lists, tables, short definitions) that is easier to extract.
  • No sourcing: possible when a system answers "from memory" or considers the question too broad, increasing the risk of approximation.

 

Reliability: mistakes, "hallucinations", and reduction mechanisms

 

IBM explicitly highlights the risk of "hallucinations": incorrect answers delivered with confidence, caused in particular by outdated or low-quality data (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). This reinforces an operational rule: the more verifiable and cross-checkable a piece of information is, the more likely it is to be reused correctly. Conversely, vague, unsourced content increases the chances of being misread and misquoted.

Engines try to reduce this risk through external document retrieval, selection mechanisms, caching and distributed indexing, and (in some contexts) real-time data integrations via API (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). For publishers, the most effective contribution is to ship content that is clean, dated, maintained, and traceable.

 

Market and usage: adoption, market shares, and signals to watch in 2026

 

Demand is surging, but it cannot be tracked only through the market shares of historic web search engines. In 2026, usage is shifting towards conversational interfaces, sometimes without a traditional results page, and often embedded in work tools. You therefore need to track both acquisition signals (sessions, CTR, queries) and signals of presence within answers.

 

Where to find reliable figures (and how to interpret them without overreacting)

 

To avoid made-up numbers, start with published, dated sources. A concrete example: Backlinko reports 900 million weekly ChatGPT users in 2026, and Imperva estimates that 51% of web traffic in 2024 comes from bots and AI (summary: AI statistics). These figures do not tell you which engine dominates search, but they do confirm one thing: AI is becoming a large-scale gateway to information.

What to avoid: mechanically translating an audience figure into SEO opportunity. Assistant usage can generate fewer clicks but more influence (recommendations, shortlists, decision criteria), especially in B2B.

 

Useful indicators: queries, sessions, click share, and zero-click pressure

 

Useful indicators fall into two families: those that measure traffic, and those that measure zero-click pressure. The GEO guide highlights two benchmarks: 60% of searches end without a click, and a 2.6% CTR for position 1 when an AI Overview is present. In your own data, focus on trends by query type (informational vs comparative) and by page type (guides, definitions, solution pages).

| Signal | What it indicates | Possible decision |
| --- | --- | --- |
| CTR drops whilst rankings stay stable | Increased competition from on-page answers | Strengthen citability and differentiation (evidence, tables, definitions) |
| Impressions rise without clicks rising | Visibility is up, but demand is captured by synthesis | Optimise for mentions and reinforce brand recognition within answers |
| Traffic declines on generic queries | Direct answers are becoming more frequent | Shift effort towards high-intent queries and evidence-led content |

 

Measuring impact on your site: a simple workflow using Google Search Console and Google Analytics

 

Without multiplying tools, you can build a robust workflow with Google Search Console and Google Analytics. Work in cohorts of pages (guides, comparisons, product pages) and track the combined evolution of impressions/CTR/clicks, then the quality of traffic (engagement, conversions). The key is to isolate queries where on-page synthesis captures demand, rather than concluding that overall performance has declined.

  1. In Search Console: identify pages where impressions are up and CTR is down over 28 days vs the previous period.
  2. In Analytics: check whether the click drop actually affects conversions (and which ones).
  3. Prioritise: pages with high business impact plus high exposure to synthesis (informational/comparative).
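Step 1 of this workflow can be roughly automated once you have exported the two periods. In the sketch below, the page paths and figures are made up, and the field names only loosely mirror a Search Console performance export; treat it as a starting point, not the official API schema.

```python
# Hypothetical Search Console data for two 28-day windows
# (page paths and numbers are invented for the example).
current = {
    "/guide-geo": {"impressions": 12000, "clicks": 240},
    "/pricing":   {"impressions": 3000,  "clicks": 150},
}
previous = {
    "/guide-geo": {"impressions": 9000, "clicks": 270},
    "/pricing":   {"impressions": 3100, "clicks": 140},
}

def ctr(row):
    return row["clicks"] / row["impressions"] if row["impressions"] else 0.0

def flag_synthesis_pressure(cur, prev):
    """Pages where impressions rose but CTR fell: candidates for
    demand being captured by on-page synthesised answers."""
    flagged = []
    for page, row in cur.items():
        base = prev.get(page)
        if base and row["impressions"] > base["impressions"] and ctr(row) < ctr(base):
            flagged.append(page)
    return flagged

print(flag_synthesis_pressure(current, previous))  # prints ['/guide-geo']
```

The flagged pages then go through step 2 (conversion check in Analytics) before any prioritisation decision.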

 

Getting referenced in an AI-powered search engine: a GEO method focused on citability

 

GEO does not replace SEO: it includes it and extends it (as the GEO guide explains). The operational difference is the deliverable: you are no longer optimising only a page to earn a click, but passages to be reused as sources. That demands discipline around structure, proof, and updates.

 

What engines reuse most: entities, definitions, data, procedures, and evidence

 

Generative systems favour what they can isolate and reuse without distorting meaning: crisp definitions, lists, tables, steps, dated figures, and verifiable elements. The GEO guide stresses extractability: AI extracts fragments, not whole pages. It also stresses verifiability: factual statements must be cross-checkable.

  • Entities: concepts, standards, categories, acronyms, market players (defined and contextualised).
  • Data: dated figures with an explicit source when public.
  • Procedures: numbered steps with validation criteria.
  • Evidence: examples, excerpts, operational definitions, limitations and conditions.

 

Structuring for reuse: sections, tables, FAQs, structured data, and extractable passages

 

The GEO guide notes that tables and lists are handled especially well because they are easy to extract. It also recommends structured FAQs (using natural questions) and schema.org markup (FAQPage, HowTo, Article, etc.). Your goal is to write sections whose first sentence can stand on its own, and then expand.

| Element | Why it gets reused | Best practice |
| --- | --- | --- |
| One-sentence definition | Self-contained answer, low ambiguity | Place the definition at the start of the section |
| Bullet list | Simple, faithful extraction | One idea per bullet, consistent vocabulary |
| Comparison table | Reusable criteria/value structure | Use objective criteria, avoid marketing claims |
| FAQ | Direct match to user questions | Give a short answer first, then add detail |
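As one concrete instance of the schema.org markup mentioned above, here is a minimal FAQPage block generated as JSON-LD. The question/answer pairs are placeholders; the `@type` and property names follow the public schema.org vocabulary.

```python
import json

# Placeholder question/answer pairs (replace with your real FAQ content).
faq = [
    ("What is GEO?",
     "Generative engine optimisation: making content citable by AI answer engines."),
    ("Does GEO replace SEO?",
     "No. GEO includes SEO and extends it towards reusable, verifiable passages."),
]

# schema.org FAQPage structure: one Question per entry, each with an
# acceptedAnswer of type Answer.
markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faq
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(markup, indent=2))
```

Keep the `text` of each answer identical to the visible short answer on the page: markup that diverges from the rendered content undermines the trust signals it is meant to provide.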

 

Building trust: primary sources, updates, authorship, and traceability

 

AI systems must arbitrate between contradictory, outdated, or incomplete content. IBM notes that data quality and governance reduce errors (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). In practical terms, you increase your chances of being cited by documenting update dates, clearly attributing authorship, and citing public sources whenever you share a statistic.

  • Show a "Last updated" date when information is time-sensitive (market, regulation, offers).
  • Avoid unverifiable superlatives; use criteria and constraints instead.
  • Add references in the form "source: URL" for key external data points (one strong source beats decorative sourcing).

 

Multi-engine strategy: prioritise without duplicating content

 

Each engine has its own response habits, but GEO fundamentals remain stable: strong SEO, extractable content, evidence, and external brand credibility. To avoid duplication, start from a pillar page, then build satellite pages by intent (definition, comparison, implementation, common mistakes), each with a distinct angle and its own examples. This also helps you absorb the flood of sub-queries generated by assistants.

 

Operational integration: moving from diagnosis to execution (SEO + GEO)

 

A common trap is "rebuilding the whole site" without prioritisation. In B2B, manage your pages like a portfolio: pages with business impact, pages exposed to synthesised answers, and pages that can become cited references. The focus is less on producing more content and more on publishing content that is more verifiable.

 

Mapping high-potential pages: which content genuinely deserves to be cited

 

A page deserves GEO effort if it meets at least two of three criteria: strong demand, high business value, and the ability to provide evidence. The GEO guide suggests a useful question: "If an AI had to summarise this topic, which parts of my content would be worth citing?" If you cannot find a clear answer, the page likely needs data, structure, or examples.

  • "Definition + criteria" pages (top of funnel) to capture framing citations.
  • "Comparison + limitations" pages (mid funnel) to influence shortlists.
  • "Method + steps" pages (bottom of funnel) to become a procedural source.

 

Roadmap: quick wins, foundational work, and an update routine

 

A realistic roadmap separates what improves citability quickly from what builds authority over 3 to 6 months. Keep it simple: consolidate 10 to 20 key pages, then industrialise. And maintain an update cycle, because time-sensitive figures become stale fast (a principle covered in Incremys resources on data quality).

  1. Quick wins (2–4 weeks): definitions at the top of sections, tables, FAQs, sourcing.
  2. Foundational work (1–3 months): topical consolidation, internal linking, external evidence, SEO improvements.
  3. Routine (monthly): update figures, verify sources, add and refine objections.

 

Quality control: avoid unverifiable promises and vague content

 

Generative systems do not handle vagueness well: they can turn it into certainty. IBM explicitly mentions the risk of confident errors (hallucinations), so your content should reduce grey areas (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Apply a simple pre-publish checklist.

  • Does every figure have a public source and a year?
  • Does every recommendation include conditions ("it depends on...") when needed?
  • Does every section start with a self-contained sentence that remains accurate if quoted alone?
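The first checklist question can even be partially automated. The sketch below is a rough heuristic, not an editorial rule: it flags sentences that contain a percentage or large number but no year, as candidates for a missing date or source.

```python
import re

def undated_figures(text):
    """Flag sentences containing a percentage or a 4+ digit number
    without a year (20xx) in the same sentence. A crude heuristic:
    use it to surface candidates for manual review, not as a gate."""
    issues = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        has_figure = re.search(r"\b\d+(?:\.\d+)?%|\b\d{4,}\b", sentence)
        has_year = re.search(r"\b20\d{2}\b", sentence)
        if has_figure and not has_year:
            issues.append(sentence.strip())
    return issues

print(undated_figures("CTR for position 1 can fall to 2.6%."))
# prints ['CTR for position 1 can fall to 2.6%.']
```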

 

A quick word on Incremys: managing SEO & GEO visibility without piling up tools

 

 

Audit, prioritisation, production, and reporting, with Google Search Console and Google Analytics API integrations

 

Incremys is an all-in-one SaaS platform designed to structure auditing, prioritisation, and scaled content production, whilst also integrating Google Search Console and Google Analytics via API to centralise measurement. If you want an operational framework to arbitrate between SEO and GEO without spreading yourself too thin, the resources what is GEO, GEO vs SEO, GEO referencing and AI SEO provide practical reference points. And if your thinking is organisational (skills, governance, support), the resource AI agency can help you define roles.

 

FAQ on AI-powered search engines

 

 

What is an AI search engine?

 

An AI-powered search engine combines AI techniques (NLP, machine learning, LLMs) to interpret the intent and context behind a query and deliver a more semantic, personalised output than a simple list of links (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Some systems blend search and conversation to synthesise results from the web (source: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/).

 

How is an AI search engine different from a traditional search engine?

 

A traditional engine crawls, indexes, and ranks pages to present resources as links, relying on relevance algorithms (source: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/). An AI-driven engine places more emphasis on context, intent, and semantics, and may produce conversational answers, sometimes using vector search (embeddings) and synthesis (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine).

 

How does an AI search engine generate answers (how generative search works)?

 

Often, it follows a RAG-like approach: it retrieves relevant documents first, then a language model generates an answer based on those sources (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Answer quality therefore depends on document selection, freshness, and clarity.

 

What sources does an AI search engine use and how does it cite them?

 

Sources can come from the open web, search engine indexes, and sometimes external databases integrated via API for real-time data (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Depending on the product, the answer may include links and citations (Brave says it cites sources in its AI answers; source: https://brave.com/fr/search/), or present a synthesis with sourcing that varies by case.

 

What is the best AI-powered search engine?

 

There is no single "best" engine, because it depends on your use case: general web search, monitoring, source exploration, or fast synthesis. To compare multiple models on the same prompt, one approach is to use public benchmarks such as Chatbot Arena (LMSYS Org), cited as a comparison tool (source: https://sup-ubs.fr/faq/quelle-difference-entre-moteur-de-recherche-et-ia-generative/). In marketing contexts, the best choice is the one whose citation and update behaviour matches your constraints (traceability, freshness, compliance).

 

What are the best AI search engines available in France?

 

In France, you can access several AI-assisted search experiences depending on products and rollout. IBM cites Google (with Gemini), ChatGPT Search, Perplexity, and Claude Search as examples of AI search engines (source: https://www.ibm.com/fr-fr/think/topics/ai-search-engine). Brave Search is also available in French and offers AI answers with claimed citations (source: https://brave.com/fr/search/).

 

How can you measure visibility inside AI answers (not just on Google)?

 

Start by measuring indirect effects via Search Console (impressions, CTR, clicks) and Analytics (traffic quality and conversions), then isolate pages exposed to synthesis-style answers. Next, document your strategic queries and regularly check whether your pages appear as sources or references in answers, using a consistent grid (query, date, engine, citation present/absent, cited URL). What matters is trend tracking, not a single snapshot.
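The monitoring grid described above can be kept as simple structured records and appended to over time. Every field name, engine label, and URL in this sketch is our own illustrative choice, not a standard schema.

```python
import csv
import io
from dataclasses import dataclass, asdict

# One row of the monitoring grid: query, date, engine, citation
# present/absent, cited URL. Field names are a suggestion only.
@dataclass
class CitationCheck:
    date: str        # ISO date of the manual check
    engine: str      # e.g. "perplexity", "ai_overviews" (labels are ours)
    query: str
    cited: bool      # does one of your pages appear as a source?
    cited_url: str   # empty string when cited is False

checks = [
    CitationCheck("2026-01-04", "perplexity", "what is geo", True,
                  "https://example.com/geo"),
    CitationCheck("2026-01-04", "ai_overviews", "what is geo", False, ""),
]

def citation_rate(rows, engine):
    """Share of tracked queries where the engine cited one of your pages."""
    sample = [r for r in rows if r.engine == engine]
    return sum(r.cited for r in sample) / len(sample) if sample else 0.0

# Persist the grid as CSV so trends can be compared run over run.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(checks[0]).keys()))
writer.writeheader()
writer.writerows(asdict(c) for c in checks)
print(f"perplexity citation rate: {citation_rate(checks, 'perplexity'):.0%}")
```

Because answers vary by session and date, the rate for a single day means little; the trend across repeated checks on the same query set is the signal, which is exactly why the grid keeps a date column.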

 

Do AI Overviews reduce organic CTR, and how do you adapt?

 

The GEO guide indicates that when an AI Overview is present, the CTR for the first position can fall to 2.6%, and 60% of searches end without a click. To adapt, increase your likelihood of being cited (extractable passages, data, definitions) and shift part of your effort towards high-intent queries where users still need to compare, decide, or buy.

 

What should B2B teams do for SEO on Google SGE and Google AI Overviews?

 

Priority 1: strengthen your SEO, because the GEO guide reminds us that 99% of AI Overviews cite pages in the organic top 10. Priority 2: make pages easier to synthesise (short sections, tables, FAQs, evidence). For a more detailed framework, refer to Google SGE SEO and Google AI Overviews SEO.

 

What should you do for Perplexity SEO (citations, sources, formats)?

 

Build content that holds up as fragments: definitions, lists, and tables, with each important claim being verifiable. Perplexity is presented as highlighting sourced answers, making the quality and clarity of your passages decisive (source: https://blog.simplebo.fr/moteurs-recherche-ia). For dedicated guidance, see Perplexity SEO.

 

Which GEO optimisations deliver the best impact-to-effort ratio in B2B?

 

  • Add an operational definition at the top of strategic pages.
  • Turn comparison sections into tables with measurable criteria.
  • Add a short, natural FAQ, then implement schema.org FAQPage markup.
  • Update and source statistics (year + source), especially on market pages.

 

How do you avoid cannibalisation between SEO content and content designed for generative engines?

 

Do not duplicate "SEO pages" into "AI pages". Keep one pillar page per topic, then create satellites by intent (definition, method, comparison, mistakes) with a distinct angle and different examples. This expands coverage of assistant-generated sub-queries without repeating the same blocks.

 

Which content formats are most often reused (definitions, lists, comparisons, tables)?

 

The formats most often reused are structured and extractable: short definitions, bullet lists, comparison tables, step-by-step guides, and FAQs. The GEO guide particularly emphasises FAQs and the strong performance of structured content (lists, tables) for easier extraction.

 

How can you reduce the risk of being misquoted or quoted out of context?

 

  • Write self-contained opening sentences that keep their meaning when isolated.
  • Add conditions and limitations when recommendations depend on context.
  • Source figures and date time-sensitive information.
  • Avoid absolute claims unless you can demonstrate them.

To continue with up-to-date, operational guides on SEO and GEO visibility, visit the Incremys Blog.
