
Website Performance Audit: A Reliable Method

Last updated on 19/2/2026


Running a Website Performance Audit: Method, SEO Limits, and Practical Decisions

 

If you are already working on technical SEO, start by revisiting the technical SEO audit to keep your priorities in the right order (crawl, rendering, indexing, architecture) and avoid turning speed into the sole objective. In this complementary guide, we focus on conducting a thorough website performance audit with a pragmatic approach: measure correctly, connect speed to UX and conversions, then decide what to optimise — and what to leave alone.

One key point to keep in mind: performance is often a marginal factor in "pure" SEO. You will encounter websites with very low scores in certain tests (sometimes only a handful of points out of 100) that still rank extremely well, because other signals dominate (relevance, authority, internal linking, search intent). The objective is not to "achieve 100/100", but to remove the friction that genuinely costs you traffic, crawl efficiency, or conversions.

 

Why Performance Can Remain a Marginal Ranking Factor (and When It Becomes Decisive)

 

Google officially incorporated user experience into ranking through several milestones: the Mobilegeddon update (April 2015) increased the weight of mobile-friendliness in mobile search results, and the Page Experience Update, announced in May 2020 and rolled out through to August 2021, introduced UX-related signals, including perceived speed (source: agence-wam.fr).

In practice, however, the SEO impact of performance is most visible "all else being equal": for comparable content quality, a faster site has an advantage (source: Blog du modérateur). This explains two realities that coexist without contradicting each other:

  • Yes, slowness can hurt UX (drop-offs, bounces) and sometimes crawling and rendering, ultimately affecting visibility.
  • No, speed is not a magic SEO lever: Google uses "more than 200 ranking factors" (a figure cited in SEO statistics, source HubSpot 2026), which naturally puts any single signal into perspective.

Performance becomes truly decisive most often in one of these situations:

  • Mobile: mobile accounts for 60% of global web traffic in 2026 (Webnyxt, 2026, via SEO statistics), and users tolerate delays less on mobile devices.
  • Transactional journeys and lead generation: every extra second of friction costs more on pages designed to convert (forms, bookings, shopping baskets).
  • Large-scale websites: templates that are expensive to render (heavy JavaScript, unstable servers) can slow indexing and reduce crawl budget efficiency (logic detailed in the main article).
  • Highly competitive SERPs: where content quality is comparable, UX can make the difference.

 

What Your Audit Must Demonstrate: UX, Accessibility, Conversions, Crawl, Stability, and Priorities

 

A useful website performance audit does not merely aim to push a score upwards: it must demonstrate where the blockers are, on which pages, and what you can realistically gain. A strong deliverable brings together:

  • Metrics (Core Web Vitals, load time, stability, interactivity).
  • Segments (mobile vs desktop, countries, business-critical pages vs secondary pages).
  • Impact (on drop-off, engagement, conversion, or crawl and indexing).

On user behaviour, Google has published widely cited benchmarks: as load time grows from 1 to 3 seconds, the probability of a user leaving the page increases by 32%; the increase reaches 90% at 5 seconds and 123% at 10 seconds (Google research, 2017, via agence-wam.fr). SEO statistics also note that in 2025, between 40% and 53% of users abandon a site if it loads too slowly (Google, 2025), and that an additional 2 seconds of load time can lead to a +103% increase in bounce rate (HubSpot, 2026).

Maintain a diagnostic mindset throughout: optimising without an audit is like treating symptoms without addressing root causes (source: redsen.com). The audit exists precisely to surface those root causes and establish clear priorities.

 

Measuring Website Speed with the Right Data

 

A common trap is mixing incompatible measurements (lab vs field data) or drawing conclusions about an entire site from a handful of URLs. Measurement must be segmented, repeatable, and interpretable within your specific context (device, pages, audiences).

 

Lab Data vs Field Data: Avoiding Misdiagnosis

 

Lab tests (simulations) are excellent for debugging, but they do not always reflect reality. Field data describes what users actually experience, though it aggregates many different contexts (devices, networks, geographies).

PageSpeed Insights can display data from the Chrome User Experience Report (CrUX), based on real-world usage. This data is aggregated monthly, with a delay of a few days (source: agence-wam.fr). Methodologically:

  • Use lab data to isolate a specific issue (render-blocking resources, oversized images, excessive JavaScript) and validate a fix.
  • Use field data to decide whether the issue warrants attention, because it genuinely affects real users on key pages.

This distinction helps you avoid a classic mistake: spending weeks gaining a few points on a low-traffic page while a key commercial template suffers from a recurring issue that is clearly visible in real-world data.
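To make the distinction concrete, here is a minimal sketch of pulling field data from the Chrome UX Report API for a single URL. It assumes you have enabled the CrUX API in a Google Cloud project and hold an API key; the URL and key below are placeholders.

```typescript
// Minimal sketch: querying the CrUX API for field data on one URL.
const CRUX_ENDPOINT =
  "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

async function fetchFieldData(url: string, apiKey: string) {
  const response = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // formFactor isolates mobile, where field data diverges most from lab runs.
    body: JSON.stringify({ url, formFactor: "PHONE" }),
  });
  if (!response.ok) throw new Error(`CrUX query failed: ${response.status}`);
  const data = await response.json();
  // Each metric exposes percentiles.p75, the value Google uses to classify it.
  return data.record?.metrics ?? {};
}

// Usage (placeholder key): fetchFieldData("https://example.com/product", "API_KEY");
```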

 

Using Google Search Console to Target Pages That Drive Traffic

 

PageSpeed Insights works largely on a URL-by-URL basis, which quickly becomes impractical for websites with thousands of pages (source: agence-wam.fr). To scale effectively, Google Search Console provides a macro-level view by grouping URLs as "fast, slow, or needs improvement" and highlighting which Core Web Vitals metric is failing (same source). This allows you to work by template rather than page by page.

In a website performance audit, focus particularly on:

  • underperforming URL groups that correspond to high-traffic pages;
  • sudden drops (regressions following a release);
  • issues concentrated on mobile.

An organisational tip: for each recommendation, document the affected pages, the problematic resources, their file size, and an estimate of potential time savings (a reporting approach recommended by agence-wam.fr).
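To illustrate that reporting format, the sketch below shows one possible shape for such a record. The field names and example values are hypothetical, offered only as a starting point.

```typescript
// Hypothetical structure for one audit recommendation: affected pages,
// problematic resource, its weight, and the estimated potential saving.
interface PerfRecommendation {
  template: string;            // e.g. "product page"
  affectedUrls: string[];      // URLs sharing the issue
  resource: string;            // the problematic asset or script
  resourceSizeKb: number;      // current file size
  estimatedSavingsMs: number;  // potential load-time gain
  failingMetric: "LCP" | "FID" | "CLS";
}

// Illustrative values only, not measured data.
const example: PerfRecommendation = {
  template: "product page",
  affectedUrls: ["/products/a", "/products/b"],
  resource: "/assets/hero-banner.jpg",
  resourceSizeKb: 480,
  estimatedSavingsMs: 600,
  failingMetric: "LCP",
};
```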

 

Linking Performance to Conversions in Google Analytics, Without Attribution Bias

 

To connect speed to business outcomes, Google Analytics (GA4) helps you verify whether slowness actually corresponds to lower engagement or conversion. The important thing is to avoid drawing conclusions too hastily:

  • A drop in conversions may stem from an offer change, seasonality, less qualified traffic, or a tracking modification.
  • A speed improvement may have no measurable effect if it does not touch the pages that matter within the user journey.

Recommended approach: segment by page type (lead-gen landing pages, product pages, articles), by device, and compare "before and after" periods while controlling for concurrent changes (content, campaigns, tracking). This reflects the logic set out in the main article: connect each technical workstream to an expected, measurable outcome rather than an abstract score.
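One way to run that segmented comparison programmatically is the GA4 Data API, sketched below with the official @google-analytics/data Node client. The property ID, dimensions, and metrics are assumptions to adapt to your own setup, not a prescribed configuration.

```typescript
import { BetaAnalyticsDataClient } from "@google-analytics/data";

const client = new BetaAnalyticsDataClient();

// Conversions and engagement per landing page and device for one period.
async function conversionsByDevice(propertyId: string, start: string, end: string) {
  const [report] = await client.runReport({
    property: `properties/${propertyId}`,
    dateRanges: [{ startDate: start, endDate: end }],
    dimensions: [{ name: "landingPage" }, { name: "deviceCategory" }],
    metrics: [{ name: "conversions" }, { name: "engagementRate" }],
  });
  return report.rows ?? [];
}

// Compare identical segments before and after a release (placeholder dates):
// await conversionsByDevice("123456", "2026-01-01", "2026-01-31");
```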

 

Understanding Core Web Vitals and How to Interpret Their Thresholds

 

Core Web Vitals are the reference trio for auditing perceived performance: LCP (loading speed), FID (interactivity), and CLS (visual stability) (source: agence-wam.fr). Note that Google has since replaced FID with INP (Interaction to Next Paint) as its official responsiveness metric; the reasoning below applies to both. Frequently cited benchmarks (Google thresholds referenced by Blog du modérateur) include: LCP < 2.5s, FID ≤ 100ms, CLS < 0.1.

These thresholds are not a universal pass mark: they help you classify, prioritise, and track progress. The audit's role is to understand why a template fails and on which pages that failure actually costs you something.
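As a small working aid, the helper below classifies a measurement against the thresholds cited above, together with Google's published "poor" boundaries (LCP > 4s, FID > 300ms, CLS > 0.25). It is a triage tool, not a pass-or-fail verdict.

```typescript
type Rating = "good" | "needs-improvement" | "poor";

// Classify a metric value against its "good" and "poor" boundaries.
function rate(value: number, good: number, poor: number): Rating {
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

const rateLcp = (seconds: number) => rate(seconds, 2.5, 4.0);
const rateFid = (ms: number) => rate(ms, 100, 300);
const rateCls = (score: number) => rate(score, 0.1, 0.25);

// rateLcp(3.1) -> "needs-improvement": triage by template rather than panic.
```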

 

LCP: Identifying What Delays Perceived Loading and Prioritising Fixes

 

LCP marks the moment the largest visible element (often a hero image, a block of text, or a banner) is rendered. An LCP that "needs improvement" is typically the symptom of a chain of delays: a slow server response, heavy resources, render-blocking CSS, or JavaScript that delays the initial paint.

What the audit should produce: the typical LCP element per template (category page, product page, article), its associated resource, and a reduction plan. One cited quick win is a potential 0.6s gain when images are correctly sized (source: agence-wam.fr).
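In the browser, the LCP element can be identified directly with a PerformanceObserver, which is one way to name the resource per template. A minimal sketch, run in the page console or a small audit script:

```typescript
// Log the final LCP candidate: its element and, if any, the resource URL.
const lcpObserver = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lcp = entries[entries.length - 1] as any; // LargestContentfulPaint entry
  console.log("LCP element:", lcp.element?.tagName, lcp.url || "(text block)");
  console.log("LCP time (ms):", Math.round(lcp.startTime));
});
lcpObserver.observe({ type: "largest-contentful-paint", buffered: true });
```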

 

CLS: Stabilising Layout to Reduce Friction and Mis-clicks

 

CLS measures layout shifts that occur during loading. A high CLS causes missed clicks, frustrating form interactions, and a general impression of an unstable site. In your audit, look for recurring patterns:

  • images without reserved dimensions;
  • blocks appearing late in the load sequence (banners, cookie notices, widgets);
  • web fonts that cause size changes when they finish loading.

The benefit here is often more UX-related than SEO-related: you reduce mis-clicks and cognitive load, which is more likely to lift conversions than to trigger a sudden jump in rankings.
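To locate those patterns on a live page, layout shifts and the elements that caused them can be logged with a PerformanceObserver. A minimal browser-side sketch:

```typescript
let clsScore = 0;
const clsObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as any[]) {
    // Shifts within 500ms of user input are excluded from CLS by design.
    if (entry.hadRecentInput) continue;
    clsScore += entry.value;
    for (const source of entry.sources ?? []) {
      console.log("Shifted element:", source.node, "shift value:", entry.value);
    }
  }
  console.log("Cumulative layout shift so far:", clsScore.toFixed(3));
});
clsObserver.observe({ type: "layout-shift", buffered: true });
```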

 

FID: Reading Responsiveness Without Over-interpreting Contextual Variation

 

FID measures the delay between a user's first interaction (a click or tap) and the moment the browser can actually begin processing it. It can vary considerably depending on device and context (entry-level mobile handsets, CPU saturation, third-party scripts, etc.). In an audit, avoid over-interpreting minor variation between tests.

The right question is not "are we at 98ms or 105ms?" but rather: which scripts or tasks are blocking the main thread on the pages that matter? The answer usually lies in JavaScript resource analysis, third-party tags, and components that run heavy code on load.
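The Long Tasks API is one way to surface that main-thread blocking directly, which usually explains poor responsiveness better than the raw FID number. A minimal browser-side sketch:

```typescript
// Log tasks over 50ms, which block the main thread and delay input handling.
const longTaskObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const culprit = (entry as any).attribution?.[0]?.name ?? "unattributed";
    console.log(`Long task: ${Math.round(entry.duration)}ms (${culprit})`);
  }
});
longTaskObserver.observe({ type: "longtask", buffered: true });
```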

 

What Core Web Vitals Do Not Cover (and What to Add)

 

Core Web Vitals do not cover everything that constitutes good performance from a business and technical standpoint. In a website performance audit, add at minimum:

  • Server stability (errors, latency spikes, uptime);
  • Page weight (images, scripts, fonts, tags);
  • UX friction (user journeys, accessibility, readability), because a fast but confusing site still converts poorly;
  • Tracking quality (heavy analytics scripts can degrade the experience, but removing them can break measurement).

This aligns with a broader audit principle: performance is never a single lever; it interacts with technical foundations, content, UX, and security (source: redsen.com).

 

Auditing the Root Causes of Long Load Times

 

Once measurement is properly scoped, the audit must trace issues back to their root causes. The most common slowdowns typically sit across four areas: rendering (front-end), media, server and network, and templates (design-related technical debt).

 

Critical Rendering Path: CSS, JavaScript, and Resource Prioritisation

 

A page can be light on content yet still slow if the browser is waiting on critical resources or executing excessive JavaScript. In your findings, pay particular attention to:

  • resources that block rendering (such as those flagged under "eliminate render-blocking resources") (source: agence-wam.fr);
  • bulky CSS files loaded globally even though they only apply to a single template;
  • scripts executed on load rather than deferred.

Prioritisation advice: do not tackle JavaScript "in general". Target the code affecting high-stakes templates (landing pages, product pages, category pages) and your mobile users specifically.
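As one illustration of deferring a non-critical script, the sketch below loads it once the browser is idle instead of during the initial render. The widget URL is a placeholder, and whether a given script is safe to defer remains a case-by-case judgement.

```typescript
// Load a non-critical script after the page is idle, with a timeout fallback.
function loadWhenIdle(src: string): void {
  const load = () => {
    const script = document.createElement("script");
    script.src = src; // dynamically injected scripts do not block parsing
    document.head.appendChild(script);
  };
  if ("requestIdleCallback" in window) {
    (window as any).requestIdleCallback(load);
  } else {
    setTimeout(load, 2000); // fallback for browsers without requestIdleCallback
  }
}

loadWhenIdle("/assets/chat-widget.js"); // hypothetical non-critical widget
```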

 

Images and Media: Formats, Dimensions, Lazy Loading, and Useful File Weight

 

Images remain one of the most common causes of sluggish load times. One practical warning sign cited is overly heavy images, "typically above 100KB" (source: redsen.com). Your audit should distinguish between:

  • "visually essential" images (hero shots, product images) that must be optimised first;
  • decorative or comfort images that can be deferred or heavily compressed.

Beyond file size, also address mismatches between displayed dimensions and actual image dimensions. The cited example of a potential 0.6s gain from correctly sizing images illustrates the value of this workstream (source: agence-wam.fr).
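That dimension mismatch can be spotted in bulk with a small browser-side check, sketched below: it flags images whose intrinsic width far exceeds their displayed width, plus images with no reserved dimensions (a classic CLS cause). The threshold is arbitrary and ignores device pixel ratio for simplicity.

```typescript
function auditImageSizing(threshold = 2): void {
  for (const img of Array.from(document.images)) {
    if (!img.complete || img.naturalWidth === 0) continue;
    const wasteRatio = img.naturalWidth / Math.max(img.clientWidth, 1);
    if (wasteRatio > threshold) {
      console.warn(
        `Oversized: ${img.currentSrc} is ${img.naturalWidth}px wide, ` +
          `displayed at ${img.clientWidth}px`
      );
    }
    if (!img.getAttribute("width") || !img.getAttribute("height")) {
      console.warn(`No reserved dimensions: ${img.currentSrc}`);
    }
  }
}

auditImageSizing();
```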

 

Server and Network: TTFB, Caching, Compression, CDN, and Stability

 

Strong front-end performance cannot compensate for a slow or unstable server. Common causes cited in performance audits include slow server response times, heavy images, and unoptimised code, with levers such as caching, compression, reducing the number of requests, and minification (source: Blog du modérateur).

In an audit, do not rely solely on averages: look for spikes during busy periods, on specific pages, or at dynamic endpoints, as well as instability that creates an inconsistent user experience. From an SEO standpoint, such instability can also make certain pages expensive for crawlers to process (logic referenced in the main article).
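On the measurement side, the Navigation Timing API separates server latency from front-end rendering time. A minimal browser-side sketch:

```typescript
// Read TTFB and connection costs from the navigation entry.
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];
if (nav) {
  // responseStart = time from navigation start to the first response byte.
  console.log(`TTFB: ${Math.round(nav.responseStart)}ms`);
  console.log(`DNS: ${Math.round(nav.domainLookupEnd - nav.domainLookupStart)}ms`);
  console.log(`Connect+TLS: ${Math.round(nav.connectEnd - nav.connectStart)}ms`);
  // A high TTFB points at the server/network side (caching, CDN, backend),
  // not at images or JavaScript.
}
```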

 

Templates and Front-End Debt: Spotting Pages That Are Heavy by Design

 

Some templates are structurally heavy: stacked components, sliders, third-party tags, "related products" modules, A/B testing scripts, and so on. The audit must isolate these templates and then answer a governance question: what is genuinely essential to the business, and what is simply accumulated historical debt?

This is critical to avoid score chasing: if a page converts well, an aggressive optimisation that degrades content, indexing, or tracking may cost more than it delivers.

 

Interpreting PageSpeed Insights Without Falling into Score-Chasing

 

PageSpeed Insights is useful for generating signals and ideas, but a website performance audit must remain decision-led. A score is not a business KPI. It can serve as a helpful internal benchmark, but it does not replace field data (CrUX) or analysis through GSC and GA4.

 

Why Scores Fluctuate (and What That Means for Your Decisions)

 

Scores fluctuate because test conditions change (simulation environment, network conditions, resource variability) and because pages evolve (content updates, new tags, script changes). The practical takeaway: never validate a workstream based on a single one-off measurement.

Prefer structured comparisons: the same URLs, the same segments (mobile and desktop), the same time periods, and above all the same objectives (reducing drop-off, improving a critical template, stabilising the experience).

 

Turning Recommendations into Actions: Quick Wins vs Structural Work

 

A website performance audit should produce an actionable list that includes, ideally, the affected pages, the implicated resources, their file size, and an estimated potential gain (a reporting approach described by agence-wam.fr).

Examples of typical quick wins with a high likelihood of ROI:

  • fixing image sizing (the potential 0.6s gain cited above);
  • reserving space for images and embeds to reduce CLS;
  • removing unused third-party scripts from key pages.

Examples of structural work: reworking a heavy template, rationalising CSS and JavaScript, or addressing server configuration. These require strict prioritisation, as both effort and risk are considerably higher.

 

Improving PageSpeed Without Hurting SEO: Content, Indexing, and Tracking

 

Optimising purely for the tool can damage your results if you remove what creates SEO and business value. Three simple safeguards:

  • Content: do not delete blocks that serve search intent (social proof, demonstrations, FAQs) just to gain a few extra points.
  • Indexing: avoid changes that alter how important content is rendered for Google (JavaScript rendering issues, poorly managed deferred loading).
  • Tracking: lighten and rationalise your tag setup, but do not break the events you need to measure ROI.

On that last point, bear in mind that performance assessment also feeds into broader strategic evaluation. For example, comparing the relative effectiveness of organic and paid channels can draw on macro benchmarks from SEA statistics (post-click engagement, bounce rates, etc.) — without conflating marketing performance with technical performance.

 

Prioritising Optimisations: ROI, Risk, and SEO Speed

 

Prioritisation is what prevents an audit from becoming overwhelming: rather than stacking 200 recommendations, you select the ten decisions that genuinely matter. This aligns with the "impact × effort × risk" approach highlighted in the main article.

 

Selecting Which Pages to Optimise: Traffic, Conversion, Seasonality, and Business Criticality

 

Do not start with the slowest pages. Start with the pages where slowness has a measurable cost. A simple framework:

  • Pages driving SEO traffic (acquisition);
  • Pages driving conversion (leads, demos, purchases);
  • Seasonal pages (where delays are particularly costly);
  • Templates (one template fix often outweighs twenty page-level micro-corrections).

A useful SEO reminder: traffic gaps between positions are enormous. For example, position 1 on desktop generates around 34% CTR, whilst page 2 delivers just 0.78% (figures cited in SEO statistics via SEO.com 2026 and Ahrefs 2025). Anything preventing your strategic pages from reaching or holding page one carries a significant opportunity cost — but that encompasses far more than speed alone.

 

Balancing Effort Against Gain: A Simple, Actionable Matrix

 

Apply an "impact × effort × risk" matrix by batches (templates, directories, business segments) rather than isolated URLs. In practice:

  • Impact: the expected effect on measurable UX (drop-off, engagement), crawl and rendering efficiency, or conversion rates.
  • Effort: development complexity, dependencies (release cycles, QA), and coordination across teams (front-end, back-end, product).
  • Risk: the likelihood of regression (tracking breakage, SEO rendering issues, mobile bugs).

This framework protects you from a common bias: prioritising what is easy to highlight in a report rather than what actually changes business outcomes.
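If it helps to make the matrix operational, the sketch below turns it into a rough backlog ordering. The 1-to-5 ratings and the scoring formula are illustrative assumptions, not a standard; the real value lies in agreeing the ratings across teams.

```typescript
interface Workstream {
  name: string;
  impact: number; // 1 (minor) to 5 (major expected effect)
  effort: number; // 1 (trivial) to 5 (multi-team project)
  risk: number;   // 1 (safe) to 5 (likely regressions)
}

// Favour high impact, penalise effort and risk; the formula is illustrative.
const priorityScore = (w: Workstream): number => w.impact / (w.effort + w.risk);

const backlog: Workstream[] = [
  { name: "Resize product-page hero images", impact: 4, effort: 1, risk: 1 },
  { name: "Rework category-template JavaScript", impact: 5, effort: 5, risk: 4 },
];

backlog
  .sort((a, b) => priorityScore(b) - priorityScore(a))
  .forEach((w) => console.log(w.name, priorityScore(w).toFixed(2)));
```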

 

Validating Impact: The Before-and-After Method and Side-Effect Control

 

Validating an optimisation means measuring before and after and confirming you have not introduced new problems. In practice:

  • monitor Core Web Vitals on the targeted pages, using field data where possible;
  • in GSC, track the relevant URL groups and observe how signals evolve;
  • in GA4, compare engagement and conversion for the same population (device, source, pages).

Most importantly: if UX improves but SEO or conversions decline, investigate side effects first (content rendering, indexing, tags, cookie consent) rather than concluding that speed "makes no difference".

 

Can You Rank Well with a Slow Website? Typical Scenarios and Decision Criteria

 

Yes, you can. That is precisely why a website performance audit should avoid blanket statements such as "you must score 90 or above on mobile". The real question is: "Does our slowness cost us something measurable on our strategic pages?"

 

When a Slow Website Can Still Compete: Intent, Content, Authority, and SERPs

 

A site can perform well in SEO despite average speed if:

  • it answers search intent better than competitors (content depth, structure, social proof);
  • it benefits from strong domain authority and effective internal linking;
  • the SERP rewards other signals more heavily (high topical expertise, brand recognition, etc.).

In these cases, performance is often an amplifier rather than the primary driver. This is consistent with a "systems" view of a website: visibility, technical foundations, content, UX, and security all interact (source: redsen.com).

 

When Speed Becomes a Blocker: Mobile, Transactional Journeys, and High Competition

 

Slowness becomes a genuine blocker when it pushes users out of the journey before they reach value: drop-offs, frustration, mis-clicks caused by CLS, abandoned forms, and so on. Behavioural benchmarks are unambiguous: beyond 3 seconds, abandonment rates rise sharply (Google data cited via agence-wam.fr), and SEO statistics highlight the scale of bounce and abandonment effects.

In SEO terms, speed can also become a blocker if pages are too expensive to render, slowing indexing or recrawlability on large sites. In a context where Google makes between 500 and 600 algorithm updates per year (SEO.com, 2026 via SEO statistics), technical robustness and the ability to improve without causing regressions often matter more than any single point-in-time score.

 

Integrating Performance into a Broader SEO Audit Without Cannibalising the Technical Audit

 

Performance should not dominate the technical audit. It sits within a broader approach that also covers indexability, architecture, canonicalisation, internal linking, mobile compatibility, and more. For overall context, you can consult the SEO audit, then return to performance as a specialised workstream within that wider framework.

 

Expected Deliverables: Evidence, Priorities, Backlog, and Acceptance Criteria

 

An execution-led audit deliverable should include:

  • Evidence: affected pages, metrics, segments, and useful screenshots or data exports.
  • Priorities: what you address now, what can wait, and what to deprioritise entirely.
  • A backlog: tasks written so they can be estimated, assigned, and tracked.
  • Acceptance criteria: how to validate success (target metric, pages tested, no SEO or tracking regression).

The action-based reporting approach — listing pages, resources, file sizes, and potential savings — is particularly effective for aligning marketing, product, and engineering teams (source: agence-wam.fr).

 

Connecting Performance, Crawl, and Indexing: Useful Relationships and Common Mistakes

 

Common mistakes arise from siloed thinking. A few connections that can save weeks of wasted effort:

  • Slow but non-strategic pages: do not invest resources until you have demonstrated a measurable impact on traffic, conversion, or crawl.
  • Strategic pages that are poorly crawled: performance improvements alone will not resolve discovery issues if internal linking or site architecture is the real blocker (framing detailed in the main article).
  • Optimisations that break rendering: deferring resources too aggressively can reduce the content that is genuinely visible to both Google and users.
  • Overloaded tracking: too many third-party tags can hurt performance, but removing them without a measurement plan prevents you from assessing ROI (connect this to organic versus paid channel management via SEA statistics).

Finally, with the growing prevalence of zero-click journeys and AI-generated answer interfaces, many challenges are shifting towards converting and reassuring users once they actually land on your site. Benchmarks on these developments (AI Overviews, zero-click behaviour, declining CTR) are summarised in GEO statistics. This does not make speed the central concern — but it does reinforce the importance of maximising the value of every visit you earn.

 

Scaling Ongoing Monitoring with Incremys (Practical Use)

 

 

Centralising GSC and GA via API, Documenting Actions, and Tracking Impact on Key Pages

 

Without multiplying tools, the priority is to centralise and monitor consistently. Incremys integrates Google Search Console and Google Analytics via API, making it straightforward to consolidate signals (performance, traffic, conversions) and document actions over time. In practice, this supports a genuine "SEO 360°" approach — not just a focus on speed — through the SEO 360° Audit module: you prioritise, execute, then verify impact on the pages that matter, rather than chasing an abstract score.

 

FAQ: Website Performance Audits

 

 

How Do I Analyse a Website's Performance Step by Step?

 

1) Define the scope (business-critical pages, mobile and desktop). 2) Measure with both lab and field data (CrUX if available). 3) Group findings by templates and segments. 4) Identify root causes (rendering, media, server, third-party scripts). 5) Prioritise using an impact × effort × risk matrix. 6) Validate before and after in GSC and GA4, whilst monitoring for side effects.

 

What Are Core Web Vitals for, and How Should I Interpret Them?

 

They help qualify perceived performance across three dimensions: loading speed, interactivity, and visual stability. Interpret them by segment (particularly mobile) and by template, and use thresholds as guides (LCP < 2.5s, FID ≤ 100ms, CLS < 0.1, per benchmarks cited by Blog du modérateur) rather than treating them as the sole objective.

 

Does Performance Influence SEO, or Is It Still a Marginal Factor?

 

Both can be true depending on context. Google incorporates page experience signals (including speed) into its ranking systems, but the impact is most noticeable "at equal quality". Because rankings depend on many factors (more than 200 cited in SEO statistics), performance often remains marginal — unless it significantly degrades UX, rendering, or the crawlability of strategic pages.

 

Can a Slow Site Still Rank Well on Google?

 

Yes. Highly relevant content, strong authority, and sound architecture can offset average performance. However, if slowness drives users away (the 3, 5, and 10-second abandonment figures from Google, cited via agence-wam.fr) or makes templates too expensive for crawlers to process, it becomes a genuine constraint on growth.

 

How Can I Improve My PageSpeed Score Without Optimising Purely for the Tool?

 

Start with the templates that drive traffic and conversions, then address root causes (overweight images, render-blocking resources, third-party scripts). Validate your changes in field data and in GA4 (engagement and conversion metrics). The score often improves as a result — but it is not the final judge of success.

 

What Signals Indicate a CLS Problem, and How Do I Fix It?

 

Visible layout shifts during loading, missed clicks, buttons that move, and unstable form fields are all tell-tale signs. Fix them by reserving space for images and embeds, stabilising banners (cookie consent notices, promotional messages), and avoiding the late insertion of content blocks above the fold.

 

High LCP: What Are the Most Common Causes, and What Should I Tackle First?

 

Common causes include an overweight hero image, a slow server response, render-blocking CSS or JavaScript, and JavaScript that delays the initial paint. Address what affects your most strategically important template first. Image optimisation (correct sizing and formatting) can deliver quick gains, as illustrated by the 0.6s example cited by agence-wam.fr.

 

FID: What Kind of Degradation Should I Watch for, and How Do I Explain It?

 

Watch particularly for clear degradation on mobile devices and on entry pages (landing pages). It is most often explained by excessive JavaScript executing on load, third-party tags, or components blocking the main thread. The key is to correlate degradation with specific changes (a new tag, a new module) rather than attributing it to an isolated fluctuation.

 

Which Pages Should I Audit First When Time Is Limited?

 

Start with conversion-driving pages (lead generation, purchases) and high-traffic SEO pages. Then work by template using URL groups in Google Search Console, rather than testing hundreds of individual URLs one by one.

 

How Often Should I Run a Website Performance Audit, and How Do I Avoid Regressions?

 

Run an audit after every significant change (a redesign, the addition of new tags, or a new template rollout) and establish regular monitoring on a small set of key pages. A post-release check routine is often the most effective approach, as performance management is a continuous process rather than a one-off task (a principle referenced by Blog du modérateur).

 

How Do I Connect Performance, Conversions, and B2B Lead Generation?

 

In GA4, segment your conversion pages (contact forms, booking pages), then compare engagement and conversion rates before and after optimisation, broken down by device. Cross-reference with GSC to confirm that the pages you optimised are the ones contributing to qualified organic traffic.

 

What Should I Do When PageSpeed Insights and Field Data Contradict Each Other?

 

Treat PageSpeed Insights as a lab diagnostic and CrUX (if available) as aggregated real-user data. If lab results are poor but field data is good, the issue may be contextual or infrequent. If field data is poor, prioritise action even if lab results fluctuate. In all cases, base your decisions on strategic pages and measured impact (GSC, GA4).

For more practical resources on SEO, GEO, and digital marketing, visit the Incremys blog.
